<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>Repository Collection: null</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/15653</link>
    <description />
    <pubDate>Sat, 04 Apr 2026 11:17:40 GMT</pubDate>
    <dc:date>2026-04-04T11:17:40Z</dc:date>
    <item>
      <title>Noise-Resilient Masked Face Detection Using Quantized DnCNN and YOLO</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/59379</link>
      <description>Title: Noise-Resilient Masked Face Detection Using Quantized DnCNN and YOLO
Author(s): Choi, Rockhyun; Lee, Hyunki; Kim, Bong-Seok; Kim, Sangdong; Kim, Min Young
Abstract: This study presents a noise-resilient masked-face detection framework optimized for the NVIDIA Jetson AGX Orin, which improves detection precision by approximately 30% under severe Gaussian noise (variance 0.10) while reducing denoising latency by over 42% and increasing end-to-end throughput by more than 30%. The proposed system integrates a lightweight DnCNN-based denoising stage with the YOLOv11 detector, employing Quantize-Dequantize (QDQ)-based INT8 post-training quantization and a parallel CPU-GPU execution pipeline to maximize edge efficiency. The experimental results demonstrate that denoising preprocessing substantially restores detection accuracy under low signal quality. Furthermore, comparative evaluations confirm that 8-bit quantization achieves a favorable accuracy-efficiency trade-off with only minor precision degradation relative to 16-bit inference, proving the framework&apos;s robustness and practicality for real-time, resource-constrained edge AI applications.</description>
      <pubDate>Sun, 30 Nov 2025 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/59379</guid>
      <dc:date>2025-11-30T15:00:00Z</dc:date>
    </item>
    <item>
      <title>Planar Marker Recognition based AMR Localization and Docking Method for Multi Robot Cooperation in Indoor Factory Construction</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/59378</link>
      <description>Title: Planar Marker Recognition based AMR Localization and Docking Method for Multi Robot Cooperation in Indoor Factory Construction
Author(s): Lee, Seung Jun; Lee, Hyunki; Kim, Min Young
Abstract: Accurate position estimation is critical for the reliable navigation of autonomous mobile robots (AMRs) in indoor construction environments. Conventional methods, including landmark detection using rangefinders, vision-based optical sensors, and global positioning system (GPS)-based localization, encounter limitations in large, clutter-free indoor industrial spaces. Similarly, light detection and ranging (LiDAR)-based simultaneous localization and mapping (SLAM) requires prior map construction, which is time-consuming and impractical for dynamic construction sites. To address these challenges, this article proposes a planar marker-based position estimation system that enables immediate deployment without pre-mapping, optimized for multi-robot collaboration in indoor construction environments. The proposed system employs 3-D marker recognition with minimal setup, using markers placed on both the robot and a designated home position. Inter-robot communication enables relative position estimation and coordinate sharing, while accumulated odometry errors are periodically reset using the home marker to minimize positional drift. Experimental validation demonstrates position errors below 100 mm over a 20-m travel distance, with standard deviations of ±2.5 and ±2.0 mm in the X- and Y-axes, respectively, and an angular error of ±0.1° during docking. These results confirm that the proposed method achieves accurate trajectory tracking and rapid environmental adaptability, significantly enhancing the efficiency and robustness of collaborative AMR operations in large-scale indoor construction sites.</description>
      <pubDate>Wed, 31 Dec 2025 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/59378</guid>
      <dc:date>2025-12-31T15:00:00Z</dc:date>
    </item>
    <item>
      <title>A Study on Core Technologies for Patient Monitoring toward Building Smart Operating Rooms</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/58500</link>
      <description>Title: A Study on Core Technologies for Patient Monitoring toward Building Smart Operating Rooms
Author(s): 구본근; 윤현수; 이현기; 장민강; 하호건; 구교권; 김현지
Abstract: Over the past decade, the number of surgeries performed in South Korea has steadily increased. Along with this rise, disputes stemming from medical accidents have also grown, bringing smart operating rooms into the spotlight as a potential solution. This paper proposes a prevention system targeting the occurrence of pressure ulcers, one of the accidents that may arise in operating rooms. Since contact pressure is the primary cause of pressure ulcers, our study presents two frameworks for detecting contact pressure. First, we utilize pressure-sensing pads to obtain the patient's condition and pinpoint the areas under pressure using super-resolution technology. Second, we propose a method to monitor the patient's condition through photoplethysmography (PPG) and detect areas experiencing abnormally high pressure using an artificial intelligence algorithm that leverages fine-tuning and data augmentation techniques. The proposed frameworks achieved a 40 dB improvement relative to the input and an accuracy of 96%, respectively, demonstrating the potential to enhance patient monitoring by integrating artificial intelligence into smart operating rooms.</description>
      <pubDate>Mon, 31 Mar 2025 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/58500</guid>
      <dc:date>2025-03-31T15:00:00Z</dc:date>
    </item>
    <item>
      <title>Video domain adaptation for semantic segmentation using perceptual consistency matching</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/57336</link>
      <description>Title: Video domain adaptation for semantic segmentation using perceptual consistency matching
Author(s): Ullah, Ihsan; An, Sion; Kang, Myeongkyun; Chikontwe, Philip; Lee, HyunKi; Choi, Jinwoo; Park, Sang Hyun
Abstract: Unsupervised domain adaptation (UDA) aims to transfer knowledge from previous, related labeled datasets (sources) to a new unlabeled dataset (target). Despite impressive performance, existing approaches have largely focused on image-based UDA, while video-based UDA remains relatively understudied due to the difficulty of adapting diverse modal video features and modeling temporal associations efficiently. To address this, existing studies use optical flow to capture motion cues between in-domain consecutive frames, but this is limited by heavy compute requirements, and modeling flow patterns across diverse domains is equally challenging. In this work, we propose an adversarial domain adaptation approach for video semantic segmentation that aims to align temporally associated pixels in successive source and target domain frames without relying on optical flow. Specifically, we introduce a Perceptual Consistency Matching (PCM) strategy that leverages perceptual similarity to identify pixels with high correlation across consecutive frames and infers that such pixels should correspond to the same class. We can therefore enhance prediction accuracy for video UDA by enforcing consistency not only between in-domain frames but also across domains using PCM objectives during model training. Extensive experiments on public datasets show the benefit of our approach over existing state-of-the-art UDA methods. Our approach not only addresses a crucial task in video domain adaptation but also offers notable improvements in performance with faster inference times. © 2024 Elsevier Ltd</description>
      <pubDate>Thu, 31 Oct 2024 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/57336</guid>
      <dc:date>2024-10-31T15:00:00Z</dc:date>
    </item>
  </channel>
</rss>

