<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/167">
    <title>Repository Collection</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/167</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/59875" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/57429" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/57286" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/57166" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-04T12:38:10Z</dc:date>
  </channel>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/59875">
    <title>Principles of Hand-Eye Calibration and Its Application to Medical Augmented Reality</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/59875</link>
    <description>Title: Principles of Hand-Eye Calibration and Its Application to Medical Augmented Reality
Author(s): Lee, Seongpung; Hong, Jaesung
Abstract: To implement accurate AR navigation, real-time AR-core transformation from the coordinate system of the camera to that of a virtual object is necessary. If feature points in the camera image and virtual object are detected robustly and paired suitably, direct calculation of the AR-core transformation is feasible for real-time update. However, in endoscopic or microscopic surgery, determining the AR-core transformation requires feature points within the patient’s body. It is challenging and time-consuming to determine robust feature points owing to constraints such as narrow working space, featureless organ structures, and organ deformation. Consequently, direct calculation can yield unreliable results. Thus, a position sensor has been commonly utilized for updating the AR-core transformation indirectly in real time given its accuracy, measurement speed, and reliability. Optical trackers are widely used when implementing AR for surgical applications to compensate for camera or patient movement in real time. A rigid transformation between the image and camera frames is necessary and should be updated for real-time AR. For this purpose, two preoperative steps are essential: image-to-patient registration, which represents the transformation between the image and patient frames, and hand–eye calibration, which represents the transformation between the camera frame and the frame of an optical tracker marker attached to the camera. We have found an optimal solution that acquires these two transformations in one step, achieving high accuracy in the AR display.</description>
    <dc:date>2025-10-31T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/57429">
    <title>Flexible endoscope manipulating robot using quad-roller friction mechanism</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/57429</link>
    <description>Title: Flexible endoscope manipulating robot using quad-roller friction mechanism
Author(s): Lee, Subin; Kim, Hyeonwook; Byeon, Jaehyeon; Shim, Seongbo; Lee, Hyun-Joo; Hong, Jaesung
Abstract: A robotic system for manipulating a flexible endoscope in surgery can provide enhanced accuracy and usability compared to manual operation. However, previous studies require large-scale, complex hardware systems to implement the rotational and translational motions of the soft endoscope cable. The conventional control of the endoscope by actuating the endoscope handle also leads to undesired slack between the endoscope tip and the handle, which becomes more problematic with long endoscopes such as a colonoscope. This study proposes a compact quad-roller friction mechanism that enables rotational and translational motions triggered not from the endoscope handle but at the endoscope tip. Controlling two pairs of tilted rollers achieves both types of motion within a small space. The proposed system also introduces an unsynchronized motion strategy between the handle and tip parts to minimize the robot’s motion near the patient by employing the slack positively as a control index. Experiments indicate that the proposed system achieves accurate rotational and translational motions, and the unsynchronized control method reduces the total translational motion by up to 88% compared to the previous method. © 2024 The Author(s). Published by Informa UK Limited, trading as Taylor &amp; Francis Group.</description>
    <dc:date>2024-11-30T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/57286">
    <title>Target-specified reference-based deep learning network for joint image deblurring and resolution enhancement in surgical zoom lens camera calibration</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/57286</link>
    <description>Title: Target-specified reference-based deep learning network for joint image deblurring and resolution enhancement in surgical zoom lens camera calibration
Author(s): Ha, Ho-Gun; Jeung, Deokgi; Ullah, Ihsan; Tokuda, Junichi; Hong, Jaesung; Lee, Hyunki
Abstract: Background and objective: For the augmented reality of surgical navigation, which overlays a 3D model of the surgical target on an image, accurate camera calibration is imperative. However, when the checkerboard images for calibration are captured using a surgical microscope having high magnification, blur owing to the narrow depth of focus and blocking artifacts caused by limited resolution around the fine edges occur. These artifacts strongly affect the localization of corner points of the checkerboard in these images, resulting in inaccurate calibration, which leads to a large displacement in augmented reality. To solve this problem, in this study, we propose a novel target-specific deep learning network that simultaneously reduces blur and enhances the spatial resolution of an image for surgical zoom lens camera calibration. Methods: As a scheme of an end-to-end convolutional deep neural network, the proposed network is specifically intended for the checkerboard image enhancement used in camera calibration. Through the symmetric architecture of the network, which consists of encoding and decoding layers, the distinctive spatial features of the encoding layers are transferred and merged with the output of the decoding layers. Additionally, by integrating a multi-frame framework, including subpixel motion estimation and an ideal reference image, with the symmetric architecture, joint image deblurring and resolution enhancement were efficiently achieved. Results: From experimental comparisons, we verified the capability of the proposed method to improve the subjective and objective performances of surgical microscope calibration. Furthermore, we confirmed that the augmented reality overlap ratio, which quantitatively indicates augmented reality accuracy, from calibration with the enhanced image of the proposed method is higher than that of the previous methods.
Conclusions: These findings suggest that the proposed network provides sharp high-resolution images from blurry low-resolution inputs. Furthermore, we demonstrate superior performance in camera calibration using surgical microscopic images, thus showing its potential applications in the field of practical surgical navigation. © 2024</description>
    <dc:date>2024-11-30T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/57166">
    <title>Clinical efficacy and performance evaluation of a bendable remote robot system for a bone tumour surgery: A pilot animal study</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/57166</link>
    <description>Title: Clinical efficacy and performance evaluation of a bendable remote robot system for a bone tumour surgery: A pilot animal study
Author(s): Kim, Seungmin; Shin, Donghyun; Lee, Changhyeon; Yu, Daehee; Cho, Jongho; Bang, Hyunhee; Lee, Hyunjoo; Kim, Donghyun; Park, Ilhyung; Hong, Jaesung; Joung, Sanghyun
Abstract: Background: Traditional open surgery for bone tumours can result in excessive removal of healthy bone tissue because of the limitations of rigid surgical instruments, increasing infection risk and recovery time. Methods: We propose a remote robot with a 4.5-mm-diameter bendable end-effector, offering four degrees of freedom for accessing the inside of the bone and performing tumour debridement. The preclinical studies evaluated the effectiveness, clinical scenario, and usability across 12 surgeries: six phantom surgeries and six bovine bone surgeries. Evaluation criteria included skin incision size, bone window size, surgical time, removal rate, and conversion to open surgery. Results: Preclinical studies demonstrated that the robotic approach requires a significantly smaller incision size and shorter procedure times than traditional open curettage. Conclusion: This study validated the performance of the proposed system by assessing its preclinical effectiveness and optimising surgical methods using human phantom and bovine bone tumour models.</description>
    <dc:date>2024-07-31T15:00:00Z</dc:date>
  </item>
</rdf:RDF>

