<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/12135">
    <title>Repository Community: null</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/12135</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/60002" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/59362" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/59250" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/59074" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-06T07:08:31Z</dc:date>
  </channel>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/60002">
    <title>Scale-Invariant and View-Relational Representation Learning for Full Surround Monocular Depth</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/60002</link>
    <description>Title: Scale-Invariant and View-Relational Representation Learning for Full Surround Monocular Depth
Author(s): Hwang, Kyumin; Choi, Wonhyeok; Han, Kiljoon; Choi, Wonjoon; Choi, Minwoo; Na, Yongcheon; Park, Minwoo; Im, Sunghoon
Abstract: Recent foundation models demonstrate strong generalization capabilities in monocular depth estimation. However, directly applying these models to Full Surround Monocular Depth Estimation (FSMDE) presents two major challenges: (1) high computational cost, which limits realtime performance, and (2) difficulty in estimating metricscale depth, as these models are typically trained to predict only relative depth. To address these limitations, we propose a novel knowledge distillation strategy that transfers robust depth knowledge from a foundation model to a lightweight FSMDE network. Our approach leverages a hybrid regression framework combining the knowledge distillation scheme–traditionally used in classification–with a depth binning module to enhance scale consistency. Specifically, we introduce a crossinteraction knowledge distillation scheme that distills the scaleinvariant depth bin probabilities of a foundation model into the student network while guiding it to infer metric-scale depth bin centers from ground-truth depth. Furthermore, we propose view-relational knowledge distillation, which encodes structural relationships among adjacent camera views and transfers them to enhance cross-view depth consistency. Experiments on DDAD and nuScenes demonstrate the effectiveness of our method compared to conventional supervised methods and existing knowledge distillation approaches. Moreover, our method achieves a favorable trade-off between performance and efficiency, meeting real-time requirements. © 2016 IEEE.</description>
    <dc:date>2025-12-31T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/59362">
    <title>Towards Lossless Implicit Neural Representation via Bit Plane Decomposition</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/59362</link>
    <description>Title: Towards Lossless Implicit Neural Representation via Bit Plane Decomposition
Author(s): Han, Woo Kyoung; Lee, Byeonghun; Cho, Hyunmin; Im, Sunghoon; Jin, Kyong Hwan
Abstract: We quantify the upper bound on the size of the implicit neural representation (INR) model from a digital perspective. The upper bound of the model size increases exponentially as the required bit-precision increases. To this end, we present a bit-plane decomposition method that makes INR predict bit-planes, producing the same effect as reducing the upper bound of the model size. We validate our hypothesis that reducing the upper bound leads to faster convergence with constant model size. Our method achieves lossless representation in 2D image and audio fitting, even for high bit-depth signals, such as 16-bit, which was previously unachievable. We identify the presence of bit bias, whereby INR prioritizes the most significant bit (MSB). We expand the application of the INR task to bit depth expansion, lossless image compression, and extreme network quantization. Our source code is available at https://github.com/WooKyoungHan/LosslessINR.</description>
    <dc:date>2025-06-12T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/59250">
    <title>Program for Estimating the 3D Field of View of a Rotating Camera (CCTV)</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/59250</link>
    <description>Title: Program for Estimating the 3D Field of View of a Rotating Camera (CCTV)
Author(s): 최원준; 이진형; 임성훈</description>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/59074">
    <title>Method and Computer Program for Domain Adaptation</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/59074</link>
    <description>Title: Method and Computer Program for Domain Adaptation
Author(s): 김창재; 최원혁; 임성훈; 이승훈; 최민우
Abstract: According to one embodiment of the present disclosure, a computer program stored on a computer-readable storage medium is disclosed. When executed on one or more processors, the computer program performs the following method for domain transformation, the method comprising: computing a first image in a task network and defining first per-class clusters in feature space for the per-pixel classes of the first image; computing a second image in the task network and defining second per-class clusters in feature space for the per-pixel classes of the second image; performing a first comparison between the computation result for each pixel of the first image and the second per-class clusters; performing a second comparison between the computation result for each pixel of the second image and the first per-class clusters; generating first selected labels by deactivating at least some of the labels of the first image based on the first comparison result; generating second selected labels by deactivating at least some of the labels of the second image based on the second comparison result; and training the task network based on the first selected labels and the second selected labels.</description>
  </item>
</rdf:RDF>

