<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/12137">
    <title>Repository Collection</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/12137</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/59362" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/58618" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/58617" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/58406" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-04T14:14:42Z</dc:date>
  </channel>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/59362">
    <title>Towards Lossless Implicit Neural Representation via Bit Plane Decomposition</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/59362</link>
    <description>Title: Towards Lossless Implicit Neural Representation via Bit Plane Decomposition
Author(s): Han, Woo Kyoung; Lee, Byeonghun; Cho, Hyunmin; Im, Sunghoon; Jin, Kyong Hwan
Abstract: We quantify the upper bound on the size of the implicit neural representation (INR) model from a digital perspective. The upper bound of the model size increases exponentially as the required bit-precision increases. To this end, we present a bit-plane decomposition method that makes the INR predict bit-planes, producing the same effect as reducing the upper bound of the model size. We validate our hypothesis that reducing the upper bound leads to faster convergence at constant model size. Our method achieves lossless representation in 2D image and audio fitting, even for high bit-depth signals such as 16-bit, which was previously unachievable. We are the first to identify the presence of bit bias, in which the INR prioritizes the most significant bit (MSB). We expand the application of INR to bit-depth expansion, lossless image compression, and extreme network quantization. Our source code is available at https://github.com/WooKyoungHan/LosslessINR.</description>
    <dc:date>2025-06-12T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/58618">
    <title>Self-supervised Monocular Depth Estimation Robust to Reflective Surface Leveraged by Triplet Mining</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/58618</link>
    <description>Title: Self-supervised Monocular Depth Estimation Robust to Reflective Surface Leveraged by Triplet Mining
Author(s): Choi, Wonhyeok; Hwang, Kyumin; Peng, Wei; Choi, Minwoo; Im, Sunghoon
Abstract: Self-supervised monocular depth estimation (SSMDE) aims to predict the dense depth map of a monocular image by learning depth from RGB image sequences, eliminating the need for ground-truth depth labels. Although this approach simplifies data acquisition compared to supervised methods, it struggles with reflective surfaces, as they violate the assumptions of Lambertian reflectance, leading to inaccurate training on such surfaces. To tackle this problem, we propose a novel training strategy for SSMDE that leverages triplet mining to pinpoint reflective regions at the pixel level, guided by the camera geometry between different viewpoints. The proposed reflection-aware triplet mining loss specifically penalizes inappropriate photometric-error minimization in the localized reflective regions while preserving depth accuracy in non-reflective areas. We also incorporate a reflection-aware knowledge distillation method that enables a student model to selectively learn pixel-level knowledge from reflective and non-reflective regions, resulting in robust depth estimation across both. Evaluation results on multiple datasets demonstrate that our method effectively enhances depth quality on reflective surfaces and outperforms state-of-the-art SSMDE baselines.</description>
    <dc:date>2025-04-25T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/58617">
    <title>Style-Editor: Text-driven Object-centric Style Editing</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/58617</link>
    <description>Title: Style-Editor: Text-driven Object-centric Style Editing
Author(s): Park, Jihun; Gim, Jongmin; Lee, Kyoungmin; Lee, Seunghun; Im, Sunghoon
Abstract: We present a text-driven, object-centric style editing model named Style-Editor, a novel method that guides style editing at the object level using textual inputs. The core of Style-Editor is our Patch-wise Co-Directional (PCD) loss, meticulously designed for precise object-centric edits that are closely aligned with the input text. This loss combines a patch directional loss for text-guided style direction with a patch distribution consistency loss for an even CLIP-embedding distribution across object regions, ensuring seamless and harmonious style editing over those regions. Key to our method are the Text-Matched Patch Selection (TMPS) and Pre-fixed Region Selection (PRS) modules, which identify object locations via text and eliminate the need for segmentation masks. Lastly, we introduce an Adaptive Background Preservation (ABP) loss, applied to dynamically identified background areas, to maintain the original style and structural essence of the image’s background. Extensive experiments underline the effectiveness of our approach in producing visually coherent and textually aligned style edits.</description>
    <dc:date>2025-06-13T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/58406">
    <title>Intrinsic Image Decomposition for Robust Self-supervised Monocular Depth Estimation on Reflective Surfaces</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/58406</link>
    <description>Title: Intrinsic Image Decomposition for Robust Self-supervised Monocular Depth Estimation on Reflective Surfaces
Author(s): Choi, Wonhyeok; Hwang, Kyumin; Choi, Minwoo; Han, Kiljoon; Choi, Wonjoon; Shin, Mingyu; Im, Sunghoon
Abstract: Self-supervised monocular depth estimation (SSMDE) has gained attention in the field of deep learning as it estimates depth without requiring ground truth depth maps. This approach typically uses a photometric consistency loss between a synthesized image, generated from the estimated depth, and the original image, thereby reducing the need for extensive dataset acquisition. However, the conventional photometric consistency loss relies on the Lambertian assumption, which often leads to significant errors when dealing with reflective surfaces that deviate from this model. To address this limitation, we propose a novel framework that incorporates intrinsic image decomposition into SSMDE. Our method synergistically trains for both monocular depth estimation and intrinsic image decomposition. The accurate depth estimation facilitates multi-image consistency for intrinsic image decomposition by aligning different view coordinate systems, while the decomposition process identifies reflective areas and excludes corrupted gradients from the depth training process. Furthermore, our framework introduces a pseudo-depth generation and knowledge distillation technique to further enhance the performance of the student model across both reflective and non-reflective surfaces. Comprehensive evaluations on multiple datasets show that our approach significantly outperforms existing SSMDE baselines in depth prediction, especially on reflective surfaces.</description>
    <dc:date>2025-02-27T15:00:00Z</dc:date>
  </item>
</rdf:RDF>

