<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>Repository Collection: null</title>
  <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/13647" />
  <subtitle />
  <id>https://scholar.dgist.ac.kr/handle/20.500.11750/13647</id>
  <updated>2026-04-04T11:25:12Z</updated>
  <dc:date>2026-04-04T11:25:12Z</dc:date>
  <entry>
    <title>ABCD: Arbitrary Bitwise Coefficient for De-quantization</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/47924" />
    <author>
      <name>Han, Woo Kyoung</name>
    </author>
    <author>
      <name>Lee, Byeonghun</name>
    </author>
    <author>
      <name>Park, Sang Hyun</name>
    </author>
    <author>
      <name>Jin, Kyong Hwan</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/47924</id>
    <updated>2026-01-14T07:10:17Z</updated>
    <published>2023-06-19T15:00:00Z</published>
    <summary type="text">Title: ABCD: Arbitrary Bitwise Coefficient for De-quantization
Author(s): Han, Woo Kyoung; Lee, Byeonghun; Park, Sang Hyun; Jin, Kyong Hwan
Abstract: Modern displays and content support images and video at more than 8 bits per channel. However, bit-starving situations such as compression codecs produce low bit-depth (LBD) images (&lt;8 bits), causing banding and blurring artifacts. Previous bit-depth expansion (BDE) methods still produce unsatisfactory high bit-depth (HBD) images. To this end, we propose an implicit neural function with a bit query to recover de-quantized images from arbitrarily quantized inputs. We develop a phasor estimator to exploit the information of the nearest pixels. Our method shows superior performance against prior BDE methods on natural and animation images. We also demonstrate our model on the YouTube UGC dataset for de-banding. Our source code is available at https://github.com/WooKyoungHan/ABCD</summary>
    <dc:date>2023-06-19T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>One-Shot Federated Learning on Medical Data Using Knowledge Distillation with Image Synthesis and Client Model Adaptation</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/47780" />
    <author>
      <name>Kang, Myeongkyun</name>
    </author>
    <author>
      <name>Chikontwe, Philip</name>
    </author>
    <author>
      <name>Kim, Soopil</name>
    </author>
    <author>
      <name>Jin, Kyong Hwan</name>
    </author>
    <author>
      <name>Adeli, Ehsan</name>
    </author>
    <author>
      <name>Pohl, Kilian M.</name>
    </author>
    <author>
      <name>Park, Sang Hyun</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/47780</id>
    <updated>2025-07-25T02:47:45Z</updated>
    <published>2023-10-09T15:00:00Z</published>
    <summary type="text">Title: One-Shot Federated Learning on Medical Data Using Knowledge Distillation with Image Synthesis and Client Model Adaptation
Author(s): Kang, Myeongkyun; Chikontwe, Philip; Kim, Soopil; Jin, Kyong Hwan; Adeli, Ehsan; Pohl, Kilian M.; Park, Sang Hyun
Abstract: One-shot federated learning (FL) has emerged as a promising solution in scenarios where multiple communication rounds are not practical. Notably, as feature distributions in medical data are less discriminative than those of natural images, robust global model training with FL is non-trivial and can lead to overfitting. To address this issue, we propose a novel one-shot FL framework leveraging Image Synthesis and Client model Adaptation (FedISCA) with knowledge distillation (KD). To prevent overfitting, we generate diverse synthetic images ranging from random noise to realistic images. This approach (i) alleviates data privacy concerns and (ii) facilitates robust global model training using KD with decentralized client models. To mitigate domain disparity in the early stages of synthesis, we design noise-adapted client models whose batch normalization statistics on random noise (synthetic images) are updated to enhance KD. Lastly, the global model is trained with both the original and noise-adapted client models via KD and synthetic images. This process is repeated until the global model converges. Extensive evaluation of this design on five small- and three large-scale medical image classification datasets reveals superior accuracy over prior methods. Code is available at https://github.com/myeongkyunkang/FedISCA. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023.</summary>
    <dc:date>2023-10-09T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Local Texture Estimator for Implicit Representation Function</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/46830" />
    <author>
      <name>Lee, Jaewon</name>
    </author>
    <author>
      <name>Jin, Kyong Hwan</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/46830</id>
    <updated>2025-07-25T02:47:49Z</updated>
    <published>2022-06-20T15:00:00Z</published>
    <summary type="text">Title: Local Texture Estimator for Implicit Representation Function
Author(s): Lee, Jaewon; Jin, Kyong Hwan
Abstract: Recent works with an implicit neural function shed light on representing images at arbitrary resolution. However, a standalone multi-layer perceptron shows limited performance in learning high-frequency components. In this paper, we propose a Local Texture Estimator (LTE), a dominant-frequency estimator for natural images, enabling an implicit function to capture fine details while reconstructing images in a continuous manner. When jointly trained with a deep super-resolution (SR) architecture, LTE is capable of characterizing image textures in 2D Fourier space. We show that an LTE-based neural function achieves favorable performance against existing deep SR methods at arbitrary scale factors. Furthermore, we demonstrate that our implementation achieves the shortest running time compared to previous works. © 2022 IEEE.</summary>
    <dc:date>2022-06-20T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Learning Local Implicit Fourier Representation for Image Warping</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/46790" />
    <author>
      <name>Lee, Jaewon</name>
    </author>
    <author>
      <name>Choi, Kwang Pyo</name>
    </author>
    <author>
      <name>Jin, Kyong Hwan</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/46790</id>
    <updated>2025-07-25T04:07:17Z</updated>
    <published>2022-10-25T15:00:00Z</published>
    <summary type="text">Title: Learning Local Implicit Fourier Representation for Image Warping
Author(s): Lee, Jaewon; Choi, Kwang Pyo; Jin, Kyong Hwan
Abstract: Image warping aims to reshape images defined on rectangular grids into arbitrary shapes. Recently, implicit neural functions have shown remarkable performance in representing images in a continuous manner. However, a standalone multi-layer perceptron suffers from learning high-frequency Fourier coefficients. In this paper, we propose a local texture estimator for image warping (LTEW) followed by an implicit neural representation to deform images into continuous shapes. Local textures estimated from a deep super-resolution (SR) backbone are multiplied by locally-varying Jacobian matrices of a coordinate transformation to predict Fourier responses of a warped image. Our LTEW-based neural function outperforms existing warping methods for asymmetric-scale SR and homography transforms. Furthermore, our algorithm generalizes well to arbitrary coordinate transformations, such as homography transforms with large magnification factors and equirectangular projection (ERP) perspective transforms, which are not provided in training. Our source code is available at https://github.com/jaewon-lee-b/ltew. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.</summary>
    <dc:date>2022-10-25T15:00:00Z</dc:date>
  </entry>
</feed>

