<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>Repository Collection</title>
  <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/13646" />
  <subtitle />
  <id>https://scholar.dgist.ac.kr/handle/20.500.11750/13646</id>
  <updated>2026-04-04T12:19:11Z</updated>
  <dc:date>2026-04-04T12:19:11Z</dc:date>
  <entry>
    <title>Efficient One-shot Federated Learning on Medical Data using Knowledge Distillation with Image Synthesis and Client Model Adaptation</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/58642" />
    <author>
      <name>Kang, Myeongkyun</name>
    </author>
    <author>
      <name>Chikontwe, Philip</name>
    </author>
    <author>
      <name>Kim, Soopil</name>
    </author>
    <author>
      <name>Jin, Kyong Hwan</name>
    </author>
    <author>
      <name>Adeli, Ehsan</name>
    </author>
    <author>
      <name>Pohl, Kilian M.</name>
    </author>
    <author>
      <name>Park, Sang Hyun</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/58642</id>
    <updated>2025-07-25T03:32:01Z</updated>
    <published>2025-09-30T15:00:00Z</published>
    <summary type="text">Title: Efficient One-shot Federated Learning on Medical Data using Knowledge Distillation with Image Synthesis and Client Model Adaptation
Author(s): Kang, Myeongkyun; Chikontwe, Philip; Kim, Soopil; Jin, Kyong Hwan; Adeli, Ehsan; Pohl, Kilian M.; Park, Sang Hyun
Abstract: One-shot federated learning (FL) has emerged as a promising solution in scenarios where multiple communication rounds are not practical. Though previous methods using knowledge distillation (KD) with synthetic images have shown promising results in transferring clients’ knowledge to the global model on one-shot FL, overfitting and extensive computations still persist. To tackle these issues, we propose a novel one-shot FL framework that generates pseudo intermediate samples using mixup, which incorporates synthesized images with diverse types of structure noise. This approach (i) enhances the diversity of training samples, preventing overfitting and providing informative visual clues for effective training and (ii) allows for the reuse of synthesized images, reducing computational resources and improving overall training efficiency. To mitigate domain disparity introduced by noise, we design noise-adapted client models by updating batch normalization statistics on noise to enhance KD. With these in place, the training process involves iteratively updating the global model through KD with both the original and noise-adapted client models using pseudo-generated images. Extensive evaluations on five small-sized and three regular-sized medical image classification datasets demonstrate the superiority of our approach over previous methods.</summary>
    <dc:date>2025-09-30T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Deep 3D reconstruction of synchrotron X-ray computed tomography for intact lungs</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/46217" />
    <author>
      <name>Shin, Seungjoo</name>
    </author>
    <author>
      <name>Kim, Min Woo</name>
    </author>
    <author>
      <name>Jin, Kyong Hwan</name>
    </author>
    <author>
      <name>Yi, Kwang Moo</name>
    </author>
    <author>
      <name>Kohmura, Yoshiki</name>
    </author>
    <author>
      <name>Ishikawa, Tetsuya</name>
    </author>
    <author>
      <name>Je, Jung Ho</name>
    </author>
    <author>
      <name>Park, Jaesik</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/46217</id>
    <updated>2025-07-25T03:31:35Z</updated>
    <published>2022-12-31T15:00:00Z</published>
    <summary type="text">Title: Deep 3D reconstruction of synchrotron X-ray computed tomography for intact lungs
Author(s): Shin, Seungjoo; Kim, Min Woo; Jin, Kyong Hwan; Yi, Kwang Moo; Kohmura, Yoshiki; Ishikawa, Tetsuya; Je, Jung Ho; Park, Jaesik
Abstract: Synchrotron X-rays can be used to obtain highly detailed images of parts of the lung. However, micro-motion artifacts induced by factors such as cardiac motion impede quantitative visualization of the alveoli in the lungs. This paper proposes a method that applies a neural network to synchrotron X-ray Computed Tomography (CT) data to reconstruct the high-quality 3D structure of alveoli in intact mouse lungs at expiration, without needing ground-truth data. Our approach reconstructs the spatial sequence of CT images by using a deep-image prior with interpolated input latent variables, and in this way significantly enhances the images of alveolar structure compared with the prior art. The approach successfully visualizes 3D alveolar units of intact mouse lungs at expiration and enables us to measure the diameter of the alveoli. We believe that our approach helps to accurately visualize other living organs hampered by micro-motion. © 2023, The Author(s).
    <dc:date>2022-12-31T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Time-Dependent Deep Image Prior for Dynamic MRI</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/15955" />
    <author>
      <name>Yoo, Jaejun</name>
    </author>
    <author>
      <name>Jin, Kyong Hwan</name>
    </author>
    <author>
      <name>Gupta, Harshit</name>
    </author>
    <author>
      <name>Yerly, Jérôme</name>
    </author>
    <author>
      <name>Stuber, Matthias</name>
    </author>
    <author>
      <name>Unser, Michael</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/15955</id>
    <updated>2025-07-25T03:37:52Z</updated>
    <published>2021-11-30T15:00:00Z</published>
    <summary type="text">Title: Time-Dependent Deep Image Prior for Dynamic MRI
Author(s): Yoo, Jaejun; Jin, Kyong Hwan; Gupta, Harshit; Yerly, Jérôme; Stuber, Matthias; Unser, Michael
Abstract: We propose a novel unsupervised deep-learning-based algorithm for dynamic magnetic resonance imaging (MRI) reconstruction. Dynamic MRI requires rapid data acquisition for the study of moving organs such as the heart. We introduce a generalized version of the deep-image-prior approach, which optimizes the weights of a reconstruction network to fit a sequence of sparsely acquired dynamic MRI measurements. Our method needs neither prior training nor additional data. In particular, for cardiac images, it does not require the marking of heartbeats or the reordering of spokes. The key ingredients of our method are threefold: 1) a fixed low-dimensional manifold that encodes the temporal variations of images; 2) a network that maps the manifold into a more expressive latent space; and 3) a convolutional neural network that generates a dynamic series of MRI images from the latent variables and that favors their consistency with the measurements in k-space. Our method outperforms the state-of-the-art methods quantitatively and qualitatively in both retrospective and real fetal cardiac datasets. To the best of our knowledge, this is the first unsupervised deep-learning-based method that can reconstruct the continuous variation of dynamic MRI sequences with high spatial resolution. © 1982-2012 IEEE.</summary>
    <dc:date>2021-11-30T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Deep Block Transform for Autoencoders</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/15400" />
    <author>
      <name>Jin, Kyong Hwan</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/15400</id>
    <updated>2025-07-25T02:36:05Z</updated>
    <published>2021-04-30T15:00:00Z</published>
    <summary type="text">Title: Deep Block Transform for Autoencoders
Author(s): Jin, Kyong Hwan
Abstract: We discover that a trainable convolution layer with a stride over 1 and kernel ≥ stride is identical to a trainable block transform. A block transform is performed when we use a convolution layer with a stride ≥ 2 and a kernel ≥ the stride. For instance, if we use the same widths, such as a 2 × 2 convolution kernel and stride-2, there are no overlaps between sliding windows, so this layer operates a block transform on the partitioned 2 × 2 blocks. A block transform reduces the computational complexity due to a stride ≥ 2. To keep the original size, we apply a transposed convolution (stride = kernel ≥ 2), an adjoint operator of a forward block transform. Based on this relationship, we propose a trainable multi-scale block transform for autoencoders. The proposed method has an encoder consisting of two sequential convolutions with stride-2, a 2 × 2 kernel, and a decoder consisting of the encoder&apos;s two adjoint operators (transposed convolution). Clipping is used for nonlinear activations. Inspired by the zero-frequency element in the dictionary learning method, the proposed method uses DC values for residual learning. The proposed method shows high-resolution representations, whereas the stride-1 convolutional autoencoder with 3 × 3 kernels generates blurry images. © 1994-2012 IEEE.
    <dc:date>2021-04-30T15:00:00Z</dc:date>
  </entry>
</feed>

