Detail View

DC Field Value Language
dc.contributor.author Kang, Myeongkyun -
dc.contributor.author Chikontwe, Philip -
dc.contributor.author Kim, Soopil -
dc.contributor.author Jin, Kyong Hwan -
dc.contributor.author Adeli, Ehsan -
dc.contributor.author Pohl, Kilian M. -
dc.contributor.author Park, Sang Hyun -
dc.date.accessioned 2025-07-15T15:10:09Z -
dc.date.available 2025-07-15T15:10:09Z -
dc.date.created 2025-07-06 -
dc.date.issued 2025-10 -
dc.identifier.issn 1361-8415 -
dc.identifier.uri https://scholar.dgist.ac.kr/handle/20.500.11750/58642 -
dc.description.abstract One-shot federated learning (FL) has emerged as a promising solution in scenarios where multiple communication rounds are not practical. Though previous methods using knowledge distillation (KD) with synthetic images have shown promising results in transferring clients’ knowledge to the global model in one-shot FL, overfitting and extensive computational costs still persist. To tackle these issues, we propose a novel one-shot FL framework that generates pseudo intermediate samples using mixup, which incorporates synthesized images with diverse types of structure noise. This approach (i) enhances the diversity of training samples, preventing overfitting and providing informative visual clues for effective training, and (ii) allows for the reuse of synthesized images, reducing computational resources and improving overall training efficiency. To mitigate the domain disparity introduced by noise, we design noise-adapted client models by updating batch normalization statistics on noise to enhance KD. With these in place, the training process involves iteratively updating the global model through KD with both the original and noise-adapted client models using pseudo-generated images. Extensive evaluations on five small-sized and three regular-sized medical image classification datasets demonstrate the superiority of our approach over previous methods. (A minimal code sketch of this pipeline appears below the record.) -
dc.language English -
dc.publisher Elsevier -
dc.title Efficient One-shot Federated Learning on Medical Data using Knowledge Distillation with Image Synthesis and Client Model Adaptation -
dc.type Article -
dc.identifier.doi 10.1016/j.media.2025.103714 -
dc.identifier.scopusid 2-s2.0-105010700409 -
dc.identifier.bibliographicCitation Kang, Myeongkyun. (2025-10). Efficient One-shot Federated Learning on Medical Data using Knowledge Distillation with Image Synthesis and Client Model Adaptation. Medical Image Analysis, 105. doi: 10.1016/j.media.2025.103714 -
dc.description.isOpenAccess FALSE -
dc.subject.keywordAuthor Client Model Adaptation -
dc.subject.keywordAuthor Image Synthesis -
dc.subject.keywordAuthor Knowledge Distillation -
dc.subject.keywordAuthor Noise -
dc.subject.keywordAuthor One-Shot Federated Learning -
dc.citation.title Medical Image Analysis -
dc.citation.volume 105 -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.type.docType Article -
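
The abstract describes three mechanisms: mixing synthesized images with structure noise to create pseudo intermediate samples, re-estimating batch normalization statistics on noise to obtain noise-adapted client models, and distilling knowledge from both the original and adapted client models into the global model. The following is a minimal PyTorch sketch of that pipeline under stated assumptions, not the authors' implementation: the function names, the Beta(alpha, alpha) mixup coefficient, the ResNet-18 backbone, and the distillation temperature are all illustrative.

import copy
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

def mixup_with_noise(synth_batch, noise_batch, alpha=0.4):
    # Hypothetical helper: pseudo intermediate samples via mixup of
    # synthesized images with structure noise; lam ~ Beta(alpha, alpha).
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    return lam * synth_batch + (1.0 - lam) * noise_batch

def adapt_bn_statistics(client_model, noise_batches):
    # Hypothetical helper: copy a client model and re-estimate its
    # BatchNorm running statistics on noise inputs (weights stay frozen).
    adapted = copy.deepcopy(client_model)
    adapted.train()            # BN layers update running stats in train mode
    with torch.no_grad():      # no gradient steps; only BN buffers change
        for noise in noise_batches:
            adapted(noise)
    adapted.eval()
    return adapted

def distill_step(global_model, teachers, pseudo_batch, optimizer, T=2.0):
    # One KD update: match the global model's softened predictions to the
    # averaged soft targets of the original and noise-adapted client models.
    global_model.train()
    for t in teachers:
        t.eval()               # teachers provide fixed soft targets
    with torch.no_grad():
        soft_targets = torch.stack(
            [F.softmax(t(pseudo_batch) / T, dim=1) for t in teachers]
        ).mean(dim=0)
    log_probs = F.log_softmax(global_model(pseudo_batch) / T, dim=1)
    loss = F.kl_div(log_probs, soft_targets, reduction="batchmean") * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random tensors standing in for synthesized images and noise.
global_model = resnet18(num_classes=10)
client = resnet18(num_classes=10)
adapted = adapt_bn_statistics(client, [torch.randn(8, 3, 224, 224) for _ in range(5)])
pseudo = mixup_with_noise(torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224))
opt = torch.optim.SGD(global_model.parameters(), lr=0.01)
distill_step(global_model, [client, adapted], pseudo, opt)

Because the same pool of synthesized images can be re-mixed with fresh noise at each iteration, the expensive synthesis step runs only once, which is the efficiency gain the abstract claims.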

File Downloads

  • There are no files associated with this item.

Related Researcher

Kim, Soopil (김수필)

Division of Intelligent Robotics
