Communities & Collections
Researchers & Labs
Titles
DGIST
LIBRARY
DGIST R&D
Detail View
Department of Robotics and Mechatronics Engineering
Medical Image & Signal Processing Lab
1. Journal Articles
Content preserving image translation with texture co-occurrence and spatial self-similarity for texture debiasing and domain adaptation
Kang, Myeongkyun; Won, Dongkyu; Luna, Acevedo Miguel Andres; Chikontwe, Philip; Hong, Kyung Soo; Ahn, June Hong; Park, Sang Hyun
Title
Content preserving image translation with texture co-occurrence and spatial self-similarity for texture debiasing and domain adaptation
Issued Date
2023-09
Citation
Kang, Myeongkyun. (2023-09). Content preserving image translation with texture co-occurrence and spatial self-similarity for texture debiasing and domain adaptation. Neural Networks, 166, 722–737. doi: 10.1016/j.neunet.2023.07.049
Type
Article
Author Keywords
Debiasing; Self-similarity; Texture co-occurrence; Unsupervised domain adaptation; Unpaired image translation
ISSN
0893-6080
Abstract
Models trained on datasets with texture bias usually perform poorly on out-of-distribution samples, since biased representations are embedded into the model. Recently, various image translation and debiasing methods have attempted to disentangle texture-biased representations for downstream tasks, but accurately discarding biased features without altering other relevant information remains challenging. In this paper, we propose a novel framework that leverages image translation to generate additional training images using the content of a source image and the texture of a target image with a different bias property, explicitly mitigating texture bias when training a model on a target task. Our model ensures texture similarity between the target and generated images via a texture co-occurrence loss, while preserving content details from source images with a spatial self-similarity loss. Both the generated and original training images are combined to train improved classification or segmentation models that are robust to inconsistent texture bias. Evaluation on five classification and two segmentation datasets with known texture biases demonstrates the utility of our method, with significant improvements over recent state-of-the-art methods in all cases. © 2023 Elsevier Ltd
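The abstract pairs two losses: a texture co-occurrence loss that pulls the generated image's texture statistics toward the target image, and a spatial self-similarity loss that keeps the source image's content layout intact. The PyTorch sketch below is a minimal, hypothetical rendering of these two loss families, using Gram matrices as a common stand-in for co-occurrence-style texture statistics and cosine self-similarity maps for spatial self-similarity; it is not the authors' released implementation, and the function names are illustrative only.

    # Illustrative sketch only; names and loss formulations are assumptions,
    # not the paper's exact method.
    import torch
    import torch.nn.functional as F

    def gram_texture_loss(feat_gen, feat_target):
        # Match channel co-activation (Gram) statistics between generated
        # and target-texture feature maps, approximating a texture loss.
        def gram(f):
            b, c, h, w = f.shape
            f = f.reshape(b, c, h * w)
            return f @ f.transpose(1, 2) / (c * h * w)
        return F.mse_loss(gram(feat_gen), gram(feat_target))

    def self_similarity_loss(feat_gen, feat_src):
        # Compare pairwise cosine-similarity maps across spatial positions,
        # so the source image's content layout is preserved regardless of
        # the texture imposed on the generated image.
        def sim_map(f):
            b, c, h, w = f.shape
            f = F.normalize(f.reshape(b, c, h * w), dim=1)
            return f.transpose(1, 2) @ f  # (b, hw, hw) similarities
        return F.mse_loss(sim_map(feat_gen), sim_map(feat_src))

    if __name__ == "__main__":
        # Toy check with random feature maps (batch=2, channels=8, 16x16).
        f_gen = torch.randn(2, 8, 16, 16)
        f_src = torch.randn(2, 8, 16, 16)
        f_tgt = torch.randn(2, 8, 16, 16)
        loss = gram_texture_loss(f_gen, f_tgt) + self_similarity_loss(f_gen, f_src)
        print(loss.item())

In the paper's setting, such losses would be combined with the usual translation objectives so that generated images carry the target's texture bias while retaining the source's content, and the augmented set then trains the downstream classifier or segmenter.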
URI
http://hdl.handle.net/20.500.11750/46325
DOI
10.1016/j.neunet.2023.07.049
Publisher
Elsevier
File Downloads
There are no files associated with this item.
Related Researcher
Park, Sang Hyun (박상현)
Department of Robotics and Mechatronics Engineering