
Content preserving image translation with texture co-occurrence and spatial self-similarity for texture debiasing and domain adaptation
DC Field Value Language
dc.contributor.author Kang, Myeongkyun -
dc.contributor.author Won, Dongkyu -
dc.contributor.author Luna, Acevedo Miguel Andres -
dc.contributor.author Chikontwe, Philip -
dc.contributor.author Hong, Kyung Soo -
dc.contributor.author Ahn, June Hong -
dc.contributor.author Park, Sang Hyun -
dc.date.accessioned 2023-08-28T11:10:18Z -
dc.date.available 2023-08-28T11:10:18Z -
dc.date.created 2023-08-28 -
dc.date.issued 2023-09 -
dc.identifier.issn 0893-6080 -
dc.identifier.uri http://hdl.handle.net/20.500.11750/46325 -
dc.description.abstract Models trained on datasets with texture bias usually perform poorly on out-of-distribution samples since biased representations are embedded into the model. Recently, various image translation and debiasing methods have attempted to disentangle texture-biased representations for downstream tasks, but accurately discarding biased features without altering other relevant information remains challenging. In this paper, we propose a novel framework that leverages image translation to generate additional training images using the content of a source image and the texture of a target image with a different bias property, explicitly mitigating texture bias when training a model on a target task. Our model ensures texture similarity between the target and generated images via a texture co-occurrence loss while preserving content details from source images with a spatial self-similarity loss. Both the generated and original training images are combined to train improved classification or segmentation models robust to inconsistent texture bias. Evaluation on five classification and two segmentation datasets with known texture biases demonstrates the utility of our method, with significant improvements over recent state-of-the-art methods in all cases. © 2023 Elsevier Ltd -
dc.language English -
dc.publisher Elsevier -
dc.title Content preserving image translation with texture co-occurrence and spatial self-similarity for texture debiasing and domain adaptation -
dc.type Article -
dc.identifier.doi 10.1016/j.neunet.2023.07.049 -
dc.identifier.scopusid 2-s2.0-85168422323 -
dc.identifier.bibliographicCitation Kang, Myeongkyun. (2023-09). Content preserving image translation with texture co-occurrence and spatial self-similarity for texture debiasing and domain adaptation. Neural Networks, 166, 722–737. doi: 10.1016/j.neunet.2023.07.049 -
dc.description.isOpenAccess FALSE -
dc.subject.keywordAuthor Debiasing -
dc.subject.keywordAuthor Self-similarity -
dc.subject.keywordAuthor Texture co-occurrence -
dc.subject.keywordAuthor Unsupervised domain adaptation -
dc.subject.keywordAuthor Unpaired image translation -
dc.citation.endPage 737 -
dc.citation.startPage 722 -
dc.citation.title Neural Networks -
dc.citation.volume 166 -

File Downloads

  • There are no files associated with this item.

Related Researcher

Park, Sang Hyun (박상현)
Department of Robotics and Mechatronics Engineering
