Communities & Collections
Researchers & Labs
Titles
DGIST
LIBRARY
DGIST R&D
Detail View
Division of Intelligent Robot
Camera Culture Group
1. Journal Articles
Video domain adaptation for semantic segmentation using perceptual consistency matching
Ullah, Ihsan; An, Sion; Kang, Myeongkyun; Chikontwe, Philip; Lee, HyunKi; Choi, Jinwoo; Park, Sang Hyun
Collections
Division of Intelligent Robot > 1. Journal Articles
Department of Robotics and Mechatronics Engineering > Medical Image & Signal Processing Lab > 1. Journal Articles
Division of Intelligent Robot > Camera Culture Group > 1. Journal Articles
Title
Video domain adaptation for semantic segmentation using perceptual consistency matching
Issued Date
2024-11
Citation
Ullah, Ihsan. (2024-11). Video domain adaptation for semantic segmentation using perceptual consistency matching. Neural Networks, 179. doi: 10.1016/j.neunet.2024.106505
Type
Article
Author Keywords
Unsupervised domain adaptation; Video domain adaptation; Semantic segmentation; Consistency matching
ISSN
0893-6080
Abstract
Unsupervised domain adaptation (UDA) aims to transfer knowledge from previous, related labeled datasets (sources) to a new unlabeled dataset (target). Despite impressive performance, existing approaches have largely focused on image-based UDA, while video-based UDA remains relatively understudied due to the difficulty of efficiently adapting diverse video modalities and modeling temporal associations. To address this, existing studies use optical flow to capture motion cues between consecutive in-domain frames, but this incurs heavy compute requirements, and modeling flow patterns across diverse domains is equally challenging. In this work, we propose an adversarial domain adaptation approach for video semantic segmentation that aligns temporally associated pixels in successive source and target domain frames without relying on optical flow. Specifically, we introduce a Perceptual Consistency Matching (PCM) strategy that leverages perceptual similarity to identify pixels with high correlation across consecutive frames and infers that such pixels should correspond to the same class. Prediction accuracy for video UDA can therefore be enhanced by enforcing consistency not only between in-domain frames but also across domains using PCM objectives during model training. Extensive experiments on public datasets show the benefit of our approach over existing state-of-the-art UDA methods. Our approach not only addresses a crucial task in video domain adaptation but also offers notable improvements in performance with faster inference times. © 2024 Elsevier Ltd
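The abstract's PCM idea can be illustrated with a small sketch. The following is a minimal, hypothetical PyTorch example, not the authors' released code: it compares co-located pixels of two consecutive frames in feature space as a stand-in for the paper's perceptual matching, and penalizes disagreement between their class predictions only where the perceptual similarity is high. The function name perceptual_consistency_loss and the threshold tau are illustrative assumptions.

import torch
import torch.nn.functional as F

def perceptual_consistency_loss(feat_t, feat_t1, logits_t, logits_t1, tau=0.9):
    # feat_t, feat_t1:     (B, C, H, W) feature maps of consecutive frames,
    #                      used here as a perceptual embedding (assumption).
    # logits_t, logits_t1: (B, K, H, W) segmentation logits for the same frames.
    # tau:                 similarity threshold selecting highly correlated pixels.

    # Perceptual similarity between co-located pixels of the two frames.
    sim = F.cosine_similarity(feat_t, feat_t1, dim=1)             # (B, H, W)
    mask = (sim > tau).float()                                     # keep confident matches only

    # Symmetric KL divergence between the two frames' class distributions.
    log_p_t = F.log_softmax(logits_t, dim=1)
    log_p_t1 = F.log_softmax(logits_t1, dim=1)
    kl = 0.5 * (
        F.kl_div(log_p_t, log_p_t1.exp(), reduction="none")
        + F.kl_div(log_p_t1, log_p_t.exp(), reduction="none")
    ).sum(dim=1)                                                   # (B, H, W)

    # Average the penalty over matched pixels only.
    return (kl * mask).sum() / mask.sum().clamp(min=1.0)

In training, such a term would be added to the usual segmentation and adversarial losses for both source and target clips; how the paper actually matches temporally associated pixels beyond co-located positions is not reproduced here, so the sketch should be read only as an illustration of a consistency objective gated by perceptual similarity.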
URI
http://hdl.handle.net/20.500.11750/57336
DOI
10.1016/j.neunet.2024.106505
Publisher
Elsevier
Related Researcher
Lee, HyunKi (이현기), Division of Intelligent Robotics