Full metadata record

DC Field  Value
dc.contributor.author  Wang, Sungjun
dc.contributor.author  Seo, Junghyun
dc.contributor.author  Jeon, Hyeonjae
dc.contributor.author  Lim, Sungjin
dc.contributor.author  Park, Sang Hyun
dc.contributor.author  Lim, Yongseob
dc.date.accessioned  2024-02-02T11:40:15Z
dc.date.available  2024-02-02T11:40:15Z
dc.date.created  2023-10-04
dc.date.issued  2023-10
dc.identifier.issn  2377-3766
dc.identifier.uri  http://hdl.handle.net/20.500.11750/47735
dc.description.abstract  The emergence of convolutional neural networks (CNNs) has led to significant advancements in various computer vision tasks. Among them, stereo matching is one of the most popular research areas, as it enables the reconstruction of 3D information that is difficult to obtain with only a monocular camera. However, CNNs have their limitations, particularly their susceptibility to domain shift: CNN-based stereo matching networks suffer from performance degradation under domain changes. Moreover, obtaining a large amount of real-world ground-truth data is laborious and costly compared to acquiring synthetic data. In this letter, we propose an end-to-end framework that uses image-to-image translation to overcome the domain gap in stereo matching. Specifically, we introduce a horizontal attentive generation (HAG) module that incorporates epipolar constraints when generating target-stylized left-right views. By employing a horizontal attention mechanism during generation, our method addresses the limitations of a small receptive field, aggregating more information from each view without using the entire feature map. As a result, our network maintains consistency between the two views during image generation, making it more robust across different datasets. © 2023 IEEE.
dc.language  English
dc.publisher  Institute of Electrical and Electronics Engineers
dc.title  Horizontal Attention Based Generation Module for Unsupervised Domain Adaptive Stereo Matching
dc.type  Article
dc.identifier.doi  10.1109/LRA.2023.3313009
dc.identifier.scopusid  2-s2.0-85171532861
dc.identifier.bibliographicCitation  IEEE Robotics and Automation Letters, v.8, no.10, pp. 6779-6786
dc.description.isOpenAccess  FALSE
dc.subject.keywordAuthor  Deep learning for visual perception
dc.subject.keywordAuthor  Computer vision for automation
dc.citation.endPage  6786
dc.citation.number  10
dc.citation.startPage  6779
dc.citation.title  IEEE Robotics and Automation Letters
dc.citation.volume  8
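
The abstract above describes a horizontal attention mechanism that aggregates information along image rows during view generation, motivated by the epipolar constraint of rectified stereo pairs: corresponding pixels lie on the same row in the left and right views. The paper's own implementation is not reproduced in this record; the sketch below is only an illustrative PyTorch rendering of row-restricted cross-view attention under that assumption, and the class name, shapes, and hyperparameters are all placeholders rather than the authors' HAG module.

```python
# Illustrative sketch only (not the authors' released code): minimal
# row-wise ("horizontal") cross-view attention. For a rectified stereo
# pair, corresponding pixels share an image row, so restricting attention
# to rows lets each left-view feature aggregate right-view context along
# its epipolar line without attending over the entire feature map.
import torch
import torch.nn as nn


class HorizontalAttention(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
        # left, right: (B, C, H, W) feature maps of the rectified stereo pair.
        b, c, h, w = left.shape
        # Fold the row axis H into the batch axis so attention runs
        # independently per row: queries from the left view, keys/values
        # from the same row of the right view.
        q = left.permute(0, 2, 3, 1).reshape(b * h, w, c)
        kv = right.permute(0, 2, 3, 1).reshape(b * h, w, c)
        attended, _ = self.attn(q, kv, kv)           # row-restricted attention
        out = self.norm(q + attended)                # residual + layer norm
        return out.reshape(b, h, w, c).permute(0, 3, 1, 2)  # back to (B, C, H, W)


if __name__ == "__main__":
    feat_l = torch.randn(2, 64, 32, 128)  # toy left-view features
    feat_r = torch.randn(2, 64, 32, 128)  # toy right-view features
    print(HorizontalAttention(64)(feat_l, feat_r).shape)  # torch.Size([2, 64, 32, 128])
```

Restricting attention to a single row still covers the full disparity range along that row while avoiding attention over the whole feature map, which is the trade-off the abstract attributes to the horizontal attention design.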
