
VolumeFusion: Deep Depth Fusion for 3D Scene Reconstruction

Metadata

dc.contributor.author: Choe, Jaesung
dc.contributor.author: Im, Sunghoon
dc.contributor.author: Rameau, Francois
dc.contributor.author: Kang, Minjun
dc.contributor.author: Kweon, In So
dc.date.accessioned: 2023-12-26T18:43:18Z
dc.date.available: 2023-12-26T18:43:18Z
dc.date.created: 2022-12-30
dc.date.issued: 2021-10-13
dc.identifier.isbn: 9781665428125
dc.identifier.issn: 2380-7504
dc.identifier.uri: http://hdl.handle.net/20.500.11750/46896
dc.description.abstract: To reconstruct a 3D scene from a set of calibrated views, traditional multi-view stereo techniques rely on two distinct stages: local depth maps computation and global depth maps fusion. Recent studies concentrate on deep neural architectures for depth estimation by using conventional depth fusion method or direct 3D reconstruction network by regressing Truncated Signed Distance Function (TSDF). In this paper, we advocate that replicating the traditional two stages framework with deep neural networks improves both the interpretability and the accuracy of the results. As mentioned, our network operates in two steps: 1) the local computation of the local depth maps with a deep MVS technique, and, 2) the depth maps and images' features fusion to build a single TSDF volume. In order to improve the matching performance between images acquired from very different viewpoints (e.g., large-baseline and rotations), we introduce a rotation-invariant 3D convolution kernel called PosedConv. The effectiveness of the proposed architecture is underlined via a large series of experiments conducted on the ScanNet dataset where our approach compares favorably against both traditional and deep learning techniques. © 2021 IEEE
dc.language: English
dc.publisher: IEEE Computer Society and the Computer Vision Foundation (CVF)
dc.relation.ispartof: Proceedings of the IEEE International Conference on Computer Vision
dc.title: VolumeFusion: Deep Depth Fusion for 3D Scene Reconstruction
dc.type: Conference Paper
dc.identifier.doi: 10.1109/ICCV48922.2021.01578
dc.identifier.wosid: 000798743206025
dc.identifier.scopusid: 2-s2.0-85121119056
dc.identifier.bibliographicCitation: Choe, Jaesung. (2021-10-13). VolumeFusion: Deep Depth Fusion for 3D Scene Reconstruction. IEEE International Conference on Computer Vision, 16066–16075. doi: 10.1109/ICCV48922.2021.01578
dc.identifier.url: https://iccv2021.thecvf.com/presentation-schedule
dc.citation.conferenceDate: 2021-10-11
dc.citation.conferencePlace: CA
dc.citation.conferencePlace: Montreal
dc.citation.endPage: 16075
dc.citation.startPage: 16066
dc.citation.title: IEEE International Conference on Computer Vision
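
The abstract above describes a two-stage pipeline: per-view depth estimation with a deep MVS network, followed by fusion of the predicted depth maps and image features into a single TSDF volume. As a rough illustration of what the second stage computes, the sketch below performs classical (non-learned) TSDF fusion of depth maps in NumPy; the function name, shapes, and all parameters are assumptions chosen for illustration, not the authors' implementation.

import numpy as np

def fuse_depth_maps(depth_maps, intrinsics, poses, vol_origin, voxel_size,
                    vol_dims=(64, 64, 64), trunc=0.08):
    """Integrate per-view depth maps into a truncated signed distance volume.

    depth_maps: list of (H, W) float arrays, metric depth, 0 where invalid.
    intrinsics: (3, 3) camera matrix K (assumed shared across views).
    poses:      list of (4, 4) camera-to-world matrices.
    vol_origin: (3,) world coordinates of the volume's minimum corner.
    """
    dx, dy, dz = vol_dims
    tsdf = np.ones(vol_dims, dtype=np.float32)      # truncated SDF, init to +1 (free space)
    weight = np.zeros(vol_dims, dtype=np.float32)   # per-voxel integration weight

    # World coordinates of every voxel centre, flattened in C order to match tsdf.reshape(-1).
    ii, jj, kk = np.meshgrid(np.arange(dx), np.arange(dy), np.arange(dz), indexing="ij")
    pts_w = np.stack([ii, jj, kk], -1).reshape(-1, 3) * voxel_size + vol_origin

    for depth, pose in zip(depth_maps, poses):
        h, w = depth.shape
        # Transform voxel centres into the camera frame.
        world_to_cam = np.linalg.inv(pose)
        pts_c = pts_w @ world_to_cam[:3, :3].T + world_to_cam[:3, 3]
        z = pts_c[:, 2]
        # Project into the image plane.
        uv = pts_c @ intrinsics.T
        u = np.round(uv[:, 0] / np.maximum(z, 1e-6)).astype(int)
        v = np.round(uv[:, 1] / np.maximum(z, 1e-6)).astype(int)
        valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        d = np.zeros_like(z)
        d[valid] = depth[v[valid], u[valid]]
        valid &= d > 0
        # Signed distance along the ray, truncated to [-1, 1] in units of `trunc`.
        sdf = np.clip((d - z) / trunc, -1.0, 1.0)
        upd = valid & (sdf > -1.0)  # skip voxels far behind the observed surface
        idx = np.where(upd)[0]
        flat_tsdf = tsdf.reshape(-1)
        flat_w = weight.reshape(-1)
        # Running weighted average with one observation weight per view.
        flat_tsdf[idx] = (flat_tsdf[idx] * flat_w[idx] + sdf[idx]) / (flat_w[idx] + 1.0)
        flat_w[idx] += 1.0

    return tsdf, weight

A surface mesh could then be extracted from the fused volume with marching cubes. VolumeFusion replaces this hand-crafted running average with a learned fusion network operating on depth and PosedConv image features, which this sketch does not attempt to reproduce.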

File Downloads

  • There are no files associated with this item.

Related Researcher

Im, Sunghoon (임성훈)
Department of Electrical Engineering and Computer Science
