
Deep Depth from Uncalibrated Small Motion Clip

Title
Deep Depth from Uncalibrated Small Motion Clip
Authors
Im, Sunghoon; Ha, Hyowon; Jeon, Hae-Gon; Lin, Stephen; Kweon, In So
DGIST Authors
Im, Sunghoon; Ha, Hyowon; Jeon, Hae-Gon; Lin, Stephen; Kweon, In So
Issue Date
2021-04
Citation
IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(4), 1225-1238
Type
Article
Article Type
Article
Author Keywords
Cameras; Bundle adjustment; Geometry; Image reconstruction; Estimation; Calibration; 3D reconstruction; geometry; deep learning; structure from motion; bundle adjustment; plane sweeping algorithm
Keywords
Geometry; Image reconstruction; Learning algorithms; Neural networks; Object recognition; Stereo image processing; 3D reconstruction; Bundle adjustments; Camera pose estimation; Convolutional neural network; Depth reconstruction; Learning frameworks; Structure from motion; Deep learning; Plane sweeping; Calibration; Cameras; Estimation
ISSN
0162-8828
Abstract
We propose a novel approach to infer a high-quality depth map from a set of images with small viewpoint variations. In general, techniques for depth estimation from small motion consist of camera pose estimation and dense reconstruction. In contrast to prior approaches that recover scene geometry and camera motions using pre-calibrated cameras, we introduce a self-calibrating bundle adjustment method tailored for small motion which enables computation of camera poses without the need for camera calibration. For dense depth reconstruction, we present a convolutional neural network called DPSNet (Deep Plane Sweep Network) whose design is inspired by best practices of traditional geometry-based approaches. Rather than directly estimating depth or optical flow correspondence from image pairs as done in many previous deep learning methods, DPSNet takes a plane sweep approach that involves building a cost volume from deep features using the plane sweep algorithm, regularizing the cost volume, and regressing the depth map from the cost volume. The cost volume is constructed using a differentiable warping process that allows for end-to-end training of the network. Through the effective incorporation of conventional multiview stereo concepts within a deep learning framework, the proposed method achieves state-of-the-art results on a variety of challenging datasets.
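The plane sweep construction the abstract refers to can be sketched in NumPy. This is an illustrative sketch of the classical algorithm only, not DPSNet itself: it uses raw pixel intensities with an absolute-difference cost and nearest-neighbour sampling, whereas DPSNet builds the volume from learned deep features with differentiable warping. All function and variable names here are our own. For each fronto-parallel depth hypothesis d, the source view is warped to the reference view through the plane-induced homography H = K (R + t nᵀ / d) K⁻¹ and a per-pixel matching cost is stacked into the volume.

```python
import numpy as np

def plane_sweep_cost_volume(ref, src, K, R, t, depths):
    """Build a plane-sweep cost volume (illustrative sketch, not DPSNet).

    ref, src : (H, W) grayscale images (reference and source views)
    K        : (3, 3) camera intrinsics
    R, t     : rotation and translation from reference to source camera
    depths   : iterable of fronto-parallel depth hypotheses
    Returns a (D, H, W) volume of absolute-difference photometric costs.
    """
    h, w = ref.shape
    Kinv = np.linalg.inv(K)
    n = np.array([0.0, 0.0, 1.0])          # fronto-parallel plane normal
    ys, xs = np.mgrid[0:h, 0:w]
    # Homogeneous pixel coordinates of the reference view, shape (3, H*W)
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T

    volume = np.empty((len(depths), h, w))
    for i, d in enumerate(depths):
        # Plane-induced homography H = K (R + t n^T / d) K^-1
        Hmat = K @ (R + np.outer(t, n) / d) @ Kinv
        warped = Hmat @ pix
        u = warped[0] / warped[2]
        v = warped[1] / warped[2]
        # Nearest-neighbour sampling with border clamping
        ui = np.clip(np.round(u).astype(int), 0, w - 1)
        vi = np.clip(np.round(v).astype(int), 0, h - 1)
        sampled = src[vi, ui].reshape(h, w)
        volume[i] = np.abs(ref - sampled)  # photometric matching cost
    return volume
```

A winner-take-all depth map then follows as `depths[volume.argmin(axis=0)]`; DPSNet instead regularizes the volume with a CNN and regresses depth from it.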
URI
http://hdl.handle.net/20.500.11750/10972
DOI
10.1109/TPAMI.2019.2946806
Publisher
Institute of Electrical and Electronics Engineers
Related Researcher
  • Author: Im, Sunghoon (Computer Vision Lab.)
  • Research Interests: Computer Vision; Deep Learning; Robot Vision
Files:
There are no files associated with this item.
Collection:
Department of Information and Communication Engineering > Computer Vision Lab. > 1. Journal Articles



