
A Large-Scale Virtual Dataset and Egocentric Localization for Disaster Responses

Title
A Large-Scale Virtual Dataset and Egocentric Localization for Disaster Responses
Author(s)
Jeon, Hae-Gon; Im, Sunghoon; Lee, Byeong-Uk; Rameau, François; Choi, Dong-Geol; Oh, Jean; Kweon, In So; Hebert, Martial
Issued Date
2023-06
Citation
IEEE Transactions on Pattern Analysis and Machine Intelligence, v.45, no.6, pp.6766 - 6782
Type
Article
Author Keywords
egocentric localization; visual odometry; camera relocalization; large-scale dataset; disaster scenarios
Keywords
Cameras; Computer vision; Convolutional neural networks; Disasters; Large dataset; Optical flows; Semantics; Textures; Disaster response; Disaster scenario; Disaster situations; Egocentric localization; Ground truth data; High resolution stereo; State-of-the-art methods; Visual observations; Stereo image processing
ISSN
0162-8828
Abstract
With the increasing social demand for disaster response, visual observation methods for rescue and safety have become increasingly important. However, because of the shortage of datasets for disaster scenarios, there has been little progress in computer vision and robotics in this field. With this in mind, we present the first large-scale synthetic dataset of egocentric viewpoints for disaster scenarios. We simulate pre- and post-disaster cases with drastic changes in appearance, such as buildings on fire and earthquakes. The dataset consists of more than 300K high-resolution stereo image pairs, all annotated with ground-truth data for semantic labels, depth in metric scale, optical flow with sub-pixel precision, and surface normals, as well as the corresponding camera poses. To create realistic disaster scenes, we manually augment the effects with 3D models using physically-based graphics tools. We train various state-of-the-art methods on our dataset, evaluate how well they recognize disaster situations, and show that they produce reliable results on virtual scenes as well as real-world images. We also present a convolutional neural network-based egocentric localization method that is robust to drastic appearance changes, such as texture changes in a fire and layout changes from a collapse. To address these key challenges, we propose a new model that learns a shape-based representation by training on stylized images, and incorporates the dominant planes of query images as approximate scene coordinates. We evaluate the proposed method on various scenes, including a simulated disaster dataset, to demonstrate its effectiveness when confronted with significant changes in scene layout. Experimental results show that our method provides reliable camera pose predictions despite vastly changed conditions.
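
The per-frame annotations listed in the abstract (stereo pair, semantic label, metric depth, sub-pixel optical flow, surface normal, and camera pose) suggest a record structure along the following lines. This is a minimal illustrative sketch in Python; the field names, array shapes, and helper function are assumptions for illustration, not the published dataset format.

```python
from dataclasses import dataclass
import numpy as np


@dataclass
class FrameAnnotation:
    """One annotated frame of a stereo sequence (hypothetical layout).

    Shapes assume an H x W image; the actual resolution and storage
    format of the published dataset may differ.
    """
    left: np.ndarray      # (H, W, 3) uint8, left RGB image
    right: np.ndarray     # (H, W, 3) uint8, right RGB image
    semantic: np.ndarray  # (H, W)    int32, per-pixel class label
    depth: np.ndarray     # (H, W)    float32, metric depth in meters
    flow: np.ndarray      # (H, W, 2) float32, sub-pixel optical flow to the next frame
    normal: np.ndarray    # (H, W, 3) float32, unit surface normals
    pose: np.ndarray      # (4, 4)    float32, camera-to-world transform


def make_dummy_frame(h: int = 480, w: int = 640) -> FrameAnnotation:
    """Build a synthetic frame for exercising data-loading code."""
    return FrameAnnotation(
        left=np.zeros((h, w, 3), np.uint8),
        right=np.zeros((h, w, 3), np.uint8),
        semantic=np.zeros((h, w), np.int32),
        depth=np.ones((h, w), np.float32),
        flow=np.zeros((h, w, 2), np.float32),
        normal=np.tile(np.array([0.0, 0.0, 1.0], np.float32), (h, w, 1)),
        pose=np.eye(4, dtype=np.float32),
    )


if __name__ == "__main__":
    frame = make_dummy_frame()
    print(frame.pose.shape, frame.depth.dtype)  # (4, 4) float32
```

A dummy constructor such as `make_dummy_frame` makes it easy to test downstream loaders and localization code before the full dataset is available.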
URI
http://hdl.handle.net/20.500.11750/13992
DOI
10.1109/TPAMI.2021.3094531
Publisher
IEEE Computer Society
Related Researcher
  • 임성훈 Im, Sunghoon
  • Research Interests Computer Vision; Deep Learning; Robot Vision
Files in This Item:

There are no files associated with this item.

Appears in Collections:
Department of Electrical Engineering and Computer Science > Computer Vision Lab. > 1. Journal Articles

