Division of Mobility Technology
1. Journal Articles
Title
LiDAR and Camera Fusion Approach for Object Distance Estimation in Self-Driving Vehicles
DGIST Authors
Kumar, Gurumadaiah Ajay; Lee, Jin Hee; Hwang, Jongrak; Park, Jaehyeong; Yoon, Sung Hoon; Kwon, Soon
Issued Date
2020-02
Citation
Kumar, Gurumadaiah Ajay. (2020-02). LiDAR and Camera Fusion Approach for Object Distance Estimation in Self-Driving Vehicles. doi: 10.3390/sym12020324
Type
Article
Article Type
Article
Author Keywords
computational geometry transformation; projection; sensor fusion; self-driving vehicle; sensor calibration; depth sensing; point cloud to image mapping; autonomous vehicle
Keywords
EXTRINSIC CALIBRATION; 3D LIDAR; REGISTRATION; SYSTEM
ISSN
2073-8994
Abstract
The fusion of light detection and ranging (LiDAR) and camera data in real time is known to be a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. Especially in the case of autonomous vehicles, the efficient fusion of data from these two types of sensors is important for estimating the depth of objects as well as for detecting objects at short and long distances. As both sensors are capable of capturing different attributes of the environment simultaneously, integrating those attributes with an efficient fusion approach greatly benefits reliable and consistent perception of the environment. This paper presents a method to estimate the distance (depth) between a self-driving car and other vehicles, objects, and signboards on its path using an accurate fusion approach. Based on geometrical transformation and projection, low-level sensor fusion was performed between a camera and LiDAR using a 3D marker. Further, the fusion information was utilized to estimate the distance of objects detected by the RefineDet detector. Finally, the accuracy and performance of the sensor fusion and distance estimation approach were evaluated through quantitative and qualitative analysis of real-road and simulated scenarios. The proposed low-level sensor fusion, based on computational geometric transformation and projection for object distance estimation, thus proves to be a promising solution for enabling reliable and consistent environment perception for autonomous vehicles. © 2020 by the authors.
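As a rough illustration of the low-level fusion the abstract describes, the following minimal Python/NumPy sketch projects LiDAR points into a camera image with a pinhole model and reads off an object's distance from the points that fall inside a detector bounding box. The function names, the calibration inputs K, R, t, and the median-depth heuristic are illustrative assumptions, not the paper's implementation; in the paper the extrinsics come from 3D-marker calibration and the boxes from RefineDet.

```python
import numpy as np

def project_lidar_to_image(points_lidar, K, R, t):
    """Project 3D LiDAR points into the camera image plane.

    points_lidar : (N, 3) points in the LiDAR frame.
    K            : (3, 3) camera intrinsic matrix (assumed known).
    R, t         : (3, 3) rotation and (3,) translation mapping the
                   LiDAR frame into the camera frame (assumed known,
                   e.g. from an extrinsic calibration step).
    Returns (M, 2) pixel coordinates and (M,) camera-frame depths.
    """
    # Transform points from the LiDAR frame into the camera frame.
    points_cam = points_lidar @ R.T + t
    # Keep only points in front of the camera (positive depth).
    points_cam = points_cam[points_cam[:, 2] > 0.0]
    # Perspective projection: homogeneous pixel = K @ [X, Y, Z]^T,
    # then divide by depth Z.
    uvw = points_cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    return uv, points_cam[:, 2]

def object_distance(uv, depths, box):
    """Estimate an object's distance as the median depth of the
    projected points inside a detector box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    mask = (uv[:, 0] >= x0) & (uv[:, 0] <= x1) & \
           (uv[:, 1] >= y0) & (uv[:, 1] <= y1)
    if not mask.any():
        return None  # no LiDAR returns landed inside the box
    return float(np.median(depths[mask]))
```

Taking the median of the in-box depths is one common way to suppress LiDAR returns from the background that leak into a loose bounding box; the paper's exact depth-assignment rule may differ.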
URI
http://hdl.handle.net/20.500.11750/11663
DOI
10.3390/sym12020324
Publisher
MDPI AG