Multi-Task Deep Learning Design and Training Tool for Unified Visual Driving Scene Understanding

Authors
Won, Woong-Jae; Kim, Tae Hun; Kwon, Soon
DGIST Authors
Kwon, Soon
Issue Date
2019-10-16
Citation
19th International Conference on Control, Automation and Systems, ICCAS 2019, 356-360
Type
Conference
ISBN
9788993215182
ISSN
1598-7833
Abstract
Visual driving scene perception systems have gained popularity in the autonomous driving research community following the advent of deep learning technology. Moreover, multi-task deep learning models have become an important tool for unifying the tasks performed in a driving scene perception system, such as scene classification, object detection, segmentation, and depth estimation. In this paper, we introduce our multi-task deep learning model design and training tool for building a unified road scene perception model. We also propose a sequential auxiliary multi-task training method that can train a multi-task model using a different dataset for each task. Finally, we present a unified road segmentation and depth estimation model, based on multi-task deep learning, to verify the efficiency and feasibility of the developed tool. Experimental results on the KITTI dataset show that the tool-based unified road segmentation and depth estimation model can successfully segment the driving road and estimate its depth. © 2019 Institute of Control, Robotics and Systems - ICROS.
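The abstract's "sequential auxiliary multi-task training" idea — training one multi-task model even though each task has its own, disjoint dataset — can be illustrated with a simple per-step task schedule. The sketch below is an assumption for illustration only (the paper's actual training procedure, loss functions, and network architecture are not given in this record); it shows the scheduling logic by which training steps cycle through tasks, so the shared trunk sees every task's data while only the active task's head would be updated on each step.

```python
# Hypothetical sketch of a sequential multi-task training schedule: each task
# keeps its own dataset, and steps round-robin over tasks, so no single sample
# needs labels for all tasks. Names here are illustrative, not from the paper.
from itertools import cycle

def sequential_multitask_schedule(task_datasets, num_steps):
    """Yield (task_name, sample) pairs, cycling tasks in a fixed order.

    task_datasets: dict mapping task name -> list of samples; each task may
    use a completely different dataset (e.g. road masks vs. depth maps).
    """
    iters = {name: cycle(data) for name, data in task_datasets.items()}
    order = cycle(task_datasets)  # fixed round-robin over the task names
    for _ in range(num_steps):
        task = next(order)
        # In a real trainer, only this task's head (plus the shared trunk)
        # would receive gradient updates on this step.
        yield task, next(iters[task])

# Toy usage: road segmentation and depth estimation with disjoint datasets.
schedule = list(sequential_multitask_schedule(
    {"road_seg": ["seg_img0", "seg_img1"],
     "depth":    ["depth_img0"]},
    num_steps=4))
```

Cycling per step (rather than finishing one task's dataset before starting the next) keeps the shared trunk from drifting toward whichever task was trained last, which is a common failure mode when combining heterogeneous datasets.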
URI
http://hdl.handle.net/20.500.11750/11496
DOI
10.23919/ICCAS47443.2019.8971526
Publisher
IEEE Computer Society
Related Researcher
  • Author Kwon, Soon  
  • Research Interests computer vision; deep learning; autonomous driving; parallel processing; vision system on chip
Files:
There are no files associated with this item.
Collection:
Division of Automotive Technology > 2. Conference Papers

