Multi-Task Deep Learning Design and Training Tool for Unified Visual Driving Scene Understanding
- Authors: Won, Woong-Jae; Kim, Tae Hun; Kwon, Soon
- DGIST Authors: Kwon, Soon
- Issue Date: 2019
- Citation: 19th International Conference on Control, Automation and Systems, ICCAS 2019, 356-360
- Visual driving scene perception systems have gained popularity in the autonomous driving research community following the advent of deep learning. Moreover, multi-task deep learning models have become an important tool for unifying the tasks performed in a driving scene perception system, such as scene classification, object detection, segmentation, and depth estimation. In this paper, we introduce our multi-task deep-learning model design and training tool for unified road scene perception models. We also propose a sequential auxiliary multi-task training method that can train a multi-task model using a different dataset for each task. Finally, we present a unified road segmentation and depth estimation model, based on multi-task deep learning, to verify the efficiency and feasibility of our tool. Experimental results on the KITTI dataset show that our tool-based unified road segmentation and depth estimation model can successfully segment the driving road and estimate its depth. © 2019 Institute of Control, Robotics and Systems - ICROS.
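The core idea in the abstract — one shared backbone with task-specific heads, trained one task at a time on that task's own dataset — can be illustrated with a minimal sketch. This is not the paper's tool or architecture; it is a hypothetical toy (a single shared linear layer, two regression heads standing in for road segmentation and depth, synthetic data) showing how a sequential schedule updates the shared parameters plus only the active task's head on each pass:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoder": one shared linear layer feeding two task-specific heads.
D_IN, D_HID = 4, 8
W_shared = rng.normal(0, 0.1, (D_IN, D_HID))
heads = {"road_seg": rng.normal(0, 0.1, (D_HID, 1)),
         "depth":    rng.normal(0, 0.1, (D_HID, 1))}

# A separate synthetic dataset per task, mirroring the idea of training
# each task on its own data. Targets come from per-task linear maps.
datasets = {}
for task in heads:
    x = rng.normal(size=(64, D_IN))
    true_map = rng.normal(size=(D_IN, 1))
    datasets[task] = (x, x @ true_map)

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

lr = 0.05
losses = {t: [] for t in datasets}
for epoch in range(200):
    # Sequential schedule: visit one task (and its dataset) at a time.
    for task, (x, y) in datasets.items():
        h = x @ W_shared          # shared features
        pred = h @ heads[task]    # task-specific head
        err = pred - y
        losses[task].append(mse(pred, y))
        # Gradient step on the shared layer and the active head only;
        # the other task's head is untouched on this pass.
        g_head = h.T @ err / len(x)
        g_shared = x.T @ (err @ heads[task].T) / len(x)
        heads[task] -= lr * g_head
        W_shared -= lr * g_shared

for t in losses:
    print(t, "loss:", round(losses[t][0], 3), "->", round(losses[t][-1], 3))
```

Both per-task losses decrease even though each update sees only one task's data, because the shared layer accumulates gradients from every pass while each head specializes; this is the basic mechanism a sequential auxiliary multi-task scheme relies on.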
- IEEE Computer Society
- Keywords: computer vision; deep learning; autonomous driving; parallel processing; vision system on chip
- Division of Automotive Technology > 2. Conference Papers