As deep learning-based perception techniques continue to advance, technologies such as obstacle detection, semantic segmentation, and depth estimation are being applied to autonomous vehicles. However, while most studies perform well in daytime conditions, performance frequently degrades in nighttime environments. Addressing this requires nighttime datasets, but acquiring such data directly is time-consuming and difficult. Other studies have therefore used image-to-image translation models to generate nighttime data. While these models can generate well-formed nighttime images, they offer no control over the brightness of the result and can suffer from noise-induced artifacts. In this study, a Y-Control Loss and a self-attention module are added to the existing CycleGAN model to address these problems.
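As a rough illustration of what a brightness-controlling loss might look like, the sketch below penalizes the mean luminance (Y channel) of a generated image for deviating from a target nighttime brightness. The BT.601 luma weights, the L1-style penalty, and the `target_y` value are all assumptions for illustration; the paper's exact Y-Control Loss formulation may differ.

```python
def mean_luminance(pixels):
    """Mean Y over an iterable of (r, g, b) tuples in [0, 1],
    using the ITU-R BT.601 luma weights."""
    ys = [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels]
    return sum(ys) / len(ys)

def y_control_loss(pixels, target_y=0.2):
    """L1 distance between the image's mean luminance and a target
    brightness for nighttime scenes (target_y is a hypothetical value)."""
    return abs(mean_luminance(pixels) - target_y)

# Example: a mid-gray image is brighter than the assumed night target.
gray = [(0.5, 0.5, 0.5)] * 4
loss = y_control_loss(gray)  # |0.5 - 0.2| = 0.3
```

In a GAN training loop this term would be added to the generator's objective with a weighting coefficient, pulling translated images toward the desired darkness while the adversarial loss preserves realism.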
Research Interests
Autonomous Vehicle and Aerial Robotic Systems and Control; Theory and Applications for Mechatronic Systems and Control; Autonomous Driving and Flight Systems Control; Robotics and Intelligent Control