<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>Repository Collection: null</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/15720</link>
    <description />
    <pubDate>Sat, 04 Apr 2026 13:36:35 GMT</pubDate>
    <dc:date>2026-04-04T13:36:35Z</dc:date>
    <item>
      <title>RainSD: Rain style diversification module for image synthesis enhancement using feature-level style distribution</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/58135</link>
      <description>Title: RainSD: Rain style diversification module for image synthesis enhancement using feature-level style distribution
Author(s): Jeon, Hyeonjae; Seo, Junghyun; Kim, Taesoo; Son, Sungho; Lee, Jungki; Choi, Gyeungho; Lim, Yongseob
Abstract: Autonomous driving technology now targets Level 4 and beyond, but researchers still face limitations in developing reliable driving algorithms under diverse challenging conditions. To promote the wide adoption of autonomous vehicles, it is important to address the safety issues of this technology. Among various safety concerns, sensor blockage caused by severe weather can be one of the most frequent threats to multi-task learning-based perception algorithms during autonomous driving. To handle this problem, generating proper datasets is becoming increasingly important. In this paper, a synthetic road dataset with sensor blockage, generated from the real road dataset BDD100K, is presented in the BDD100K annotation format. Rain streaks for each frame were produced using an experimentally established equation and translated with an image-to-image translation network based on style transfer. Using this dataset, the degradation of diverse multi-task networks for autonomous driving, such as lane detection, driving area segmentation, and traffic object detection, has been thoroughly evaluated and analyzed. The tendency of performance degradation of deep neural network-based perception systems for autonomous vehicles has been analyzed in depth. Finally, we discuss the limitations and future directions of deep neural network-based perception algorithms and autonomous driving dataset generation based on image-to-image translation. © 2025</description>
      <pubDate>Mon, 31 Mar 2025 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/58135</guid>
      <dc:date>2025-03-31T15:00:00Z</dc:date>
    </item>
    <item>
      <title>Performance Evaluation of Deep Neural Network-Based Lane Detection Under Extreme Heavy Rain Using a CARLA Simulator-Based Synthetic Evaluation Dataset</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/57801</link>
      <description>Title: Performance Evaluation of Deep Neural Network-Based Lane Detection Under Extreme Heavy Rain Using a CARLA Simulator-Based Synthetic Evaluation Dataset
Author(s): 전현재; 박성정; 손성호; 이정기; 안진웅; 최경호; 임용섭
Abstract: Autonomous driving technology now targets Level 4 and beyond, but researchers still face limitations in developing reliable driving algorithms under diverse challenging conditions. To promote the wide adoption of autonomous vehicles, it is important to properly address the safety issues of this technology. Among various safety concerns, sensor blockage caused by severe weather can be one of the most frequent threats to lane detection algorithms during autonomous driving. To handle this problem, generating proper datasets is becoming increasingly important. In this paper, a synthetic lane dataset with sensor blockage is presented in a lane detection evaluation format. Rain streaks for each frame were produced by an experimentally established equation. Using this dataset, the degradation of diverse lane detection methods has been verified. The trend of performance degradation of deep neural network-based lane detection methods has been analyzed in depth. Finally, the limitations and future directions of the network-based methods are presented.</description>
      <pubDate>Sat, 30 Nov 2024 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/57801</guid>
      <dc:date>2024-11-30T15:00:00Z</dc:date>
    </item>
    <item>
      <title>MPC-Based Exponential Weight Laguerre Function With Non-Singular Terminal SMC for Four-Wheel Independent Drive Electric Vehicles</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/57244</link>
      <description>Title: MPC-Based Exponential Weight Laguerre Function With Non-Singular Terminal SMC for Four-Wheel Independent Drive Electric Vehicles
Author(s): Sadiq, Bilal; Lim, Sungjin; Jin, Yongsik; Choi, Gyeungho; Lim, Yongseob
Abstract: This article describes a complete control method that uses Laguerre exponentially weighted model predictive control (LEMPC) to help four-wheel independent drive electric vehicles maintain stability and follow their paths. The proposed method incorporates an enhanced direct yaw moment control within a robust non-singular terminal sliding mode control framework. We evaluated traditional, Laguerre, and exponentially weighted model predictive control methodologies (TMPC, LMPC, and LEMPC, respectively), comparing their computational load and complexity while maintaining path tracking performance. The exponentially weighted Laguerre model predictive control exhibits improved robustness and reduced computational time and load. The proposed robust non-singular terminal sliding mode control (NTSMC) combined with LEMPC improved control and stability across a wide range of maneuvering situations and levels of uncertainty. The synergistic effect of NTSMC with LEMPC was examined to improve path tracking efficacy and dynamic stability under diverse road conditions and disturbances. The effectiveness of the control strategy for vehicle handling and stability at high speed while maintaining accurate path tracking was validated through simulations conducted in MATLAB/Simulink together with a high-fidelity CarSim co-simulation environment. © IEEE.</description>
      <pubDate>Thu, 31 Oct 2024 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/57244</guid>
      <dc:date>2024-10-31T15:00:00Z</dc:date>
    </item>
    <item>
      <title>Lane Segmentation Data Augmentation for Heavy Rain Sensor Blockage using Realistically Translated Raindrop Images and CARLA Simulator</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/57060</link>
      <description>Title: Lane Segmentation Data Augmentation for Heavy Rain Sensor Blockage using Realistically Translated Raindrop Images and CARLA Simulator
Author(s): Pahk, Jinu; Park, Seongjeong; Shim, Jungseok; Son, Sungho; Lee, Jungki; An, Jinung; Lim, Yongseob; Choi, Gyeungho
Abstract: Lane segmentation and Lane Keeping Assist Systems (LKAS) play a vital role in autonomous driving. While deep learning technology has significantly improved the accuracy of lane segmentation, real-world driving scenarios present various challenges. In particular, heavy rainfall not only obscures the road with sheets of rain and fog but also creates water droplets on the windshield or camera lens that affect lane segmentation performance. There may even be a false-positive problem in which the algorithm incorrectly recognizes a raindrop as a road lane. Collecting heavy rain data is challenging in real-world settings, and manual annotation of such data is expensive. In this research, we propose a realistic raindrop conversion process that employs a contrastive learning-based Generative Adversarial Network (GAN) model to transform raindrops randomly generated using Python libraries. In addition, we utilize the attention mask of the lane segmentation model to guide the placement of raindrops in training images from the translation target domain (real Rainy-Images). By training the ENet-SAD model using the realistically Translated-Raindrop images and lane ground truth automatically extracted from the CARLA Simulator, we observe an improvement in lane segmentation accuracy on Rainy-Images. This method enables training and testing of the perception model while adjusting the number, size, shape, and direction of raindrops, thereby contributing to future research on autonomous driving in adverse weather conditions. © IEEE.</description>
      <pubDate>Fri, 31 May 2024 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/57060</guid>
      <dc:date>2024-05-31T15:00:00Z</dc:date>
    </item>
  </channel>
</rss>

