<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>Repository Collection</title>
  <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/12479" />
  <subtitle />
  <id>https://scholar.dgist.ac.kr/handle/20.500.11750/12479</id>
  <updated>2026-04-05T05:15:23Z</updated>
  <dc:date>2026-04-05T05:15:23Z</dc:date>
  <entry>
    <title>EFRM: A Multimodal EEG–fNIRS Representation-Learning Model for Few-Shot Brain-Signal Classification</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/59261" />
    <author>
      <name>Jung, Euijin</name>
    </author>
    <author>
      <name>An, Jinung</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/59261</id>
    <updated>2025-12-03T07:40:11Z</updated>
    <published>2025-11-30T15:00:00Z</published>
    <summary type="text">Title: EFRM: A Multimodal EEG–fNIRS Representation-Learning Model for Few-Shot Brain-Signal Classification
Author(s): Jung, Euijin; An, Jinung
Abstract: Recent advances in brain signal analysis highlight the need for robust classifiers that can be trained with minimal labeled data. To meet this demand, transfer learning has emerged as a promising strategy: models are pre-trained on large-scale unlabeled data and later adapted with minimal labeled data. However, while most existing transfer learning studies focus primarily on electroencephalography (EEG) signals, their generalization to other brain signal modalities such as functional near-infrared spectroscopy (fNIRS) remains limited. To address this issue, we propose a multimodal representation model compatible with EEG-only, fNIRS-only, and paired EEG–fNIRS datasets. The proposed method consists of two stages: a pre-training stage that learns both modality-specific and shared representations across EEG and fNIRS, followed by a transfer learning stage adapted to specific downstream tasks. By leveraging the shared domain across EEG and fNIRS, our model outperforms single-modality approaches. We constructed pre-training datasets containing approximately 1250 h of brain signal recordings from 918 participants. Unlike previous multimodal approaches that require both EEG and fNIRS data for training, our method enables adaptation to single-modality datasets, enhancing flexibility and practicality. Experimental results demonstrate that our method achieves competitive performance compared with state-of-the-art supervised learning models, even with minimal labeled data. Our method also outperforms previously pre-trained models, with especially significant improvements in fNIRS classification performance.</summary>
    <dc:date>2025-11-30T15:00:00Z</dc:date>
  </entry>
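  <!--
    The abstract above describes a two-stage design: modality-specific encoders plus a
    shared representation learned in pre-training, then adaptation to downstream tasks,
    accepting EEG-only, fNIRS-only, or paired inputs. A minimal PyTorch sketch of that
    structure; all module names, channel counts, and dimensions here are illustrative
    assumptions, not the published architecture.

    import torch
    import torch.nn as nn

    class ModalityEncoder(nn.Module):
        # Modality-specific encoder (hypothetical 1D-CNN backbone).
        def __init__(self, in_ch, dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(in_ch, 64, kernel_size=7, padding=3), nn.GELU(),
                nn.Conv1d(64, dim, kernel_size=7, padding=3), nn.AdaptiveAvgPool1d(1),
            )
        def forward(self, x):               # x: (batch, channels, time)
            return self.net(x).squeeze(-1)  # (batch, dim)

    class EFRMSketch(nn.Module):
        # Modality-specific encoders feeding one shared latent space, so EEG-only,
        # fNIRS-only, and paired batches all map to comparable representations.
        def __init__(self, eeg_ch=32, fnirs_ch=40, dim=128):
            super().__init__()
            self.eeg_enc = ModalityEncoder(eeg_ch, dim)
            self.fnirs_enc = ModalityEncoder(fnirs_ch, dim)
            self.shared = nn.Linear(dim, dim)
        def forward(self, eeg=None, fnirs=None):
            zs = []
            if eeg is not None:
                zs.append(self.shared(self.eeg_enc(eeg)))
            if fnirs is not None:
                zs.append(self.shared(self.fnirs_enc(fnirs)))
            return torch.stack(zs).mean(0)  # fuse whichever modalities are present

    # Pre-training would align paired EEG/fNIRS views in the shared space; for
    # few-shot transfer, a small classifier head is fit on the frozen representation.
    z = EFRMSketch()(eeg=torch.randn(8, 32, 256), fnirs=torch.randn(8, 40, 256))
  -->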
  <entry>
    <title>Performance Evaluation of Deep Neural Network-Based Lane Detection in Extreme Heavy Rain Using a CARLA Simulator-Based Synthetic Evaluation Dataset</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/57801" />
    <author>
      <name>전현재</name>
    </author>
    <author>
      <name>박성정</name>
    </author>
    <author>
      <name>손성호</name>
    </author>
    <author>
      <name>이정기</name>
    </author>
    <author>
      <name>안진웅</name>
    </author>
    <author>
      <name>최경호</name>
    </author>
    <author>
      <name>임용섭</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/57801</id>
    <updated>2025-07-25T02:47:07Z</updated>
    <published>2024-11-30T15:00:00Z</published>
    <summary type="text">Title: Performance Evaluation of Deep Neural Network-Based Lane Detection in Extreme Heavy Rain Using a CARLA Simulator-Based Synthetic Evaluation Dataset
Author(s): 전현재; 박성정; 손성호; 이정기; 안진웅; 최경호; 임용섭
Abstract: Autonomous driving technology now targets Level 4 and beyond, but researchers still face limitations in developing driving algorithms that remain reliable under diverse challenges. For autonomous vehicles to spread widely, the safety issues surrounding this technology must be properly addressed. Among these concerns, sensor blockage caused by severe weather is one of the most frequent threats to lane detection algorithms during autonomous driving. Handling this problem makes the generation of proper evaluation datasets increasingly important. In this paper, a synthetic lane dataset with sensor blockage is presented in a lane detection evaluation format. Rain streaks for each frame were generated from an experimentally established equation. Using this dataset, the degradation of diverse lane detection methods is verified, and the performance-degradation trend of deep neural network-based lane detection methods is analyzed in depth. Finally, the limitations and future directions of the network-based methods are presented.</summary>
    <dc:date>2024-11-30T15:00:00Z</dc:date>
  </entry>
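  <!--
    The abstract above states that rain streaks were generated from an experimentally
    established equation, which is not reproduced in this feed. A minimal sketch of the
    general idea only: composite a streak layer onto a CARLA frame so lane detectors
    can be evaluated under blockage. The noise-plus-motion-blur model and every
    parameter below are assumptions, not the paper's equation.

    import numpy as np
    import cv2

    def add_rain_streaks(frame, density=0.002, length=15, angle_deg=75, alpha=0.6):
        # Sparse random droplets smeared along the fall direction approximate streaks.
        h, w = frame.shape[:2]
        noise = (np.random.rand(h, w) < density).astype(np.float32)
        # Line-shaped motion-blur kernel rotated to the assumed fall angle.
        kernel = np.zeros((length, length), np.float32)
        cv2.line(kernel, (0, length // 2), (length - 1, length // 2), 1.0, 1)
        rot = cv2.getRotationMatrix2D((length / 2, length / 2), angle_deg, 1.0)
        kernel = cv2.warpAffine(kernel, rot, (length, length))
        kernel /= kernel.sum() + 1e-6
        streaks = cv2.filter2D(noise, -1, kernel)
        streaks = np.clip(streaks / (streaks.max() + 1e-6), 0.0, 1.0)
        rain = (streaks[..., None] * 255).astype(np.uint8).repeat(3, axis=2)
        return cv2.addWeighted(frame, 1.0, rain, alpha, 0)

    # Hypothetical usage on one rendered frame ("carla_frame.png" is a placeholder).
    rainy = add_rain_streaks(cv2.imread("carla_frame.png"))
  -->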
  <entry>
    <title>Lane Segmentation Data Augmentation for Heavy Rain Sensor Blockage using Realistically Translated Raindrop Images and CARLA Simulator</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/57060" />
    <author>
      <name>Pahk, Jinu</name>
    </author>
    <author>
      <name>Park, Seongjeong</name>
    </author>
    <author>
      <name>Shim, Jungseok</name>
    </author>
    <author>
      <name>Son, Sungho</name>
    </author>
    <author>
      <name>Lee, Jungki</name>
    </author>
    <author>
      <name>An, Jinung</name>
    </author>
    <author>
      <name>Lim, Yongseob</name>
    </author>
    <author>
      <name>Choi, Gyeungho</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/57060</id>
    <updated>2025-07-25T03:29:52Z</updated>
    <published>2024-05-31T15:00:00Z</published>
    <summary type="text">Title: Lane Segmentation Data Augmentation for Heavy Rain Sensor Blockage using Realistically Translated Raindrop Images and CARLA Simulator
Author(s): Pahk, Jinu; Park, Seongjeong; Shim, Jungseok; Son, Sungho; Lee, Jungki; An, Jinung; Lim, Yongseob; Choi, Gyeungho
Abstract: Lane segmentation and the Lane Keeping Assist System (LKAS) play a vital role in autonomous driving. While deep learning technology has significantly improved the accuracy of lane segmentation, real-world driving scenarios present various challenges. In particular, heavy rainfall not only obscures the road with sheets of rain and fog but also creates water droplets on the windshield or camera lens that degrade lane segmentation performance. There may even be a false positive problem in which the algorithm incorrectly recognizes a raindrop as a road lane. Collecting heavy rain data is challenging in real-world settings, and manual annotation of such data is expensive. In this research, we propose a realistic raindrop conversion process that employs a contrastive learning-based Generative Adversarial Network (GAN) model to transform raindrops randomly generated using Python libraries. In addition, we utilize the attention mask of the lane segmentation model to guide the placement of raindrops in training images from the translation target domain (real Rainy-Images). By training the ENet-SAD model using the realistically Translated-Raindrop images and lane ground truth automatically extracted from the CARLA Simulator, we observe an improvement in lane segmentation accuracy on Rainy-Images. This method enables training and testing of the perception model while adjusting the number, size, shape, and direction of raindrops, thereby contributing to future research on autonomous driving in adverse weather conditions.</summary>
    <dc:date>2024-05-31T15:00:00Z</dc:date>
  </entry>
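  <!--
    The abstract above describes two steps: contrastive-learning GAN translation of
    synthetic raindrops toward realistic appearance, and attention-mask guidance for
    where drops are placed in target-domain images. A minimal NumPy sketch of the
    placement step only; the function name, sampling scheme, and temperature are
    assumptions, and the GAN translation itself is omitted.

    import numpy as np

    def sample_raindrop_centers(attention, n_drops=30, temperature=2.0):
        # Sample drop centers preferentially where the lane segmentation model
        # attends, so occlusions land where they stress the detector most.
        # attention: (h, w) map normalized to [0, 1].
        h, w = attention.shape
        p = attention.flatten() ** temperature
        p /= p.sum()
        idx = np.random.choice(h * w, size=n_drops, replace=False, p=p)
        return np.stack([idx // w, idx % w], axis=1)  # (n_drops, 2) as (row, col)

    centers = sample_raindrop_centers(np.random.rand(128, 256))
  -->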
  <entry>
    <title>Enhancing lane detection with a lightweight collaborative late fusion model</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/56992" />
    <author>
      <name>Jahn, Lennart Lorenz Freimuth</name>
    </author>
    <author>
      <name>Park, Seongjeong</name>
    </author>
    <author>
      <name>Lim, Yongseob</name>
    </author>
    <author>
      <name>An, Jinung</name>
    </author>
    <author>
      <name>Choi, Gyeungho</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/56992</id>
    <updated>2025-07-25T02:44:43Z</updated>
    <published>2024-04-30T15:00:00Z</published>
    <summary type="text">Title: Enhancing lane detection with a lightweight collaborative late fusion model
Author(s): Jahn, Lennart Lorenz Freimuth; Park, Seongjeong; Lim, Yongseob; An, Jinung; Choi, Gyeungho
Abstract: Research in autonomous systems is gaining popularity in both academia and industry. These systems offer comfort, new business opportunities such as self-driving taxis, more efficient resource utilization through car-sharing, and, most importantly, enhanced road safety. Different forms of Vehicle-to-Everything (V2X) communication have been under development for many years to enhance safety. Advances in wireless technologies have enabled more data transmission with lower latency, creating more possibilities for safer driving. Collaborative perception is a critical technique for addressing occlusion and sensor failure in autonomous driving. To enhance safety and efficiency, recent works have focused on sharing extracted features instead of raw data or final outputs, leading to reduced message sizes compared to raw sensor data. Reducing message size is important to enable collaborative perception to coexist with other V2X applications on bandwidth-limited communication devices. To address this issue and significantly reduce the size of messages sent while maintaining high accuracy, we propose our model, LaCPF (Late Collaborative Perception Fusion), which uses deep learning for late fusion. We demonstrate that we can achieve better results while using only half the message size of other methods. Our late fusion framework is also independent of the local perception model, which is essential, as not all vehicles on the road will employ the same methods. LaCPF can therefore be scaled more quickly, as it is model- and sensor-agnostic.</summary>
    <dc:date>2024-04-30T15:00:00Z</dc:date>
  </entry>
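  <!--
    The abstract above contrasts late fusion (sharing final outputs) with sharing raw
    data or intermediate features, and notes the fusion must be agnostic to each
    vehicle's local perception model. A minimal PyTorch sketch of a permutation-invariant
    late-fusion head; the message layout and all dimensions are assumptions, not
    LaCPF's actual design.

    import torch
    import torch.nn as nn

    class LateFusionSketch(nn.Module):
        # Each agent sends only its final detections (small messages), encoded and
        # pooled so the result does not depend on agent order or local model choice.
        def __init__(self, msg_dim=6, hidden=64):
            super().__init__()
            self.encode = nn.Sequential(nn.Linear(msg_dim, hidden), nn.ReLU())
            self.decode = nn.Linear(hidden, msg_dim)
        def forward(self, msgs):                     # (n_agents, n_dets, msg_dim)
            z = self.encode(msgs).max(dim=0).values  # pool over agents
            return self.decode(z)                    # fused detections

    # Hypothetical message layout per detection: (x, y, w, l, yaw, score).
    fused = LateFusionSketch()(torch.randn(3, 10, 6))
  -->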
</feed>