<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/12480">
    <title>Repository Collection</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/12480</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/58975" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/58409" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/58408" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/58407" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-04T12:19:12Z</dc:date>
  </channel>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/58975">
    <title>Trained by demonstration humanoid robot controlled via a BCI system for telepresence</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/58975</link>
    <description>Title: Trained by demonstration humanoid robot controlled via a BCI system for telepresence
Author(s): Saduanov, Batyrkhan; Alizadeh, Tohid; An, Jinung; Abibullaev, Berdakh
Abstract: The onerous life of paralyzed people is a substantial problem for society, and improving their quality of life would be a great achievement. This paper proposes a solution in this regard based on telepresence, where a patient perceives and interacts with the world through the embodiment of a robot controlled by a Brain-Computer Interface (BCI) system. The proposed approach brings together two leading techniques: Programming by Demonstration and BCI. The robot can learn several tasks by observing someone performing them. The end user then issues commands to the robot, using a BCI system, concerning its movement and the tasks to be performed. An experiment is designed and conducted, verifying the applicability of the proposed approach. © 2018 IEEE.</description>
    <dc:date>2018-01-16T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/58409">
    <title>fNIRS Foundation Model for Few-Shot based fNIRS Classification</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/58409</link>
    <description>Title: fNIRS Foundation Model for Few-Shot based fNIRS Classification
Author(s): Jung, Euijin; Lee, Hyunmin; An, Jinung
Abstract: Functional near-infrared spectroscopy (fNIRS) is a non-invasive technique with significant potential for applications in brain-computer interfaces (BCIs) including mental health diagnostics and cognitive state monitoring. However, the reliance on large labeled datasets for high-performing classification methods poses a critical challenge, given the time-consuming and resource-intensive nature of fNIRS data collection. To address this, we propose a novel foundation model for fNIRS data based on a self-supervised masked autoencoder framework. The proposed method enables efficient pre-training on unlabeled data, reducing the dependence on labeled datasets while maintaining robust performance for downstream tasks. Experimental results demonstrate that the proposed model achieves performance comparable to supervised learning approaches while requiring only one-third of the labeled training data. It consistently outperforms state-of-the-art self-supervised models in both linear probing and fine-tuning settings. Moreover, ablation studies show that a larger masking size aligns with the low-frequency nature of fNIRS signals, enabling the model to capture broader patterns and further enhance classification accuracy. These findings validate the proposed method as an effective and scalable solution for fNIRS-based classification tasks. © 2025 IEEE.</description>
    <dc:date>2025-02-23T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/58408">
    <title>Multimodal Classification of Motion Sickness Using EEG, fNIRS, and IMU Signals</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/58408</link>
    <description>Title: Multimodal Classification of Motion Sickness Using EEG, fNIRS, and IMU Signals
Author(s): Lee, Hyunmin; Kim, Taehun; An, Jinung
Abstract: Motion sickness is characterized by nausea, dizziness, and vomiting, often caused by sensory conflict during passive motion. This study addresses the limitations of existing single-modal approaches by using a multimodal classification framework that integrates electroencephalography (EEG), functional near-infrared spectroscopy (fNIRS), and inertial measurement unit (IMU) signals. Data from 12 participants were analyzed using a transformer-based model. The EEG + fNIRS model achieved the highest k-fold cross-validation accuracy (79.51%) and AUC (85.36%) but had limited leave-one-subject-out performance (&lt;60%). Model interpretation identified EEG features, particularly from PO7, as the most critical, with IMU features such as Z-axis acceleration providing complementary information. While the approach demonstrates the potential of multimodal classification, challenges in intersubject generalization require further refinement. © 2025 IEEE.</description>
    <dc:date>2025-02-25T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/58407">
    <title>Brain Signal-Based Motion Sickness Classification in Automobile Passengers</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/58407</link>
    <description>Title: Brain Signal-Based Motion Sickness Classification in Automobile Passengers
Author(s): Kim, Taehun; Lee, Hyunmin; An, Jinung
Abstract: Motion sickness remains a critical challenge for enhancing passenger comfort, particularly in autonomous vehicles, where non-driving activities are a primary benefit. This study investigates the multi-class classification of motion sickness levels using brain signals measured through 8-channel EEG and 1-channel fNIRS during general road driving scenarios, including motion sickness-inducing sections. A total of four deep learning models (CNN, LSTM, EEGNet, and Conformer) were employed, with the Conformer demonstrating superior performance. The results reveal that specific motion sickness levels, including the critical transition phase from mild to severe motion sickness, can be effectively identified using EEG and fNIRS data. EEG analysis highlighted distinct brain activation patterns across motion sickness levels, while fNIRS demonstrated higher classification accuracy due to its sensitivity to changes in cerebral blood flow caused by accelerations and decelerations. The combined use of EEG and fNIRS achieved the highest accuracy of 78.64%, demonstrating the synergistic potential of multi-modal data. © 2025 IEEE.</description>
    <dc:date>2025-02-25T15:00:00Z</dc:date>
  </item>
</rdf:RDF>