EFRM: A Multimodal EEG–fNIRS Representation-learning Model for few-shot brain-signal classification

DC Field Value Language
dc.contributor.author Jung, Euijin -
dc.contributor.author An, Jinung -
dc.date.accessioned 2025-12-01T11:40:09Z -
dc.date.available 2025-12-01T11:40:09Z -
dc.date.created 2025-11-20 -
dc.date.issued 2025-12 -
dc.identifier.issn 0010-4825 -
dc.identifier.uri https://scholar.dgist.ac.kr/handle/20.500.11750/59261 -
dc.description.abstract Recent advances in brain signal analysis highlight the need for robust classifiers that can be trained with minimal labeled data. To meet this demand, transfer learning has emerged as a promising strategy: large-scale unlabeled data are used to pre-train models, which are later adapted with minimal labeled data. However, while most existing transfer learning studies focus primarily on electroencephalography (EEG) signals, their generalization to other brain signal modalities such as functional near-infrared spectroscopy (fNIRS) remains limited. To address this issue, we propose a multimodal representation model compatible with EEG-only, fNIRS-only, and paired EEG–fNIRS datasets. The proposed method consists of two stages: a pre-training stage that learns both modality-specific and shared representations across EEG and fNIRS, followed by a transfer learning stage adapted to specific downstream tasks. By leveraging the shared domain across EEG and fNIRS, our model outperforms single-modality approaches. We constructed pre-training datasets containing approximately 1250 h of brain signal recordings from 918 participants. Unlike previous multimodal approaches that require both EEG and fNIRS data for training, our method enables adaptation to single-modality datasets, enhancing flexibility and practicality. Experimental results demonstrate that our method achieves competitive performance in comparison with state-of-the-art supervised learning models, even with minimal labeled data. Our method also outperforms previously pre-trained models, showing especially significant improvements in fNIRS classification performance. -
dc.language English -
dc.publisher Elsevier -
dc.title EFRM: A Multimodal EEG–fNIRS Representation-learning Model for few-shot brain-signal classification -
dc.type Article -
dc.identifier.doi 10.1016/j.compbiomed.2025.111292 -
dc.identifier.scopusid 2-s2.0-105021246877 -
dc.identifier.bibliographicCitation Computers in Biology and Medicine, v.199 -
dc.description.isOpenAccess FALSE -
dc.subject.keywordAuthor EEG -
dc.subject.keywordAuthor fNIRS -
dc.subject.keywordAuthor Multimodal representation learning -
dc.subject.keywordAuthor Transfer learning -
dc.subject.keywordAuthor Few-shot learning -
dc.citation.title Computers in Biology and Medicine -
dc.citation.volume 199 -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.type.docType Article -
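
The abstract describes a two-stage pipeline: self-supervised pre-training of modality-specific and shared EEG/fNIRS representations, followed by few-shot adaptation to a downstream classification task. The sketch below illustrates how such a pipeline could be wired up; the module names, dimensions, and the masked-reconstruction objective are illustrative assumptions and are not taken from the EFRM paper itself.

```python
# Minimal sketch of a two-stage EEG/fNIRS representation-learning pipeline.
# All module names, sizes, and the masked-reconstruction objective are
# illustrative assumptions; they are not taken from the EFRM paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

D_MODEL = 128  # assumed embedding width

class ModalityEncoder(nn.Module):
    """Modality-specific encoder: raw channels -> token embeddings."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.proj = nn.Conv1d(in_channels, D_MODEL, kernel_size=25, stride=5)

    def forward(self, x):                      # x: (batch, channels, time)
        return self.proj(x).transpose(1, 2)    # (batch, tokens, D_MODEL)

class SharedBackbone(nn.Module):
    """Shared Transformer that tokens from both modalities pass through."""
    def __init__(self, n_layers: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, tokens):
        return self.encoder(tokens)

class PretrainModel(nn.Module):
    """Stage 1: masked-token reconstruction on EEG-only, fNIRS-only,
    or paired recordings, so the backbone learns shared representations."""
    def __init__(self, eeg_channels: int = 32, fnirs_channels: int = 36):
        super().__init__()
        self.encoders = nn.ModuleDict({
            "eeg": ModalityEncoder(eeg_channels),
            "fnirs": ModalityEncoder(fnirs_channels),
        })
        self.backbone = SharedBackbone()
        self.recon_head = nn.Linear(D_MODEL, D_MODEL)

    def forward(self, x, modality: str, mask_ratio: float = 0.5):
        tokens = self.encoders[modality](x)
        mask = torch.rand(tokens.shape[:2], device=tokens.device) < mask_ratio
        corrupted = tokens.masked_fill(mask.unsqueeze(-1), 0.0)
        recon = self.recon_head(self.backbone(corrupted))
        return F.mse_loss(recon[mask], tokens[mask])   # reconstruction loss

class FewShotClassifier(nn.Module):
    """Stage 2: reuse the pre-trained encoder and backbone, then fit a
    small head on the few labeled trials of the downstream task."""
    def __init__(self, pretrained: PretrainModel, modality: str, n_classes: int):
        super().__init__()
        self.encoder = pretrained.encoders[modality]
        self.backbone = pretrained.backbone
        self.head = nn.Linear(D_MODEL, n_classes)

    def forward(self, x):
        feats = self.backbone(self.encoder(x)).mean(dim=1)  # mean-pool tokens
        return self.head(feats)
```

Under these assumptions, stage 1 iterates over the unlabeled EEG and fNIRS recordings (paired or not) and minimizes the reconstruction loss, while stage 2 reuses the shared backbone and fits the small classification head on the few labeled trials of the target task.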

File Downloads

  • There are no files associated with this item.

Related Researcher

An, Jinung (안진웅)

Division of Intelligent Robotics
