Results 1 - 3 of 3

1.
Front Hum Neurosci; 16: 973959, 2022.
Article in English | MEDLINE | ID: mdl-35992956

ABSTRACT

Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) have complementary characteristics, reflecting the electrical and hemodynamic aspects of neural responses, so EEG-fNIRS hybrid brain-computer interfaces (BCIs) have become a research hotspot in recent years. However, current studies lack a comprehensive, systematic approach to fusing EEG and fNIRS data and exploiting their complementary potential, which is critical for improving BCI performance. To address this issue, this study proposes a novel multimodal fusion framework based on multi-level progressive learning with multi-domain features. The framework consists of multi-domain feature extraction for EEG and fNIRS, feature selection based on atomic search optimization, and multi-domain feature fusion based on multi-level progressive machine learning. The proposed method was validated on EEG-fNIRS motor imagery (MI) and mental arithmetic (MA) tasks involving 29 subjects. The experimental results show that multi-domain features yield better classification performance than single-domain features, and that multimodal data yield better classification performance than a single modality. Furthermore, the results and comparisons with other methods demonstrate the effectiveness and superiority of the proposed method for EEG-fNIRS information fusion: it achieves an average classification accuracy of 96.74% on the MI task and 98.42% on the MA task. The proposed method may provide a general framework for future fusion of multimodal EEG-fNIRS brain signals.
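Stripped to its skeleton, the pipeline described in this abstract is multi-domain feature extraction from both modalities, feature selection, and a staged (stacked) classifier. The sketch below illustrates only that general shape and is not the authors' implementation: the frequency bands, the use of a simple univariate selector in place of atomic search optimization, and a scikit-learn StackingClassifier standing in for multi-level progressive learning are all assumptions made for illustration.

    import numpy as np
    from scipy.signal import welch
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.ensemble import StackingClassifier
    from sklearn.svm import SVC
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def time_domain_features(x):
        # x: (channels, samples) -> simple statistical descriptors per channel
        return np.concatenate([x.mean(axis=1), x.std(axis=1), np.ptp(x, axis=1)])

    def frequency_domain_features(x, fs, bands=((8, 13), (13, 30))):
        # Band power per channel via Welch's method (band limits are illustrative)
        f, pxx = welch(x, fs=fs, nperseg=min(256, x.shape[1]), axis=1)
        feats = []
        for lo, hi in bands:
            idx = (f >= lo) & (f < hi)
            feats.append(pxx[:, idx].mean(axis=1))
        return np.concatenate(feats)

    def trial_features(eeg, fnirs, fs_eeg, fs_fnirs):
        # Multi-domain features from both modalities, concatenated per trial
        return np.concatenate([
            time_domain_features(eeg),
            frequency_domain_features(eeg, fs_eeg),
            time_domain_features(fnirs),
            frequency_domain_features(fnirs, fs_fnirs, bands=((0.01, 0.1),)),
        ])

    def build_model(k_features=60):
        # Univariate selection stands in for atomic search optimization;
        # a stacked ensemble stands in for multi-level progressive learning.
        base = [("svm", SVC(kernel="rbf", probability=True)),
                ("lr", LogisticRegression(max_iter=1000))]
        return make_pipeline(SelectKBest(f_classif, k=k_features),
                             StackingClassifier(estimators=base,
                                                final_estimator=LogisticRegression()))

Per-trial vectors from trial_features would be stacked into a feature matrix X and fitted with build_model().fit(X, y); the selector and the stacked ensemble are the stand-ins noted above, not the paper's ASO or progressive-learning stages.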

2.
Front Neurorobot; 16: 823435, 2022.
Article in English | MEDLINE | ID: mdl-35173597

ABSTRACT

Music can effectively improve people's emotional state and has become an effective adjunct treatment in modern medicine. With the rapid development of neuroimaging, the relationship between music and brain function has attracted much attention. In this study, we proposed an integrated framework for multimodal electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS), from data collection to data analysis, to explore the effects of music (especially personally preferred music) on brain activity. During the experiment, each subject listened to two kinds of music: personally preferred music and neutral music. By analyzing the synchronously recorded EEG and fNIRS signals, we found that music promotes brain activity (especially in the prefrontal lobe) and that the activation induced by preferred music is stronger than that induced by neutral music. To fuse and optimize the multimodal EEG and fNIRS features, we proposed an improved Normalized-ReliefF method and found that it effectively improves the accuracy of distinguishing brain activity evoked by preferred music from that evoked by neutral music (up to 98.38%). Our work provides an objective, neuroimaging-based reference for the research and application of personalized music therapy.
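Normalized-ReliefF is only named, not specified, in this abstract. The sketch below is a minimal, generic ReliefF-style feature weighting with the resulting weights normalized to [0, 1], written from the standard ReliefF idea rather than from the paper; the neighbour count, sampling scheme, and normalization step are assumptions.

    import numpy as np

    def normalized_relieff_weights(X, y, n_neighbors=10, n_iter=None):
        # X: (n_samples, n_features) fused EEG+fNIRS features; y: class labels.
        # Classic ReliefF update: reward features that separate nearest misses,
        # penalize features that differ between nearest hits.
        rng = np.random.default_rng(0)
        n_samples, n_features = X.shape
        # Scale features to [0, 1] so per-feature differences are comparable.
        span = X.max(axis=0) - X.min(axis=0)
        span[span == 0] = 1.0
        Xs = (X - X.min(axis=0)) / span
        w = np.zeros(n_features)
        n_iter = n_samples if n_iter is None else n_iter
        for i in rng.choice(n_samples, size=n_iter, replace=False):
            dists = np.abs(Xs - Xs[i]).sum(axis=1)
            dists[i] = np.inf
            same = np.where(y == y[i])[0]
            diff = np.where(y != y[i])[0]
            hits = same[np.argsort(dists[same])][:n_neighbors]
            misses = diff[np.argsort(dists[diff])][:n_neighbors]
            if len(hits) == 0 or len(misses) == 0:
                continue
            w -= np.abs(Xs[hits] - Xs[i]).mean(axis=0) / n_iter
            w += np.abs(Xs[misses] - Xs[i]).mean(axis=0) / n_iter
        # Normalize weights to [0, 1] so they can be used to rank or rescale features.
        return (w - w.min()) / (w.max() - w.min() + 1e-12)

The returned weights could then be used to rank the fused EEG-fNIRS features or to rescale them before classification; how the paper's improved variant modifies this baseline is not stated in the abstract.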

3.
Comput Biol Med; 141: 105048, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34838262

ABSTRACT

Domain adaptation (DA) tackles the problem in which data from the source domain and the target domain have different underlying distributions. In cross-domain (cross-subject or cross-dataset) emotion recognition based on EEG signals, traditional classification methods lack domain adaptation capability and perform poorly. To address this problem, this study proposes a novel domain adaptation strategy, adversarial discriminative temporal convolutional networks (AD-TCNs), which enforces domain-invariant feature representations and bridges the gap between domains. Because EEG data have a strong temporal structure, a temporal convolutional network (TCN) is used as the feature encoder. In the cross-subject experiment, the AD-TCN method achieved the highest accuracies on both the valence and arousal dimensions of the DREAMER and DEAP datasets. In the cross-dataset experiment, two of the eight task groups reached accuracies of 62.65% and 62.36%. Compared with state-of-the-art results under the same protocol, the experimental results demonstrate that our method is an effective approach to EEG-based cross-domain emotion recognition.
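An adversarial discriminative setup with a TCN feature encoder can be sketched roughly as follows (PyTorch). The layer sizes, the alternating discriminator/encoder update, and all names here are illustrative assumptions, not the AD-TCN architecture from the paper.

    import torch
    import torch.nn as nn

    class TCNEncoder(nn.Module):
        # Dilated 1-D convolutions over (batch, channels, time) EEG windows.
        def __init__(self, in_ch=32, hidden=64, levels=3):
            super().__init__()
            layers, ch = [], in_ch
            for i in range(levels):
                d = 2 ** i
                layers += [nn.Conv1d(ch, hidden, kernel_size=3, dilation=d, padding=d),
                           nn.ReLU()]
                ch = hidden
            self.net = nn.Sequential(*layers)

        def forward(self, x):
            # Temporal average pooling -> (batch, hidden) feature vector
            return self.net(x).mean(dim=2)

    class DomainDiscriminator(nn.Module):
        # Predicts whether a feature vector came from the source or target domain.
        def __init__(self, hidden=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(hidden, 32), nn.ReLU(), nn.Linear(32, 1))

        def forward(self, z):
            return self.net(z)

    def adversarial_step(encoder, disc, clf, opt_enc, opt_disc, xs, ys, xt, bce, ce):
        # 1) Train the discriminator to tell source features from target features.
        opt_disc.zero_grad()
        zs, zt = encoder(xs).detach(), encoder(xt).detach()
        d_loss = bce(disc(zs), torch.ones(len(zs), 1)) + \
                 bce(disc(zt), torch.zeros(len(zt), 1))
        d_loss.backward(); opt_disc.step()
        # 2) Train encoder + classifier: fit source labels and fool the discriminator.
        opt_enc.zero_grad()
        zs, zt = encoder(xs), encoder(xt)
        g_loss = ce(clf(zs), ys) + bce(disc(zt), torch.ones(len(zt), 1))
        g_loss.backward(); opt_enc.step()
        return d_loss.item(), g_loss.item()

Here clf would be a small classification head (e.g. nn.Linear(64, n_classes)), opt_enc would optimize the encoder and clf jointly, and bce / ce would be nn.BCEWithLogitsLoss() and nn.CrossEntropyLoss(); xs/ys are a labelled source-domain batch and xt an unlabelled target-domain batch.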


Subject(s)
Electroencephalography; Neural Networks, Computer; Arousal; Emotions