Results 1 - 4 of 4
1.
J Neural Eng. 2022 May 27;19(3).
Article in English | MEDLINE | ID: mdl-35523129

ABSTRACT

Electroencephalogram (EEG)-based affective computing brain-computer interfaces give machines the capability to understand human intentions. In practice, people are more concerned with the strength of a particular emotional state over a short period of time, which we refer to in this paper as fine-grained emotion. In this study, we built a fine-grained emotion EEG dataset that contains two coarse-grained emotions and four corresponding fine-grained emotions. To fully extract the features of the EEG signals, we propose a corresponding fine-grained emotion EEG network (FG-emotionNet) for spatial-temporal feature extraction. Each feature extraction layer is linked to the raw EEG signals to alleviate overfitting and ensure that the spatial features of each scale can be extracted from the raw signals. Moreover, all previous scale features are fused before the current spatial-feature layer to enhance the scale features in the spatial block. Additionally, long short-term memory is adopted as the temporal block to extract temporal features from the spatial features and classify the fine-grained emotion category. Subject-dependent and cross-session experiments demonstrate that the proposed method outperforms both representative emotion-recognition methods and methods with structures similar to ours.
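The scale-fusion idea this abstract describes — each spatial layer seeing the raw EEG alongside all previous scale features — can be sketched in a minimal numpy toy (not the actual FG-emotionNet; the filter counts, activation, and random mixing weights are illustrative assumptions):

```python
import numpy as np

def spatial_features(x, n_filters, seed):
    # Toy stand-in for one spatial-convolution layer: a random linear
    # mixing of input rows followed by a nonlinearity.
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((n_filters, x.shape[0]))
    return np.tanh(w @ x)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 200))  # 32 channels x 200 time samples

features = []
for scale in range(3):
    # Each scale is linked to the raw EEG and fused with all previous
    # scale features, as the abstract describes.
    inp = eeg if not features else np.vstack([eeg] + features)
    features.append(spatial_features(inp, n_filters=8, seed=scale))

# Stacked multi-scale spatial representation, which a temporal block
# (an LSTM in the paper) would then consume.
fused = np.vstack(features)  # shape (24, 200)
```

The point of the sketch is the wiring, not the layers: every scale keeps a direct path to the raw signal, so later scales cannot lose access to it.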


Subject(s)
Brain-Computer Interfaces, Electroencephalography, Electroencephalography/methods, Emotions, Humans, Intention
2.
J Neural Eng. 2022 May 13;19(3).
Article in English | MEDLINE | ID: mdl-35472762

ABSTRACT

Objective. The class imbalance problem considerably restricts the performance of electroencephalography (EEG) classification in the rapid serial visual presentation (RSVP) task. Existing solutions typically employ re-balancing strategies (e.g. re-weighting and re-sampling) to alleviate the impact of class imbalance, which enhances classifier learning in deep networks but unexpectedly damages the representative ability of the learned deep features, as the original distributions become distorted. Approach. In this study, a novel decoupled representation learning (DRL) model is proposed that separates the representation learning and classification processes, capturing the discriminative features of imbalanced RSVP EEG data while classifying it accurately. The representation learning process is responsible for learning universal patterns for the classification of all samples, while the classifier determines a better boundary between the target and non-target classes. Specifically, the representation learning process adopts a dual-branch architecture that minimizes a contrastive loss to regularize the representation space. In addition, to learn more discriminative information from RSVP EEG data, a novel multi-granular information extractor is designed to extract spatial-temporal information. Considering that class re-balancing strategies can significantly promote classifier learning, the classifier is trained with re-balanced EEG data while the parameters of the representation learning process are frozen. Main results. To evaluate the proposed method, experiments were conducted on two public datasets and one self-collected dataset. The results demonstrate that the proposed DRL achieves state-of-the-art performance for EEG classification in the RSVP task. Significance. This is the first study to focus on the class imbalance problem and propose a generic solution for the RSVP task. Furthermore, multi-granular data was explored to extract more complementary spatial-temporal information. The code is open-source and available at https://github.com/Tammie-Li/DRL.
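The two-stage decoupling can be illustrated with a toy re-sampling step (a sketch under assumed data shapes and class ratios; the real DRL trains a contrastive dual-branch extractor in stage one rather than using these random features):

```python
import numpy as np

rng = np.random.default_rng(0)

# Imbalanced RSVP-style labels: few targets (1), many non-targets (0).
y = np.array([0] * 90 + [1] * 10)
X = rng.standard_normal((100, 16))  # pretend these are learned EEG features

# Stage 1 (not shown): the feature extractor is trained on the ORIGINAL
# distribution, e.g. with a contrastive loss, so the representation space
# is not distorted by re-balancing.

# Stage 2: freeze the extractor and re-balance only for classifier training,
# here by oversampling the minority (target) class to parity.
target_idx = np.flatnonzero(y == 1)
oversampled = rng.choice(target_idx, size=(y == 0).sum(), replace=True)
balanced_idx = np.concatenate([np.flatnonzero(y == 0), oversampled])

X_bal, y_bal = X[balanced_idx], y[balanced_idx]  # balanced classifier input
```

The key design choice the abstract argues for is visible here: the distribution distortion introduced by oversampling touches only the classifier's training set, never the representation learning.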


Subject(s)
Electroencephalography, Learning, Electroencephalography/methods
3.
IEEE Trans Biomed Eng. 2022 Jun;69(6):1931-1942.
Article in English | MEDLINE | ID: mdl-34826293

ABSTRACT

Neuroscience studies have demonstrated the phase-locked characteristics of some early event-related potential (ERP) components evoked by stimuli. In this study, we propose a phase preservation neural network (PPNN) that learns phase information to improve electroencephalography (EEG) classification in a rapid serial visual presentation (RSVP) task. The PPNN consists of three major modules that produce spatial and temporal representations of the EEG features with high discriminative ability for classification. We first adopt a stack of dilated temporal convolution layers to extract temporal dynamics while avoiding the loss of phase information. Considering the intrinsic channel dependence of EEG data, a spatial convolution layer is then applied to obtain the spatial-temporal representation of the input EEG signal. Finally, a fully connected layer extracts higher-level features for the final classification. Experiments were conducted on two public EEG datasets and one collected dataset from the RSVP task, on which we evaluated performance, examined the phase-preservation capability of the PPNN, and visualized the extracted features. The experimental results indicate the superiority of the proposed PPNN over previous methods, suggesting that the PPNN is a robust model for EEG classification in the RSVP task.
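A dilated temporal convolution of the kind the first module stacks can be sketched in plain numpy (an illustrative causal variant; the kernel values and dilation schedule are assumptions, not the PPNN's actual parameters):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Causal dilated 1-D convolution; output has the same length as x."""
    pad = (len(kernel) - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])  # left-pad so no future leaks in
    out = np.zeros(len(x))
    for i in range(len(x)):
        for k, w in enumerate(kernel):
            # Tap the input at strides of `dilation`, widening the
            # receptive field without downsampling the sequence.
            out[i] += w * xp[pad + i - k * dilation]
    return out

rng = np.random.default_rng(0)
signal = rng.standard_normal(128)  # one EEG channel

h = signal
for d in (1, 2, 4):  # stacked layers with growing dilation
    h = dilated_conv1d(h, kernel=np.array([0.5, 0.3, 0.2]), dilation=d)
```

Because the layers never downsample, the output stays sample-aligned with the input; with a length-1 identity kernel the signal passes through unchanged, which is the sense in which such stacks can preserve timing (and hence phase) information.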


Subject(s)
Brain-Computer Interfaces, Electroencephalography, Electroencephalography/methods, Evoked Potentials, Learning, Neural Networks, Computer
4.
J Neural Eng. 2021 Aug 11;18(4).
Article in English | MEDLINE | ID: mdl-34256357

ABSTRACT

Objective. Directly decoding imagined speech from electroencephalogram (EEG) signals has attracted much interest in brain-computer interface applications, because it provides a natural and intuitive communication method for locked-in patients. Several methods have been applied to imagined speech decoding, but how to construct spatial-temporal dependencies and capture long-range contextual cues in EEG signals for better decoding remains an open question. Approach. In this study, we propose a novel model called the hybrid-scale spatial-temporal dilated convolution network (HS-STDCN) for EEG-based imagined speech recognition. HS-STDCN integrates feature learning from temporal and spatial information into a unified end-to-end model. To characterize the temporal dependencies of the EEG sequences, we adopted a hybrid-scale temporal convolution layer that captures temporal information at multiple levels. A depthwise spatial convolution layer was then designed to model the intrinsic spatial relationships of EEG electrodes, producing a spatial-temporal representation of the input EEG data. Based on this representation, dilated convolution layers were further employed to learn long-range discriminative features for the final classification. Main results. To evaluate the proposed method, we compared the HS-STDCN with other existing methods on our collected dataset. The HS-STDCN achieved an average classification accuracy of 54.31% for decoding eight imagined words, significantly better than the other methods at a significance level of 0.05. Significance. The proposed HS-STDCN model provides an effective approach to exploiting both the temporal and spatial dependencies of the input EEG signals for imagined speech recognition.
We also visualized the word semantic differences to analyze the impact of word semantics on imagined speech recognition, investigated the important regions in the decoding process, and explored the use of fewer electrodes to achieve comparable performance.
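The hybrid-scale temporal layer followed by a depthwise spatial layer can be sketched as follows (a numpy toy with assumed kernel lengths, electrode count, and random spatial weights, not the actual HS-STDCN):

```python
import numpy as np

rng = np.random.default_rng(0)
eeg = rng.standard_normal((64, 250))  # 64 electrodes x 250 time samples

def temporal_conv(x, klen):
    """'Same'-length moving-average temporal filter applied per electrode."""
    kernel = np.ones(klen) / klen
    return np.stack([np.convolve(ch, kernel, mode="same") for ch in x])

# Hybrid-scale temporal layer: several kernel lengths in parallel capture
# temporal information at multiple levels, then the scales are stacked.
scales = [temporal_conv(eeg, k) for k in (5, 15, 45)]
temporal_out = np.stack(scales)  # (3 scales, 64 electrodes, 250 samples)

# Depthwise spatial layer: each temporal scale gets its OWN set of
# electrode weights (no mixing across scales), collapsing the spatial axis.
w = rng.standard_normal((3, 64))
spatial_out = np.einsum("se,set->st", w, temporal_out)  # (3, 250)
```

The "depthwise" constraint is the notable choice: spatial weights are learned independently per temporal scale, which keeps the parameter count low while still producing a spatial-temporal representation for the dilated layers that follow.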


Subject(s)
Brain-Computer Interfaces, Speech, Electroencephalography, Humans