Results 1 - 3 of 3
1.
Cogn Neurodyn; 18(3): 863-875, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38826642

ABSTRACT

The human brain can effectively perform Facial Expression Recognition (FER) from only a few samples by drawing on its cognitive ability. Unlike the human brain, however, even a well-trained deep neural network is data-dependent and lacks this cognitive ability. To tackle this challenge, this paper proposes a novel framework, Brain Machine Generative Adversarial Networks (BM-GAN), which uses the brain's cognitive ability to guide a Convolutional Neural Network to generate LIKE-electroencephalograph (EEG) features. Specifically, we first record EEG signals evoked by facial emotion images and then use BM-GAN to carry out the mutual generation of image visual features and EEG cognitive features. BM-GAN uses the cognitive knowledge learned from the EEG signals to teach the model to perceive LIKE-EEG features, so that BM-GAN achieves superior FER performance, much as the human brain does. The proposed model consists of VisualNet, EEGNet, and BM-GAN: VisualNet extracts image visual features from facial emotion images, EEGNet extracts EEG cognitive features from EEG signals, and BM-GAN then completes the mutual generation of image visual features and EEG cognitive features. Finally, the predicted LIKE-EEG features of test images are used for FER. After training, without any participation of EEG signals, an average classification accuracy of 96.6% is obtained on the Chinese Facial Affective Picture System dataset using LIKE-EEG features for FER. Experiments demonstrate that the proposed method delivers excellent FER performance.
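
As a rough illustration of the pipeline this abstract describes, the following PyTorch sketch shows only the image-to-EEG direction (the direction used at test time): a toy VisualNet extracts image features, an EEGNet-style encoder extracts EEG features, and a generator/discriminator pair is trained adversarially so that generated LIKE-EEG features resemble real EEG features. All layer sizes, input shapes, and the generator/discriminator designs are assumptions for illustration; the paper's actual architectures, the reverse (EEG-to-image) generation, and the final FER classifier are not reproduced here.

import torch
import torch.nn as nn

class VisualNet(nn.Module):
    """Toy CNN mapping a facial-emotion image to a visual feature vector (sizes assumed)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class EEGEncoder(nn.Module):
    """Toy encoder mapping an EEG trial (channels x time) to a cognitive feature vector."""
    def __init__(self, n_channels=32, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 64, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Maps visual features to LIKE-EEG features (the cross-domain generation step)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, feat_dim))

    def forward(self, v):
        return self.net(v)

class Discriminator(nn.Module):
    """Distinguishes real EEG features from generated LIKE-EEG features."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, f):
        return self.net(f)

# One adversarial step on dummy paired data (8 images and 8 EEG trials).
images, eeg = torch.randn(8, 3, 64, 64), torch.randn(8, 32, 256)
visual_net, eeg_net = VisualNet(), EEGEncoder()
gen, disc = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()

real_feat = eeg_net(eeg)                 # EEG cognitive features
fake_feat = gen(visual_net(images))      # LIKE-EEG features generated from images
d_loss = bce(disc(real_feat), torch.ones(8, 1)) + bce(disc(fake_feat.detach()), torch.zeros(8, 1))
g_loss = bce(disc(fake_feat), torch.ones(8, 1))
print(d_loss.item(), g_loss.item())

At test time such a generator would be applied to new images alone, and the resulting LIKE-EEG features fed to an emotion classifier, which is how the abstract describes EEG-free inference.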

2.
IEEE Trans Pattern Anal Mach Intell; 45(9): 10703-10717, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37030724

ABSTRACT

Neural network models have shown promise for visual tasks such as facial emotion recognition (FER). However, the generalization of a model trained on a dataset with only a few samples is limited. Unlike the machine, the human brain can effectively extract the required information from a few samples to complete such visual tasks. To learn the generalization ability of the brain, this article proposes a novel brain-machine coupled learning method for facial emotion recognition that lets the neural network learn the visual knowledge of the machine and the cognitive knowledge of the brain simultaneously. The proposed method uses visual images and electroencephalogram (EEG) signals to jointly train models in the visual and cognitive domains. Each domain model consists of two types of interactive channels, common and private. Since EEG signals reflect brain activity, the brain's cognitive process is decoded by a model in a reverse-engineering fashion. By decoding the EEG signals induced by the facial emotion images, the common channel in the visual domain can approximate the cognitive process in the cognitive domain. Moreover, the knowledge specific to each domain is captured in each private channel using an adversarial strategy. After learning, without any participation of EEG signals, only the concatenation of the two channels in the visual domain is used to classify facial emotion images, based on the visual knowledge of the machine and the cognitive knowledge learned from the brain. Experiments demonstrate that the proposed method achieves excellent performance on several public datasets. Further experiments show that a model trained with the EEG signals generalizes well to new datasets and that the approach can be applied to other network models, illustrating its potential for practical applications.
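
A minimal PyTorch sketch of the common/private two-channel coupling described in this abstract is given below. The feature dimensions, the number of emotion classes, the MSE alignment term, and the use of precomputed image/EEG embeddings are all assumptions for illustration; the paper's actual architecture and its adversarial loss for the private channels are not reproduced, only indicated by a comment.

import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(in_dim, out_dim):
    # Small two-layer channel encoder (sizes assumed).
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))

feat_dim, n_classes = 128, 7                                        # hypothetical values
vis_common, vis_private = mlp(512, feat_dim), mlp(512, feat_dim)    # visual-domain channels
eeg_common, eeg_private = mlp(310, feat_dim), mlp(310, feat_dim)    # cognitive-domain channels
classifier = nn.Linear(2 * feat_dim, n_classes)

# Dummy paired batch: precomputed image embeddings, EEG embeddings, emotion labels.
img_emb, eeg_emb = torch.randn(8, 512), torch.randn(8, 310)
labels = torch.randint(0, n_classes, (8,))

vc, vp = vis_common(img_emb), vis_private(img_emb)
ec = eeg_common(eeg_emb)

# Coupling term: pull the visual common channel toward the cognitive common channel.
align_loss = F.mse_loss(vc, ec)

# Classification uses only the visual domain (common + private), as at test time.
cls_loss = F.cross_entropy(classifier(torch.cat([vc, vp], dim=1)), labels)

total = cls_loss + align_loss   # an adversarial term for the private channels would be added here
print(total.item())

The point of the sketch is the test-time behavior the abstract emphasizes: once the common channel has been coupled to the EEG side during training, inference needs only the concatenated visual-domain channels, with no EEG input.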


Subject(s)
Algorithms , Facial Recognition , Humans , Brain/diagnostic imaging , Emotions , Neural Networks, Computer , Electroencephalography/methods
3.
J Neurosci Methods; 363: 109346, 2021 Nov 01.
Article in English | MEDLINE | ID: mdl-34474046

ABSTRACT

BACKGROUND: Rapid serial visual presentation (RSVP)-based brain-computer interfaces (BCIs) are widely used to categorize target and non-target images. The limited information in single-subject electroencephalography (EEG) signals constrains single-trial prediction accuracy. NEW METHOD: Hyperscanning is a new way to record signals from two or more subjects simultaneously. We therefore designed a multi-level information fusion model for target image detection based on dual-subject RSVP, named HyperscanNet. The model's two modules fuse the two subjects' signals at the data layer and at the feature layer. A chunked long short-term memory network (LSTM) is applied along the time dimension to extract features from different time periods separately, providing fine-grained low-level feature extraction. In parallel with the feature-layer fusion, simple operations complete the data-layer fusion to ensure that important information is not missed. RESULTS: Experimental results show that the F1-score (the harmonic mean of precision and recall) of this method with the best combination of channels and segment length is 82.76%. COMPARISON WITH EXISTING METHODS: The method improves the F1-score by at least 5% compared to single-subject target detection. CONCLUSIONS: Target detection can be accomplished through the two subjects' collaboration, achieving a higher and more stable F1-score than a single subject.
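
The PyTorch sketch below illustrates the two ideas this abstract combines: a chunked LSTM that encodes each time segment of a trial separately, and dual-subject fusion at both the data layer and the feature layer. The chunk length, hidden size, channel counts, and the specific fusion operations (concatenation and a temporal mean) are guesses for illustration only, not the paper's specification.

import torch
import torch.nn as nn

class ChunkedLSTM(nn.Module):
    """Splits an EEG trial (batch, time, channels) into fixed-length chunks and
    encodes each chunk with a shared LSTM, returning one feature vector per chunk."""
    def __init__(self, n_channels=64, hidden=32, chunk_len=50):
        super().__init__()
        self.chunk_len = chunk_len
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)

    def forward(self, x):
        feats = []
        for start in range(0, x.size(1) - self.chunk_len + 1, self.chunk_len):
            _, (h, _) = self.lstm(x[:, start:start + self.chunk_len])
            feats.append(h[-1])                      # last hidden state of this chunk
        return torch.cat(feats, dim=1)               # (batch, n_chunks * hidden)

# Dummy dual-subject RSVP trial: 250 time points, 64 channels per subject.
eeg_a, eeg_b = torch.randn(8, 250, 64), torch.randn(8, 250, 64)

# Data-layer fusion: a plain concatenation of the two subjects along the channel axis.
fused_data = torch.cat([eeg_a, eeg_b], dim=2)        # (8, 250, 128)

# Feature-layer fusion: encode each subject separately, then concatenate chunk features.
encoder_a, encoder_b = ChunkedLSTM(), ChunkedLSTM()
fused_feat = torch.cat([encoder_a(eeg_a), encoder_b(eeg_b)], dim=1)

# Combine both fusion levels and classify target vs non-target.
classifier = nn.Linear(fused_feat.size(1) + 128, 2)
logits = classifier(torch.cat([fused_feat, fused_data.mean(dim=1)], dim=1))
print(logits.shape)

Keeping the raw data-layer fusion alongside the learned chunk features mirrors the abstract's stated motivation: the simple data-layer operations act as a safety net so that information lost by the feature extractor is not discarded entirely.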


Subject(s)
Brain-Computer Interfaces , Brain , Electroencephalography , Humans , Memory, Short-Term , Neural Networks, Computer