1.
IEEE J Biomed Health Inform ; 27(10): 4758-4767, 2023 10.
Article in English | MEDLINE | ID: mdl-37540609

ABSTRACT

Recently, electroencephalography (EEG)-based emotion recognition has attracted attention in the field of human-computer interaction (HCI). However, most existing EEG emotion datasets consist primarily of data from subjects with normal hearing. To enhance diversity, this study collected EEG signals from 30 hearing-impaired subjects while they watched video clips eliciting six different emotions (happiness, inspiration, neutral, anger, fear, and sadness). The frequency-domain feature matrices of the EEG signals, comprising power spectral density (PSD) and differential entropy (DE), were up-sampled using cubic spline interpolation to capture the correlation among different channels. To select emotion-related information from both global and localized brain regions, a novel method called the Shifted EEG Channel Transformer (SECT) was proposed. The SECT consists of two layers: the first layer uses the traditional channel Transformer (CT) structure to process information from global brain regions, while the second layer acquires localized information from centrally symmetric, reorganized brain regions via a shifted channel Transformer (S-CT). We conducted subject-dependent experiments, in which the accuracies with PSD and DE features reached 82.51% and 84.76%, respectively, for six-class emotion classification. Moreover, subject-independent experiments were conducted on public datasets, yielding accuracies of 85.43% (3-classification, SEED), 66.83% (2-classification on valence, DEAP), and 65.31% (2-classification on arousal, DEAP).
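The SECT architecture is not detailed in this listing. The following is a minimal, hypothetical PyTorch sketch of the underlying "channel Transformer" idea only: each EEG channel's frequency-domain feature vector (e.g., DE over a few bands) is treated as a token and self-attention is computed across channels. All names and sizes (n_channels, n_bands, d_model) and the pooling and classification head are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of a channel Transformer:
# per-channel frequency-domain features are tokens; attention runs across channels.
import torch
import torch.nn as nn

class ChannelTransformer(nn.Module):
    def __init__(self, n_channels=62, n_bands=5, d_model=64, n_heads=4, n_classes=6):
        super().__init__()
        self.embed = nn.Linear(n_bands, d_model)          # project per-channel features
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)         # six emotion classes

    def forward(self, x):
        # x: (batch, n_channels, n_bands) -- DE or PSD features per channel (assumed layout)
        tokens = self.embed(x)                            # (batch, n_channels, d_model)
        encoded = self.encoder(tokens)                    # self-attention across channels
        pooled = encoded.mean(dim=1)                      # average over channels
        return self.head(pooled)

# Example: a batch of 8 trials with 62 channels and 5 frequency bands (assumed sizes).
logits = ChannelTransformer()(torch.randn(8, 62, 5))
print(logits.shape)  # torch.Size([8, 6])
```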


Subject(s)
Brain , Emotions , Humans , Electroencephalography/methods , Fear
2.
Article in English | MEDLINE | ID: mdl-36455076

ABSTRACT

Emotion analysis has been employed in many fields such as human-computer interaction, rehabilitation, and neuroscience, but most emotion analysis methods focus on healthy controls or patients with depression. This paper aims to classify the emotional expressions of individuals with hearing impairment based on EEG signals and facial expressions. The two kinds of signals were collected simultaneously while the subjects watched affective video clips, and we labeled the video clips with discrete emotional states (fear, happiness, calmness, and sadness). We extracted differential entropy (DE) features from the EEG signals and converted them into EEG topographic maps (ETM). Next, the ETM and facial expressions were fused by a multichannel fusion method. Finally, a deep learning classifier, CBAM_ResNet34, combining a Residual Network (ResNet) and a Convolutional Block Attention Module (CBAM), was used for subject-dependent emotion classification. The results show that the average classification accuracy for four-emotion recognition after multimodal fusion reaches 78.32%, which is higher than 67.90% for facial expressions alone and 69.43% for EEG signals alone. Moreover, Gradient-weighted Class Activation Mapping (Grad-CAM) visualization of the ETM showed that the prefrontal, temporal, and occipital lobes were the brain regions closely related to emotional changes in individuals with hearing impairment.
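CBAM is a published attention module, but the paper's CBAM_ResNet34 is not reproduced here. The sketch below is only a generic implementation of CBAM's channel-then-spatial attention pattern; the reduction ratio, kernel size, and the example's 64-map input are assumptions.

```python
# A minimal sketch (assumptions, not the paper's implementation) of a
# Convolutional Block Attention Module (CBAM): channel attention followed by
# spatial attention, as typically inserted into ResNet blocks.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        # Channel attention: shared MLP over average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        # Spatial attention: 7x7 conv over channel-wise average and max maps.
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                                  # x: (B, C, H, W)
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))                 # (B, C)
        mx = self.mlp(x.amax(dim=(2, 3)))                  # (B, C)
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)   # channel attention
        s = torch.cat([x.mean(1, keepdim=True),
                       x.amax(1, keepdim=True)], dim=1)    # (B, 2, H, W)
        return x * torch.sigmoid(self.conv(s))             # spatial attention

# Example: feature maps from an assumed early convolution over the fused ETM/face input.
feat = torch.randn(4, 64, 32, 32)
print(CBAM(64)(feat).shape)  # torch.Size([4, 64, 32, 32])
```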


Subject(s)
Facial Expression , Hearing Loss , Humans , Electroencephalography/methods , Emotions/physiology , Brain
3.
Comput Biol Med ; 152: 106344, 2023 01.
Article in English | MEDLINE | ID: mdl-36470142

ABSTRACT

In recent years, emotion recognition based on electroencephalography (EEG) signals has attracted considerable attention, but most existing work has focused on normal or depressed people. Owing to their lack of hearing, it is difficult for hearing-impaired people to express their emotions through language in social activities. In this work, we collected EEG signals from hearing-impaired subjects while they watched six kinds of emotional video clips (happiness, inspiration, neutral, anger, fear, and sadness) for emotion recognition. Biharmonic spline interpolation was used to convert the traditional frequency-domain features, Differential Entropy (DE), Power Spectral Density (PSD), and Wavelet Entropy (WE), into the spatial domain. A patch embedding (PE) method was used to segment each feature map into equal-sized patches so as to capture differences in the distribution of emotional information among brain regions. For feature classification, a compact residual network with depthwise convolution (DC) and pointwise convolution (PC) was proposed, separating the spatial and channel mixing dimensions to better extract inter-channel information. Subject-dependent experiments with 70% training and 30% testing splits were performed. The results showed that the average classification accuracies with PE (DE), PE (PSD), and PE (WE) were 91.75%, 85.53%, and 75.68%, respectively, improvements of 11.77%, 23.54%, and 16.61% over DE, PSD, and WE alone. Moreover, comparison experiments were carried out on the SEED and DEAP datasets with PE (DE), achieving average accuracies of 90.04% (positive, neutral, and negative) and 88.75% (high valence and low valence). By exploring the emotion-related brain regions, we found that the frontal, parietal, and temporal lobes of hearing-impaired people were associated with emotional activity, whereas in normal people the main emotional brain area was the frontal lobe.
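As a rough illustration of separating spatial and channel mixing, here is a minimal residual block built from a depthwise convolution (spatial mixing within each channel) followed by a pointwise 1x1 convolution (channel mixing). The layer sizes, normalization, and activation choices are assumptions, not the authors' compact network.

```python
# A minimal sketch (assumed layout) of a depthwise + pointwise residual block.
import torch
import torch.nn as nn

class DWPWResidualBlock(nn.Module):
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, kernel_size,
                                   padding=kernel_size // 2, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.bn2 = nn.BatchNorm2d(channels)
        self.act = nn.ReLU()

    def forward(self, x):
        out = self.act(self.bn1(self.depthwise(x)))   # spatial mixing only
        out = self.bn2(self.pointwise(out))           # channel mixing only
        return self.act(out + x)                      # residual connection

# Example: a spatial feature map interpolated from DE features (sizes are illustrative).
x = torch.randn(2, 16, 32, 32)
print(DWPWResidualBlock(16)(x).shape)  # torch.Size([2, 16, 32, 32])
```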


Subject(s)
Algorithms , Emotions , Adult , Humans , Emotions/physiology , Brain , Electroencephalography/methods , Hearing
4.
IEEE J Biomed Health Inform ; 27(1): 363-373, 2023 01.
Article in English | MEDLINE | ID: mdl-36201412

ABSTRACT

Recent research on emotion recognition suggests that deep network-based adversarial learning can address the cross-subject problem in emotion recognition. This study constructed a hearing-impaired electroencephalography (EEG) emotion dataset containing three emotions (positive, neutral, and negative) from 15 subjects. An emotional domain adversarial neural network (EDANN) was employed to identify the emotions of hearing-impaired subjects by learning hidden emotion information shared between labeled and unlabeled data. For the input data, we propose a spatial filter matrix to reduce overfitting to the training data. A feature extraction network, 3DLSTM-ConvNET, was used to extract comprehensive emotional information across the time, frequency, and spatial dimensions. Moreover, an emotion local domain discriminator and an emotion film-group local domain discriminator were added to reduce the distribution distance between the same kinds of emotions and between different film groups, respectively. According to the experimental results, the average subject-dependent accuracy is 0.984 (STD: 0.011), and the average subject-independent accuracy is 0.679 (STD: 0.140). In addition, by analyzing the discriminative characteristics, we found that the brain regions involved in emotion recognition in the hearing-impaired are distributed over wider areas of the parietal and occipital lobes, which may be caused by visual processing.
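The EDANN itself is not reproduced here. The sketch below shows only a generic ingredient that domain discriminators in adversarial training commonly rely on, a gradient reversal layer, with an assumed feature dimension and a two-domain discriminator; it is illustrative and not the paper's architecture.

```python
# A minimal sketch (generic domain-adversarial pattern, not the EDANN code) of a
# gradient reversal layer: identity in the forward pass, sign-flipped gradient in the
# backward pass, so the feature extractor learns to confuse the domain discriminator.
import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainDiscriminator(nn.Module):
    def __init__(self, feat_dim=128, n_domains=2, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_domains))

    def forward(self, features):
        reversed_feat = GradReverse.apply(features, self.lambd)
        return self.net(reversed_feat)

# Example: features from an assumed extractor; backpropagating the discriminator's
# loss through the reversal pushes the features toward domain invariance.
feats = torch.randn(8, 128, requires_grad=True)
logits = DomainDiscriminator()(feats)
print(logits.shape)  # torch.Size([8, 2])
```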


Subject(s)
Emotions , Persons With Hearing Impairments , Humans , Brain , Electroencephalography/methods , Hearing , Persons With Hearing Impairments/psychology , Nerve Net