Results 1 - 6 of 6
1.
IEEE J Biomed Health Inform; 27(10): 4758-4767, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37540609

ABSTRACT

Recently, electroencephalographic (EEG) emotion recognition has attracted attention in the field of human-computer interaction (HCI). However, most existing EEG emotion datasets consist primarily of data from normal-hearing subjects. To enhance diversity, this study collected EEG signals from 30 hearing-impaired subjects while they watched video clips depicting six different emotions (happiness, inspiration, neutral, anger, fear, and sadness). The frequency-domain feature matrices of the EEG signals, comprising power spectral density (PSD) and differential entropy (DE), were up-sampled using cubic spline interpolation to capture the correlation among channels. To select emotion-related information from both global and localized brain regions, a novel method called the Shifted EEG Channel Transformer (SECT) was proposed. The SECT method consists of two layers: the first layer uses the traditional channel Transformer (CT) structure to process information from global brain regions, while the second layer acquires localized information from centrally symmetric, reorganized brain regions via a shifted channel Transformer (S-CT). In subject-dependent experiments, the PSD and DE features reached accuracies of 82.51% and 84.76%, respectively, for six-class emotion classification. Moreover, subject-independent experiments on public datasets yielded accuracies of 85.43% (3-class, SEED), 66.83% (2-class valence, DEAP), and 65.31% (2-class arousal, DEAP).
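As a rough illustration of the kind of preprocessing this abstract describes, the sketch below computes per-band PSD and DE features for each channel and then up-samples the channel feature matrix with cubic spline interpolation. It is a minimal Python sketch, not code from the paper; the band limits, sampling rate, and up-sampling factor are assumptions.

```python
import numpy as np
from scipy.signal import welch
from scipy.interpolate import CubicSpline

# Assumed band limits in Hz (not specified in the abstract).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def psd_de_features(eeg, fs=250):
    """eeg: (n_channels, n_samples). Returns PSD and DE, each (n_channels, n_bands)."""
    freqs, pxx = welch(eeg, fs=fs, nperseg=fs)
    psd, de = [], []
    for lo, hi in BANDS.values():
        band = pxx[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
        psd.append(band)
        # DE of a Gaussian band-limited signal: 0.5 * log(2*pi*e*variance);
        # the mean band power is used here as a variance proxy.
        de.append(0.5 * np.log(2 * np.pi * np.e * band))
    return np.stack(psd, axis=1), np.stack(de, axis=1)

def upsample_channel_matrix(features, factor=4):
    """Cubic-spline up-sampling along the channel axis (the factor is an assumption)."""
    n_ch = features.shape[0]
    x_new = np.linspace(0, n_ch - 1, n_ch * factor)
    return CubicSpline(np.arange(n_ch), features, axis=0)(x_new)
```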


Subject(s)
Brain , Emotions , Humans , Electroencephalography/methods , Fear
2.
Forensic Sci Int; 346: 111667, 2023 May.
Article in English | MEDLINE | ID: mdl-37003122

ABSTRACT

In this study, a new complementary Y-STR system comprising 31 loci was developed (DYS522, DYS388, DYF387S1a/b, DYS510, DYS587, DYS645, DYS531, DYS593, DYS617, GATA_A10, DYS622, DYS552, DYS508, DYS447, DYS527a/b, DYS446, DYS459a/b, DYS444, DYS557, DYS443, DYS626, DYS630, DYS526a, DYF404S1a/b, DYS520, DYS518, and DYS526b). This 31-plex Y-STR system, SureID® Y-comp, is designed for biological samples from forensic casework and reference samples for forensic DNA databases. To validate the suitability of this novel kit, developmental validation studies were performed, including size precision, sensitivity, male specificity, species specificity, PCR inhibitor tolerance, stutter precision, reproducibility, suitability for DNA mixtures, and parallel testing on different capillary electrophoresis instruments. Mutation rates were investigated using 295 DNA-confirmed father-son pairs. The results demonstrate that the SureID® Y-comp Kit is time-efficient, accurate, and reliable for various case-type samples. It possesses higher discrimination power and can serve as a stand-alone kit for male identification. Moreover, the easily acquired additional Y-STR loci will be conducive to constructing a robust database. Even when different commercial Y-STR kits are used in different forensic laboratories, wider cross-database searching will become feasible with the SureID® Y-comp Kit.
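For context on how per-locus mutation rates are typically estimated from confirmed father-son pairs, a minimal sketch follows; the mutation count is a placeholder rather than data from the study, and the normal-approximation interval is just one common choice.

```python
import math

def locus_mutation_rate(mutations: int, meioses: int = 295):
    """Per-locus mutation rate with a normal-approximation 95% CI."""
    rate = mutations / meioses
    se = math.sqrt(rate * (1 - rate) / meioses)
    return rate, (max(0.0, rate - 1.96 * se), rate + 1.96 * se)

# Hypothetical count at a single locus, for illustration only.
rate, ci = locus_mutation_rate(mutations=2)
print(f"rate = {rate:.4f}, 95% CI ~ ({ci[0]:.4f}, {ci[1]:.4f})")
```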


Subject(s)
Forensic Medicine , Microsatellite Repeats , Male , Humans , Reproducibility of Results , Polymerase Chain Reaction , DNA/analysis , Chromosomes, Human, Y , DNA Fingerprinting
3.
Article in English | MEDLINE | ID: mdl-36455076

ABSTRACT

Emotion analysis has been employed in many fields such as human-computer interaction, rehabilitation, and neuroscience, but most emotion analysis methods focus mainly on healthy controls or patients with depression. This paper aims to classify emotional expressions in individuals with hearing impairment based on EEG signals and facial expressions. The two kinds of signals were collected simultaneously while the subjects watched affective video clips, and each clip was labeled with a discrete emotional state (fear, happiness, calmness, or sadness). We extracted differential entropy (DE) features from the EEG signals and converted them into EEG topographic maps (ETM). Next, the ETM and facial expressions were fused by a multichannel fusion method. Finally, a deep learning classifier, CBAM_ResNet34, which combines a Residual Network (ResNet) with the Convolutional Block Attention Module (CBAM), was used for subject-dependent emotion classification. The results show that the average classification accuracy over the four emotions after multimodal fusion reaches 78.32%, higher than 67.90% for facial expressions alone and 69.43% for EEG signals alone. Moreover, Gradient-weighted Class Activation Mapping (Grad-CAM) visualization of the ETM showed that the prefrontal, temporal, and occipital lobes were the brain regions most closely related to emotional changes in individuals with hearing impairment.
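A minimal sketch of the general idea behind the ETM and multichannel fusion steps is shown below: per-channel DE values are interpolated onto a 2-D scalp grid, and the resulting band maps are stacked with a face image as extra input channels. The electrode coordinates, grid size, and fusion layout are assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.interpolate import griddata

def de_to_topomap(de_values, electrode_xy, grid_size=32):
    """Interpolate per-channel DE values (one band) onto a 2-D scalp grid.
    electrode_xy: (n_channels, 2) projected electrode positions in [0, 1]."""
    gx, gy = np.mgrid[0:1:grid_size * 1j, 0:1:grid_size * 1j]
    topo = griddata(electrode_xy, de_values, (gx, gy), method="cubic")
    return np.nan_to_num(topo)  # fill grid points outside the electrode hull

def fuse_multichannel(etm_bands, face_gray):
    """Stack band topomaps (n_bands, H, W) with a grayscale face image (H, W),
    assumed already resized to the same resolution, into one multi-channel input."""
    assert face_gray.shape == etm_bands.shape[1:]
    return np.concatenate([etm_bands, face_gray[None]], axis=0)
```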


Subject(s)
Facial Expression , Hearing Loss , Humans , Electroencephalography/methods , Emotions/physiology , Brain
4.
Comput Biol Med; 152: 106344, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36470142

ABSTRACT

In recent years, emotion recognition based on electroencephalography (EEG) signals has attracted considerable attention. Most existing work has focused on normal or depressed people. Because of their lack of hearing, it is difficult for hearing-impaired people to express their emotions through language in social activities. In this work, we collected EEG signals from hearing-impaired subjects while they watched six kinds of emotional video clips (happiness, inspiration, neutral, anger, fear, and sadness) for emotion recognition. Biharmonic spline interpolation was utilized to convert the traditional frequency-domain features, differential entropy (DE), power spectral density (PSD), and wavelet entropy (WE), into the spatial domain. A patch embedding (PE) method was used to segment the feature maps into equal-sized patches and capture differences in the distribution of emotional information among brain regions. For classification, a compact residual network with depthwise convolution (DC) and pointwise convolution (PC) is proposed to separate the spatial and channel mixing dimensions and better extract information between channels (see the sketch below). Subject-dependent experiments with 70% training and 30% testing sets were performed. The results showed that the average classification accuracies of PE (DE), PE (PSD), and PE (WE) were 91.75%, 85.53%, and 75.68%, respectively, improvements of 11.77%, 23.54%, and 16.61% over DE, PSD, and WE alone. Moreover, comparison experiments with PE (DE) on the SEED and DEAP datasets achieved average accuracies of 90.04% (positive, neutral, and negative) and 88.75% (high and low valence). By exploring the emotion-related brain regions, we found that the frontal, parietal, and temporal lobes of hearing-impaired people were associated with emotional activity, whereas the main emotional brain area of normal-hearing people was the frontal lobe.
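The following is a minimal sketch of a residual block built from depthwise and pointwise convolutions, the kind of structure the abstract refers to for separating spatial and channel mixing. The channel count, kernel size, and input shape are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableResBlock(nn.Module):
    """Residual block separating spatial mixing (depthwise conv) from channel
    mixing (pointwise 1x1 conv); sizes are illustrative only."""
    def __init__(self, channels: int):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=1, groups=channels, bias=False)
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.bn1, self.bn2 = nn.BatchNorm2d(channels), nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.act(self.bn1(self.depthwise(x)))  # per-channel spatial mixing
        out = self.bn2(self.pointwise(out))          # cross-channel mixing
        return self.act(out + x)                     # residual connection

x = torch.randn(8, 5, 32, 32)  # assumed: batch of 5-band spatial feature maps
print(DepthwiseSeparableResBlock(5)(x).shape)  # torch.Size([8, 5, 32, 32])
```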


Subject(s)
Algorithms , Emotions , Adult , Humans , Emotions/physiology , Brain , Electroencephalography/methods , Hearing
5.
IEEE J Biomed Health Inform; 26(2): 589-599, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34170836

ABSTRACT

With the development of sensor technology and learning algorithms, multimodal emotion recognition has attracted widespread attention. Most existing studies on emotion recognition have focused on normal-hearing people. Moreover, because of hearing loss, deaf people cannot express emotions through speech and may therefore have a greater need for emotion recognition. In this paper, a deep belief network (DBN) was utilized to classify three categories of emotion from electroencephalography (EEG) signals and facial expressions. Signals from 15 deaf subjects were recorded while they watched emotional movie clips. Our system segments the EEG signals into non-overlapping 1-s windows across five frequency bands, and the differential entropy (DE) feature is then extracted. The EEG DE features and facial expression images serve as multimodal input for subject-dependent emotion recognition. To avoid feature redundancy, the top 12 EEG electrode channels (FP2, FP1, FT7, FPZ, F7, T8, F8, CB2, CB1, FT8, T7, TP8) in the gamma band and 30 facial expression features (the areas around the eyes and eyebrows) were selected according to the largest weight values. The results show that the classification accuracy reaches 99.92% with feature selection in deaf emotion recognition. Moreover, investigations of brain activity reveal that deaf brain activity changes mainly in the beta and gamma bands, and the brain regions affected by emotion are mainly distributed in the prefrontal and outer temporal lobes.
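As a rough illustration of the windowed DE extraction described here, the sketch below segments EEG into non-overlapping 1-s windows, band-pass filters five assumed frequency bands, and computes DE per window and channel. The sampling rate, filter order, and band limits are assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Assumed band limits in Hz.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def windowed_band_de(eeg, fs=250):
    """Segment eeg (n_channels, n_samples) into non-overlapping 1-s windows,
    band-pass filter each band, and compute DE per window, channel, and band."""
    n_ch, n_samp = eeg.shape
    n_win = n_samp // fs
    feats = np.empty((n_win, n_ch, len(BANDS)))
    for b, (lo, hi) in enumerate(BANDS.values()):
        bb, aa = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(bb, aa, eeg, axis=1)
        for w in range(n_win):
            seg = filtered[:, w * fs:(w + 1) * fs]
            # DE of a Gaussian signal: 0.5 * log(2*pi*e*variance)
            feats[w, :, b] = 0.5 * np.log(2 * np.pi * np.e * seg.var(axis=1))
    return feats  # shape: (n_windows, n_channels, n_bands)
```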


Subject(s)
Electroencephalography , Facial Expression , Brain , Cognition , Electroencephalography/methods , Emotions , Humans
6.
ACS Appl Mater Interfaces; 8(42): 28904-28916, 2016 Oct 26.
Article in English | MEDLINE | ID: mdl-27696813

ABSTRACT

This paper reports a series of novel Ni-based metal-organic frameworks (Ni-MOFs) prepared by a facile solvothermal process. The synthesis conditions strongly affect the morphologies, porous textures, and electrochemical performance of the Ni-MOFs. Improved capacitive performance was realized by in-situ hybridization of the Ni-MOFs with graphene oxide (GO) nanosheets (Ni-MOFs@GO). The pseudocapacitance of ca. 1457.7 F/g for the Ni-MOFs obtained at 180 °C with HCl as the modulator was elevated to ca. 2192.4 F/g at a current density of 1 A/g for Ni-MOFs@GO with a GO content of 3 wt%. Additionally, the capacitance retention at 10 A/g after 3000 cycles improved from ca. 83.5% to 85.1% of the original capacitance. These outstanding electrochemical properties may be related to the inherent characteristics of the materials, such as the unique flower-like architecture and the synergistic effect between the Ni-MOFs and the GO nanosheets.
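For readers unfamiliar with the reported units, gravimetric capacitance from galvanostatic discharge is commonly computed as C = I*Δt/(m*ΔV). The short sketch below applies that formula with placeholder numbers, not measurements from the paper.

```python
def specific_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v):
    """Gravimetric capacitance from galvanostatic discharge: C = I*dt/(m*dV), in F/g."""
    return current_a * discharge_time_s / (mass_g * voltage_window_v)

def retention_percent(c_after_cycling, c_initial):
    """Capacitance retention after cycling, as a percentage."""
    return 100.0 * c_after_cycling / c_initial

# Placeholder inputs, for illustration only:
c = specific_capacitance(current_a=0.002, discharge_time_s=1100,
                         mass_g=0.002, voltage_window_v=0.5)
print(f"{c:.0f} F/g; retention example: {retention_percent(1830, 2192):.1f} %")
```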
