1.
Article in English | MEDLINE | ID: mdl-38082732

ABSTRACT

In this study, we employed transfer learning to overcome the challenge of limited data availability in EEG-based emotion detection, with ResNet50 as the base model. We also employed a novel feature combination: the input to the model was an image matrix comprising Mean Phase Coherence (MPC) in the upper-triangular part and Magnitude Squared Coherence (MSC) in the lower-triangular part, further improved by placing features obtained from Differential Entropy (DE) on the diagonal. The dataset used in this study, SEED (62-channel EEG), comprises three classes (Positive, Neutral, and Negative). We calculated both subject-dependent and subject-independent accuracy. The subject-dependent accuracy, obtained using 10-fold cross-validation, was 93.1%; subject-independent classification, performed with the leave-one-subject-out (LOSO) strategy, achieved 71.6%. Both accuracies are at least twice the 33.3% chance level for three-class classification. The study found MSC and MPC promising for EEG-based emotion classification. Future work includes data augmentation techniques, enhanced classifiers, and better features for emotion classification.
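A minimal sketch of how such a feature image might be assembled. The channel count, sampling rate, Welch segment length, and the choice to average MSC across frequencies are illustrative assumptions, not details taken from the paper:

```python
import numpy as np
from scipy.signal import hilbert, coherence

def mean_phase_coherence(x, y):
    # Phase-locking between two signals via the Hilbert analytic phase.
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

def differential_entropy(x):
    # DE under a Gaussian assumption: 0.5 * ln(2 * pi * e * var).
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def feature_image(eeg, fs=200.0):
    # eeg: (n_channels, n_samples) array. Returns an (n_channels,
    # n_channels) matrix with MPC above the diagonal, MSC below it,
    # and DE on the diagonal, suitable as a single-channel "image".
    n = eeg.shape[0]
    img = np.zeros((n, n))
    for i in range(n):
        img[i, i] = differential_entropy(eeg[i])
        for j in range(i + 1, n):
            img[i, j] = mean_phase_coherence(eeg[i], eeg[j])
            _, cxy = coherence(eeg[i], eeg[j], fs=fs, nperseg=128)
            img[j, i] = cxy.mean()  # collapse MSC across frequencies
    return img
```

For the 62-channel SEED data this yields a 62 x 62 matrix per trial, which can then be resized or tiled to match the ResNet50 input shape.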


Subject(s)
Electroencephalography, Emotions, Electroencephalography/methods, Learning, Entropy, Machine Learning
2.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 459-462, 2021 11.
Article in English | MEDLINE | ID: mdl-34891332

ABSTRACT

Phonemes are classified into different categories based on the place and manner of articulation. We investigate the differences between the neural correlates of imagined nasal and bilabial consonants (distinct phonological categories). Mean phase coherence is used as a metric for measuring the phase synchronisation between pairs of electrodes in six cortical regions (auditory, motor, prefrontal, sensorimotor, somatosensory and premotor) during the imagery of nasal and bilabial consonants. A statistically significant difference at the 95% confidence level is observed in the beta and lower-gamma bands in various cortical regions. Our observations are in line with the Directions Into Velocities of Articulators (DIVA) and dual-stream prediction models and support the hypothesis that phonological categories not only exist in articulated speech but can also be distinguished from the EEG of imagined speech.
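Band-limited mean phase coherence between an electrode pair can be sketched as follows; the sampling rate, filter order, and band edges (beta as 13-30 Hz) are conventional assumptions rather than parameters reported in the abstract:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, low, high, fs, order=4):
    # Zero-phase Butterworth band-pass (e.g. beta: 13-30 Hz).
    b, a = butter(order, [low, high], btype="band", fs=fs)
    return filtfilt(b, a, x)

def band_mpc(x, y, band, fs=250.0):
    # Mean phase coherence of two electrode signals restricted to one
    # frequency band: magnitude of the mean unit phasor of the
    # instantaneous phase difference.
    xf = bandpass(x, band[0], band[1], fs)
    yf = bandpass(y, band[0], band[1], fs)
    dphi = np.angle(hilbert(xf)) - np.angle(hilbert(yf))
    return np.abs(np.mean(np.exp(1j * dphi)))
```

Two signals with a fixed phase offset inside the band give an MPC near 1; uncorrelated signals give a value near 0.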


Subject(s)
Speech Perception , Speech , Electroencephalography
3.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2226-2229, 2021 11.
Article in English | MEDLINE | ID: mdl-34891729

ABSTRACT

Phonological categories in articulated speech are defined based on the place and manner of articulation. In this work, we investigate whether the phonological categories of the prompts imagined during speech imagery lead to differences in phase synchronization in various cortical regions that can be discriminated from the EEG captured during the imagination. Nasal and bilabial consonants are the two phonological categories considered, owing to their differences in both place and manner of articulation. Mean phase coherence (MPC) is used for measuring the phase synchronization, and a shallow neural network (NN) is used as the classifier. As a benchmark, we have also designed another NN based on statistical parameters extracted from imagined-speech EEG. The NN trained on beta-band MPC values gives classification results superior to NNs trained on alpha-band MPC values, gamma-band MPC values, and the statistical parameters extracted from the EEG.

Clinical relevance: A brain-computer interface (BCI) is a promising tool for aiding differently-abled people and for neurorehabilitation. One of the challenges in designing a speech-imagery-based BCI is the identification of speech prompts that lead to distinct neural activations. We have shown that nasal and bilabial consonants lead to dissimilar activations; hence, prompts orthogonal in these phonological categories are good choices as speech imagery prompts.
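The classification stage might look like the sketch below. The data here are entirely synthetic stand-ins for per-trial MPC feature vectors (one value per electrode pair); the class means, spread, pair count, and hidden-layer size are invented for illustration and do not come from the paper:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical MPC feature vectors for two imagined-phoneme classes.
rng = np.random.default_rng(42)
n_pairs = 15  # e.g. all pairs among six region electrodes
X = np.vstack([rng.normal(0.3, 0.05, (40, n_pairs)),   # "nasal" trials
               rng.normal(0.5, 0.05, (40, n_pairs))])  # "bilabial" trials
y = np.repeat([0, 1], 40)

# A single small hidden layer stands in for the paper's shallow NN.
clf = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=2000, random_state=0).fit(X, y)
```

In practice the features would be the band-limited MPC values per trial, and accuracy would be reported on held-out trials rather than the training set.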


Subject(s)
Brain-Computer Interfaces, Speech, Electroencephalography, Humans, Imagery, Psychotherapy, Imagination
4.
Front Neurosci ; 15: 642251, 2021.
Article in English | MEDLINE | ID: mdl-33994922

ABSTRACT

Over the past decade, many researchers have come up with different implementations of systems for decoding covert or imagined speech from EEG (electroencephalogram). These implementations differ from each other in several aspects, from data acquisition to machine learning algorithms, which makes a direct comparison between them difficult. This review article puts together all the relevant works published in the last decade on decoding imagined speech from EEG into a single framework. Every important aspect of designing such a system, such as the selection of words to be imagined, the number of electrodes to be recorded, temporal and spatial filtering, feature extraction, and the classifier, is reviewed. This helps a researcher compare the relative merits and demerits of the different approaches and choose the most suitable one. Since speech is the most natural form of communication, one that human beings acquire even without formal education, imagined speech is an ideal choice of prompt for evoking brain activity patterns for a BCI (brain-computer interface) system, although research on developing real-time (online) speech-imagery-based BCI systems is still in its infancy. A covert-speech-based BCI can help people with disabilities improve their quality of life. It can also be used for covert communication in environments that do not support vocal communication. This paper also discusses some future directions that will aid the deployment of speech-imagery-based BCI for practical applications, rather than only for laboratory experiments.

5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 5106-5110, 2020 07.
Article in English | MEDLINE | ID: mdl-33019135

ABSTRACT

Amblyopia is a medical condition in which the visual input from one of the eyes is suppressed by the brain. This leads to reduced visual acuity and poor or complete loss of stereopsis. Conventional clinical tests such as the Worth 4-dot test and the Bagolini striated lens test can only detect the presence of suppression but cannot quantify its extent, which is important for assessing the effectiveness of treatments for amblyopia. A novel approach for quantifying the level of suppression in amblyopia is proposed in this paper. We hypothesize that the level of suppression can be measured from the symmetry or asymmetry of the suppression experienced during a dichoptic image recognition task. Preliminary studies on fifty-one normal subjects show that the difference between the accuracies of the left and right eyes can be used as a measure of asymmetry. An equivalence test performed using the 'two one-sided t-tests' (TOST) procedure shows that the equivalence of the accuracies of the left and right eyes for normal subjects is statistically significant (p = .03, symmetric equivalence margin of 5 percentage points). To validate this method, six amblyopic children underwent the test, and the results obtained are promising. To the authors' knowledge, this is the first work to make use of VR glasses and a dichoptic image recognition task for quantifying the level of ocular suppression in amblyopic patients.
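The TOST equivalence test with a symmetric margin can be sketched as follows. The paired formulation and the helper name `tost_paired` are assumptions for illustration; the abstract does not state whether the test was paired:

```python
import numpy as np
from scipy import stats

def tost_paired(a, b, margin):
    """Paired two one-sided t-tests (TOST) for equivalence.

    H0: |mean(a - b)| >= margin. Returns the TOST p-value, the larger of
    the two one-sided p-values; equivalence is supported when it is small.
    """
    d = np.asarray(a, float) - np.asarray(b, float)
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    # Reject "mean(d) <= -margin" in favour of "mean(d) > -margin":
    p_lower = stats.t.sf((d.mean() + margin) / se, df=n - 1)
    # Reject "mean(d) >= +margin" in favour of "mean(d) < +margin":
    p_upper = stats.t.cdf((d.mean() - margin) / se, df=n - 1)
    return max(p_lower, p_upper)
```

With the left- and right-eye accuracies of the fifty-one subjects as the paired samples and `margin=0.05` (5 percentage points), a small returned p-value supports equivalence, i.e. symmetric suppression.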


Subject(s)
Amblyopia, Virtual Reality, Amblyopia/diagnosis, Child, Eyeglasses, Humans, Vision, Binocular, Visual Acuity