Results 1 - 20 of 26,755
1.
Sci Rep ; 14(1): 15432, 2024 07 04.
Article in English | MEDLINE | ID: mdl-38965248

ABSTRACT

Previous research has primarily employed deep learning models such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) for decoding imagined character signals. These approaches have treated the temporal and spatial features of the signals in a sequential, parallel, or single-feature manner. However, there has been limited research on the cross-relationships between temporal and spatial features, despite the inherent association between channels and sampling points in Brain-Computer Interface (BCI) signal acquisition, which holds significant information about brain activity. To address this gap, we propose a Temporal-Spatial Cross-Attention Network model, named TSCA-Net. TSCA-Net comprises four modules: the Temporal Feature (TF), the Spatial Feature (SF), the Temporal-Spatial Cross (TSCross), and the Classifier. The TF module combines an LSTM and a Transformer to extract temporal features from BCI signals, while the SF module captures spatial features. The TSCross module learns the correlations between the temporal and spatial features, and the Classifier predicts the label of the BCI data. We validated the TSCA-Net model on publicly available handwritten-character datasets of spiking activity recorded from two micro-electrode arrays (MEAs). The results showed that the proposed TSCA-Net outperformed the comparison models (EEG-Net, EEG-TCNet, S3T, GRU, LSTM, R-Transformer, and ViT) in terms of accuracy, precision, recall, and F1 score, achieving 92.66%, 92.77%, 92.70%, and 92.58%, respectively. The TSCA-Net model demonstrated a 3.65% to 7.49% improvement in accuracy over the comparison models.
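
As a rough illustration of the cross-attention idea described above (not the authors' TSCA-Net code; module sizes, head counts, and tensor shapes are assumptions), a PyTorch sketch of bidirectional attention between temporal tokens and spatial (channel) tokens might look like this:

```python
# Minimal sketch of temporal-spatial cross-attention; not the published TSCA-Net.
import torch
import torch.nn as nn

class TemporalSpatialCrossAttention(nn.Module):
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        # temporal tokens query spatial tokens and vice versa
        self.t2s = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.s2t = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, temporal_feats, spatial_feats):
        # temporal_feats: (batch, n_times, d_model); spatial_feats: (batch, n_channels, d_model)
        t_out, _ = self.t2s(temporal_feats, spatial_feats, spatial_feats)
        s_out, _ = self.s2t(spatial_feats, temporal_feats, temporal_feats)
        return t_out, s_out

# Toy shapes: 32 trials, 128 time steps, 96 recording channels, 64-dimensional embeddings.
cross = TemporalSpatialCrossAttention()
t_tokens, s_tokens = torch.randn(32, 128, 64), torch.randn(32, 96, 64)
t_fused, s_fused = cross(t_tokens, s_tokens)
```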


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Neural Networks, Computer , Humans , Electroencephalography/methods , Imagination/physiology , Brain/physiology , Attention/physiology , Deep Learning , Signal Processing, Computer-Assisted
2.
Int J Neural Syst ; 34(9): 2450046, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39010724

ABSTRACT

This study proposes an innovative expert system that uses EEG signals exclusively to diagnose schizophrenia in its early stages. For diagnosing psychiatric and neurological disorders, electroencephalogram (EEG) testing is considered a financially viable, safe, and reliable alternative. Using the reconstructed phase space (RPS) and the continuous wavelet transform, the researchers maximized the differences between the nonstationary EEG signals of healthy individuals and individuals with schizophrenia, differences that cannot be observed in the time, frequency, or time-frequency domains. This reveals significant information, highlighting more distinguishable features. A deep learning network was then trained to enhance the accuracy of the resulting image classification. The algorithm's efficacy was confirmed through three distinct methods: a split of 70% of the dataset for training, 15% for validation, and the remaining 15% for testing; 5-fold cross-validation; and a leave-one-out classification approach. Each method was repeated 100 times to ascertain the algorithm's robustness. The performance metrics derived from these tests (accuracy, precision, sensitivity, F1 score, Matthews correlation coefficient, and Kappa) indicated remarkable outcomes. The algorithm demonstrated steady performance across all evaluation strategies, underscoring its relevance and reliability. The outcomes validate the system's accuracy, precision, sensitivity, and robustness by showcasing its capability to autonomously differentiate individuals diagnosed with schizophrenia from healthy individuals.
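
For orientation, a minimal sketch of the reconstructed phase space (time-delay embedding) step on a single EEG channel; the embedding dimension, delay, and toy signal are assumptions, and the paper's subsequent continuous wavelet transform and CNN stages are not shown:

```python
# Time-delay embedding of a 1-D signal into a reconstructed phase space (RPS).
import numpy as np

def reconstruct_phase_space(x, m=3, tau=10):
    """Return an (N - (m - 1) * tau, m) matrix of delay vectors."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

fs = 250
x = np.sin(2 * np.pi * 10 * np.arange(0, 2, 1 / fs))  # toy 10 Hz signal, 2 s at 250 Hz
trajectory = reconstruct_phase_space(x, m=3, tau=6)
print(trajectory.shape)  # (488, 3)
```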


Subject(s)
Deep Learning , Electroencephalography , Schizophrenia , Wavelet Analysis , Schizophrenia/diagnosis , Schizophrenia/physiopathology , Humans , Electroencephalography/methods , Algorithms , Adult , Reproducibility of Results , Signal Processing, Computer-Assisted , Neural Networks, Computer
3.
Sensors (Basel) ; 24(13)2024 Jun 21.
Article in English | MEDLINE | ID: mdl-39000810

ABSTRACT

The current study investigated the effectiveness of social robots in facilitating stress management interventions for university students by evaluating their physiological responses. We collected electroencephalogram (EEG) brain activity and Galvanic Skin Responses (GSRs) together with self-reported questionnaires from two groups of students who practiced a deep breathing exercise with either a social robot or a laptop. From the GSR signals, we obtained the change in participants' arousal level throughout the intervention, and from the EEG signals, we extracted the change in their emotional valence using the neurometric of Frontal Alpha Asymmetry (FAA). While subjective perceptions of stress and user experience did not differ significantly between the two groups, the physiological signals revealed differences in their emotional responses as evaluated by the arousal-valence model. The Laptop group tended to show a decrease in arousal level that, in some cases, was accompanied by negative valence indicative of boredom or lack of interest. The Robot group, on the other hand, displayed two patterns: some demonstrated a decrease in arousal with positive valence indicative of calmness and relaxation, and others showed an increase in arousal together with positive valence interpreted as excitement. These findings provide interesting insights into the impact of social robots as mental well-being coaches on students' emotions, particularly in the presence of the novelty effect. Additionally, they provide evidence for the efficacy of physiological signals as an objective and reliable measure of user experience in human-robot interaction (HRI) settings.
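
As a pointer to how the FAA neurometric is typically computed (a generic sketch, not the study's pipeline; electrode names, band edges, and sampling rate are assumptions):

```python
# Frontal Alpha Asymmetry: log alpha power (8-13 Hz) at a right frontal electrode minus
# the homologous left electrode; positive values indicate relatively greater
# left-hemisphere activation, commonly read as more positive valence.
import numpy as np
from scipy.signal import welch

def alpha_power(x, fs):
    f, pxx = welch(x, fs=fs, nperseg=2 * fs)
    band = (f >= 8) & (f <= 13)
    return np.sum(pxx[band]) * (f[1] - f[0])

fs = 256
rng = np.random.default_rng(0)
f3, f4 = rng.standard_normal(60 * fs), rng.standard_normal(60 * fs)  # toy 60 s recordings
faa = np.log(alpha_power(f4, fs)) - np.log(alpha_power(f3, fs))
```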


Subject(s)
Electroencephalography , Emotions , Galvanic Skin Response , Mental Health , Robotics , Stress, Psychological , Humans , Robotics/methods , Male , Female , Emotions/physiology , Electroencephalography/methods , Stress, Psychological/therapy , Stress, Psychological/physiopathology , Galvanic Skin Response/physiology , Young Adult , Adult , Surveys and Questionnaires , Arousal/physiology , Students/psychology
4.
Sensors (Basel) ; 24(13)2024 Jun 25.
Article in English | MEDLINE | ID: mdl-39000904

ABSTRACT

This study aims to demonstrate the feasibility of using a new wireless electroencephalography (EEG)-electromyography (EMG) wearable approach to generate characteristic mixed EEG-EMG patterns during mouth movements, in order to detect distinct movement patterns for people with severe speech impairments. This paper describes a method for detecting mouth movement based on a new signal processing technology suitable for sensor integration and machine learning applications, and it examines the relationship between mouth motion and brain activity in an effort to develop nonverbal interfaces for people who have lost the ability to communicate, such as people with paralysis. A set of experiments was conducted to assess the efficacy of the proposed method for feature selection, and the classification of mouth movements was found to be meaningful. EEG-EMG signals were also collected during silent mouthing of phonemes, and a few-shot neural network trained to classify the phonemes from these signals yielded a classification accuracy of 95%. This approach to collecting and processing bioelectrical signals for phoneme recognition is a promising avenue for future communication aids.


Subject(s)
Electroencephalography , Electromyography , Signal Processing, Computer-Assisted , Wireless Technology , Humans , Electroencephalography/methods , Electroencephalography/instrumentation , Electromyography/methods , Electromyography/instrumentation , Wireless Technology/instrumentation , Mouth/physiopathology , Mouth/physiology , Adult , Male , Movement/physiology , Neural Networks, Computer , Speech Disorders/diagnosis , Speech Disorders/physiopathology , Female , Wearable Electronic Devices , Machine Learning
5.
Sensors (Basel) ; 24(13)2024 Jun 26.
Article in English | MEDLINE | ID: mdl-39000941

ABSTRACT

Functional Near-Infrared Spectroscopy (fNIRS) and Electroencephalography (EEG) are commonly employed neuroimaging methods in developmental neuroscience. Since they offer complementary strengths and their simultaneous recording is relatively easy, combining them is highly desirable. However, to date, very few infant studies have been conducted with NIRS-EEG, partly because analyzing and interpreting multimodal data is challenging. In this work, we propose a framework to carry out a multivariate pattern analysis that uses an NIRS-EEG feature matrix, obtained by selecting EEG trials presented within larger NIRS blocks and combining the corresponding features. Importantly, this classifier is intended to be sensitive enough to apply to individual-level, rather than group-level, data. We tested the classifier on NIRS-EEG data acquired from five newborn infants who were listening to human speech and monkey vocalizations. We evaluated how accurately the model classified stimuli when applied to EEG data alone, NIRS data alone, or combined NIRS-EEG data. For three out of five infants, the classifier achieved high and statistically significant accuracy when using features from the NIRS data alone, but even higher accuracy when using combined EEG and NIRS data, particularly with both hemoglobin components. For the other two infants, accuracies were lower overall, but for one of them the highest accuracy was still achieved when using combined EEG and NIRS data with both hemoglobin components. We discuss how classification based on joint NIRS-EEG data could be adapted to the requirements of different experimental paradigms.
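
To make the combined-feature idea concrete, a minimal sketch of stacking per-trial EEG and NIRS (HbO/HbR) features into one matrix and estimating individual-level accuracy with cross-validation; all array shapes, labels, and feature counts here are illustrative assumptions:

```python
# Joint NIRS-EEG feature matrix fed to a simple linear classifier (toy data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
eeg_feats = rng.standard_normal((40, 20))   # 40 trials x 20 EEG features
nirs_feats = rng.standard_normal((40, 10))  # 40 trials x 10 NIRS features (HbO and HbR)
labels = rng.integers(0, 2, 40)             # e.g., speech vs. monkey vocalization

X = np.hstack([eeg_feats, nirs_feats])      # combined NIRS-EEG feature matrix
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = cross_val_score(clf, X, labels, cv=5).mean()  # per-infant accuracy estimate
```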


Subject(s)
Electroencephalography , Spectroscopy, Near-Infrared , Humans , Spectroscopy, Near-Infrared/methods , Electroencephalography/methods , Infant, Newborn , Infant , Male , Female , Brain/physiology , Brain/diagnostic imaging
6.
Sensors (Basel) ; 24(13)2024 Jun 27.
Article in English | MEDLINE | ID: mdl-39000946

ABSTRACT

Personal identification systems based on electroencephalographic (EEG) signals have their own strengths and limitations, and the stability of EEG signals strongly affects such systems. The human emotional state is one of the important factors that affect EEG signal stability, and stress is a major emotional state that affects individuals' capability to perform day-to-day tasks. The main objective of this work is to study the effect of mental and emotional stress on such systems. Two experiments were performed. In the first, we used hand-crafted features (time-domain, frequency-domain, and non-linear features) followed by a machine learning classifier. In the second, raw EEG signals were used as input to deep learning approaches. Different types of mental and emotional stress were examined using two datasets, SAM 40 and DEAP. The experiments showed that performing enrollment in a relaxed or calm state and identification in a stressed state has a negative effect on the identification system's performance. The best accuracy achieved for the DEAP dataset was 99.67% in the calm state and 96.67% in the stressed state. For the SAM 40 dataset, the best accuracies were 99.67%, 93.33%, 92.5%, and 91.67% for the relaxed state and for stress induced by identifying mirror images, the Stroop color-word test, and solving arithmetic operations, respectively.
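
A minimal sketch of the hand-crafted-feature pipeline in the first experiment, under assumed shapes, bands, and subject counts (enrollment on calm-state epochs, identification on stressed-state epochs); it is not the study's exact feature set:

```python
# Band-power features per channel + SVM identification across emotional states (toy data).
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

def band_powers(epoch, fs, bands=((4, 8), (8, 13), (13, 30), (30, 45))):
    f, pxx = welch(epoch, fs=fs, nperseg=fs, axis=-1)
    return np.hstack([pxx[:, (f >= lo) & (f < hi)].mean(axis=-1) for lo, hi in bands])

fs, n_channels = 128, 32
rng = np.random.default_rng(0)
calm = rng.standard_normal((100, n_channels, 3 * fs))      # enrollment epochs (relaxed state)
stressed = rng.standard_normal((100, n_channels, 3 * fs))  # identification epochs (stressed state)
subject_ids = np.repeat(np.arange(10), 10)                 # 10 subjects, 10 epochs each

X_enroll = np.array([band_powers(e, fs) for e in calm])
X_ident = np.array([band_powers(e, fs) for e in stressed])
clf = SVC(kernel="rbf").fit(X_enroll, subject_ids)
print("identification accuracy:", clf.score(X_ident, subject_ids))
```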


Subject(s)
Electroencephalography , Stress, Psychological , Humans , Electroencephalography/methods , Stress, Psychological/physiopathology , Stress, Psychological/diagnosis , Male , Signal Processing, Computer-Assisted , Adult , Female , Emotions/physiology , Machine Learning , Young Adult , Deep Learning
7.
Sensors (Basel) ; 24(13)2024 Jun 29.
Article in English | MEDLINE | ID: mdl-39001001

ABSTRACT

Electroencephalography (EEG) remains pivotal in neuroscience for its non-invasive exploration of brain activity, yet traditional electrodes are plagued by artifacts, and the application of conductive paste poses practical challenges. Tripolar concentric ring electrode (TCRE) sensors used for EEG (tEEG) attenuate artifacts automatically, improving signal quality. Hydrogel tapes offer a promising alternative to conductive paste, providing mess-free application and reliable electrode-skin contact in locations without hair. Since the electrodes of the TCRE sensors are only 1.0 mm apart, the impedance of the skin-to-electrode impedance-matching medium is critical. This study evaluates the efficacy of four hydrogel tapes for EEG electrode application, comparing impedance and alpha wave characteristics. Healthy adult participants underwent tEEG recordings using the different tapes. The results highlight varying impedances and successful alpha wave detection despite increased tape-induced impedance. The EEGLAB toolbox for MATLAB was used for signal processing. This study underscores the potential of hydrogel tapes as a convenient and effective alternative to traditional paste, enriching tEEG research methodologies. Two of the conductive hydrogel tapes showed significantly higher alpha wave power than the other tapes and never significantly lower power.


Subject(s)
Electrodes , Electroencephalography , Hydrogels , Humans , Electroencephalography/methods , Hydrogels/chemistry , Adult , Male , Electric Conductivity , Female , Electric Impedance , Signal Processing, Computer-Assisted , Young Adult , Brain/physiology
8.
Sensors (Basel) ; 24(13)2024 Jun 29.
Article in English | MEDLINE | ID: mdl-39001013

ABSTRACT

Ischemic stroke is a type of brain dysfunction caused by pathological changes in the blood vessels of the brain which leads to brain tissue ischemia and hypoxia and ultimately results in cell necrosis. Without timely and effective treatment in the early time window, ischemic stroke can lead to long-term disability and even death. Therefore, rapid detection is crucial in patients with ischemic stroke. In this study, we developed a deep learning model based on fusion features extracted from electroencephalography (EEG) signals for the fast detection of ischemic stroke. Specifically, we recruited 20 ischemic stroke patients who underwent EEG examination during the acute phase of stroke and collected EEG signals from 19 adults with no history of stroke as a control group. Afterwards, we constructed correlation-weighted Phase Lag Index (cwPLI), a novel feature, to explore the synchronization information and functional connectivity between EEG channels. Moreover, the spatio-temporal information from functional connectivity and the nonlinear information from complexity were fused by combining the cwPLI matrix and Sample Entropy (SaEn) together to further improve the discriminative ability of the model. Finally, the novel MSE-VGG network was employed as a classifier to distinguish ischemic stroke from non-ischemic stroke data. Five-fold cross-validation experiments demonstrated that the proposed model possesses excellent performance, with accuracy, sensitivity, and specificity reaching 90.17%, 89.86%, and 90.44%, respectively. Experiments on time consumption verified that the proposed method is superior to other state-of-the-art examinations. This study contributes to the advancement of the rapid detection of ischemic stroke, shedding light on the untapped potential of EEG and demonstrating the efficacy of deep learning in ischemic stroke identification.
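
For reference, a minimal sketch of the standard Phase Lag Index between channel pairs, which is the quantity the authors' correlation-weighted variant (cwPLI) builds on; the Sample Entropy fusion and MSE-VGG classifier are not reproduced, and the data here are synthetic:

```python
# Standard Phase Lag Index (PLI) between EEG channels via the Hilbert transform.
import numpy as np
from scipy.signal import hilbert

def pli_matrix(data):
    """data: (n_channels, n_samples) -> symmetric PLI connectivity matrix."""
    phases = np.angle(hilbert(data, axis=-1))
    n = data.shape[0]
    pli = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            pli[i, j] = pli[j, i] = np.abs(np.mean(np.sign(np.sin(phases[i] - phases[j]))))
    return pli

rng = np.random.default_rng(0)
eeg = rng.standard_normal((19, 2500))  # 19 channels, 10 s at 250 Hz (toy data)
print(pli_matrix(eeg).shape)           # (19, 19)
```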


Subject(s)
Deep Learning , Electroencephalography , Ischemic Stroke , Humans , Electroencephalography/methods , Ischemic Stroke/physiopathology , Ischemic Stroke/diagnosis , Male , Female , Aged , Middle Aged , Brain Ischemia/physiopathology , Brain Ischemia/diagnosis , Signal Processing, Computer-Assisted , Stroke/physiopathology , Stroke/diagnosis
9.
Sensors (Basel) ; 24(13)2024 Jun 30.
Article in English | MEDLINE | ID: mdl-39001037

ABSTRACT

Drowsiness is a major factor in costly errors and even fatal accidents in areas such as construction, transportation, industry, and medicine, where vigilance often goes unmonitored. Implementing a drowsiness detection system can greatly help reduce error and accident rates by alerting individuals when they enter a drowsy state. This research proposes an electroencephalography (EEG)-based approach for detecting drowsiness. EEG signals are passed through a preprocessing chain composed of artifact removal and segmentation to ensure accurate detection, followed by feature extraction methods that capture drowsiness-related features. This work explores the use of various machine learning algorithms, namely Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Naive Bayes (NB), Decision Tree (DT), and Multilayer Perceptron (MLP), to analyze EEG signals sourced from the DROZY database, carefully labeled into two distinct states of alertness (awake and drowsy). Segmentation into 10 s intervals ensures precise detection, while a relevant feature selection layer enhances accuracy and generalizability. The proposed approach achieves high accuracy rates of 99.84% and 96.4% for the intra-subject (subject-by-subject) and inter-subject (cross-subject) modes, respectively. SVM emerges as the most effective model for drowsiness detection in the intra-subject mode, while MLP demonstrates superior accuracy in the inter-subject mode. This research offers a promising avenue for implementing proactive drowsiness detection systems to enhance occupational safety across various industries.
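
A minimal sketch of the inter-subject (cross-subject) evaluation idea: cut continuous EEG into 10 s segments, extract a simple power feature, and keep each subject's segments within a single fold; the feature, channel count, and labels are assumptions, not the paper's configuration:

```python
# 10 s segmentation + SVM with subject-wise cross-validation (toy data).
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

fs, win = 256, 10 * 256
rng = np.random.default_rng(0)
recordings = rng.standard_normal((12, 8, 120 * fs))  # 12 subjects, 8 channels, 2 min each
state_per_subject = rng.integers(0, 2, 12)           # awake (0) vs. drowsy (1), toy labels

X, y, groups = [], [], []
for subj, rec in enumerate(recordings):
    for start in range(0, rec.shape[-1] - win + 1, win):       # non-overlapping 10 s segments
        f, pxx = welch(rec[:, start:start + win], fs=fs, nperseg=fs, axis=-1)
        X.append(pxx[:, (f >= 1) & (f <= 30)].mean(axis=-1))   # coarse 1-30 Hz power per channel
        y.append(state_per_subject[subj])
        groups.append(subj)

clf = make_pipeline(StandardScaler(), SVC())
scores = cross_val_score(clf, np.array(X), np.array(y), groups=groups, cv=GroupKFold(n_splits=4))
```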


Subject(s)
Electroencephalography , Sleep Stages , Support Vector Machine , Humans , Electroencephalography/methods , Sleep Stages/physiology , Algorithms , Electrodes , Signal Processing, Computer-Assisted , Bayes Theorem , Machine Learning
10.
Sensors (Basel) ; 24(13)2024 Jul 05.
Article in English | MEDLINE | ID: mdl-39001147

ABSTRACT

With the development of data mining technology, the analysis of event-related potential (ERP) data has evolved from statistical analysis of time-domain features to data-driven techniques based on supervised and unsupervised learning. However, there are still many challenges in understanding the relationship between ERP components and the representation of familiar and unfamiliar faces. To address this, this paper proposes a Dynamic Multi-Scale Convolution model for group recognition of familiar and unfamiliar faces. The approach uses generated weight masks for cross-subject familiar/unfamiliar face recognition within a multi-scale model. The model employs a variable-length filter generator to dynamically determine the optimal filter length for time-series samples, thereby capturing features at different time scales. Comparative experiments were conducted to evaluate the model's performance against state-of-the-art (SOTA) models. The results demonstrate that our model achieves impressive outcomes, with a balanced accuracy of 93.20% and an F1 score of 88.54%, outperforming the methods used for comparison. The ERP data extracted from different time regions in the model can also provide data-driven technical support for research on the representation of different ERP components.


Subject(s)
Evoked Potentials , Facial Recognition , Humans , Evoked Potentials/physiology , Facial Recognition/physiology , Electroencephalography/methods , Algorithms , Face/physiology
11.
Sensors (Basel) ; 24(13)2024 Jul 06.
Article in English | MEDLINE | ID: mdl-39001171

ABSTRACT

Drivers in a state of road hypnosis exhibit both external and internal characteristics. External features are readily apparent and can be observed directly; internal features are not and must be measured with dedicated instruments. Electroencephalography (EEG), an internal characteristic of the driver, is a gold-standard physiological signal for identifying the driver's state and is therefore of great significance for identifying road hypnosis. This paper proposes an identification method for road hypnosis based on human EEG data. EEG data from drivers in road hypnosis were collected through vehicle driving experiments and virtual driving experiments. The collected data were preprocessed with the power spectral density (PSD) method, and EEG characteristics were extracted. The neural networks EEGNet, RNN, and LSTM were used to train the road hypnosis identification model. The results show that the EEGNet-based model performs best, with an identification accuracy of 93.01%. This study improves the effectiveness and accuracy of road hypnosis identification and reveals its essential characteristics, which is of great significance for improving the safety of intelligent vehicles and reducing the number of traffic accidents caused by road hypnosis.
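
A minimal PyTorch sketch of one of the compared architectures, an LSTM classifier over EEG epochs; the input shape, layer sizes, and binary road-hypnosis labels are assumptions, and the PSD preprocessing described above is omitted:

```python
# LSTM classifier over multichannel EEG epochs (toy shapes).
import torch
import torch.nn as nn

class EEGLSTMClassifier(nn.Module):
    def __init__(self, n_channels=16, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):          # x: (batch, n_times, n_channels)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])    # class logits from the final hidden state

model = EEGLSTMClassifier()
epochs = torch.randn(8, 500, 16)   # 8 epochs, 500 samples, 16 channels
logits = model(epochs)             # (8, 2): road hypnosis vs. normal driving
```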


Subject(s)
Automobile Driving , Electroencephalography , Hypnosis , Neural Networks, Computer , Humans , Electroencephalography/methods , Hypnosis/methods , Accidents, Traffic
12.
J Vis Exp ; (208)2024 Jun 28.
Article in English | MEDLINE | ID: mdl-39007605

ABSTRACT

The meningeal lymphatic vessels (MLVs) play an important role in the removal of toxins from the brain. The development of innovative technologies for the stimulation of MLV functions is a promising direction in the progress of the treatment of various brain diseases associated with MLV abnormalities, including Alzheimer's and Parkinson's diseases, brain tumors, traumatic brain injuries, and intracranial hemorrhages. Sleep is a natural state when the brain's drainage processes are most active. Therefore, stimulation of the brain's drainage and MLVs during sleep may have the most pronounced therapeutic effects. However, such commercial technologies do not currently exist. This study presents a new portable technology of transcranial photobiomodulation (tPBM) under electroencephalographic (EEG) control of sleep designed to photo-stimulate removal of toxins (e.g., soluble amyloid beta (Aβ)) from the brain of aged BALB/c mice with the ability to compare the therapeutic effectiveness of different optical resources. The technology can be used in the natural condition of a home cage without anesthesia, maintaining the motor activity of mice. These data open up new prospects for developing non-invasive and clinically promising photo-technologies for the correction of age-related changes in the MLV functions and brain's drainage processes and for effectively cleansing brain tissues from metabolites and toxins. This technology is intended both for preclinical studies of the functions of the sleeping brain and for developing clinically relevant treatments for sleep-related brain diseases.


Subject(s)
Brain , Electroencephalography , Mice, Inbred BALB C , Sleep , Animals , Mice , Brain/radiation effects , Electroencephalography/methods , Sleep/physiology , Sleep/radiation effects , Low-Level Light Therapy/methods , Amyloid beta-Peptides/metabolism , Lymphatic Vessels/radiation effects , Lymphatic Vessels/physiology
13.
Hum Brain Mapp ; 45(10): e26720, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-38994740

ABSTRACT

Electro/Magneto-EncephaloGraphy (EEG/MEG) source imaging (EMSI) of epileptic activity from deep generators is often challenging due to the higher sensitivity of EEG/MEG to superficial regions and to the spatial configuration of subcortical structures. We previously demonstrated the ability of the coherent Maximum Entropy on the Mean (cMEM) method to accurately localize the superficial cortical generators and their spatial extent. Here, we propose a depth-weighted adaptation of cMEM to localize deep generators more accurately. These methods were evaluated using realistic MEG/high-density EEG (HD-EEG) simulations of epileptic activity and actual MEG/HD-EEG recordings from patients with focal epilepsy. We incorporated depth-weighting within the MEM framework to compensate for its preference for superficial generators. We also included a mesh of both hippocampi, as an additional deep structure in the source model. We generated 5400 realistic simulations of interictal epileptic discharges for MEG and HD-EEG involving a wide range of spatial extents and signal-to-noise ratio (SNR) levels, before investigating EMSI on clinical HD-EEG in 16 patients and MEG in 14 patients. Clinical interictal epileptic discharges were marked by visual inspection. We applied three EMSI methods: cMEM, depth-weighted cMEM and depth-weighted minimum norm estimate (MNE). The ground truth was defined as the true simulated generator or as a drawn region based on clinical information available for patients. For deep sources, depth-weighted cMEM improved the localization when compared to cMEM and depth-weighted MNE, whereas depth-weighted cMEM did not deteriorate localization accuracy for superficial regions. For patients' data, we observed improvement in localization for deep sources, especially for the patients with mesial temporal epilepsy, for which cMEM failed to reconstruct the initial generator in the hippocampus. Depth weighting was more crucial for MEG (gradiometers) than for HD-EEG. Similar findings were found when considering depth weighting for the wavelet extension of MEM. In conclusion, depth-weighted cMEM improved the localization of deep sources without or with minimal deterioration of the localization of the superficial sources. This was demonstrated using extensive simulations with MEG and HD-EEG and clinical MEG and HD-EEG for epilepsy patients.
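
To illustrate what depth weighting does in a linear inverse solver (here a generic depth-weighted minimum norm estimate, not the authors' depth-weighted cMEM), with the weighting exponent, regularization, and array sizes as assumptions:

```python
# Depth-weighted minimum norm estimate: inverse column-norm weighting of the lead field.
import numpy as np

def depth_weighted_mne(leadfield, measurements, gamma=0.8, snr=3.0):
    """leadfield: (n_sensors, n_sources); measurements: (n_sensors, n_times)."""
    col_norms = np.linalg.norm(leadfield, axis=0)
    weights = col_norms ** (-2 * gamma)              # boosts weakly seen (deep) sources
    R = np.diag(weights)                             # prior source covariance
    gram = leadfield @ R @ leadfield.T
    lam = np.trace(gram) / (leadfield.shape[0] * snr ** 2)   # simple SNR-based regularization
    kernel = R @ leadfield.T @ np.linalg.inv(gram + lam * np.eye(leadfield.shape[0]))
    return kernel @ measurements                     # (n_sources, n_times)

rng = np.random.default_rng(0)
L = rng.standard_normal((64, 500))                   # 64 sensors, 500 candidate sources (toy)
M = rng.standard_normal((64, 100))                   # 100 time samples
sources = depth_weighted_mne(L, M)
```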


Subject(s)
Electroencephalography , Entropy , Magnetoencephalography , Humans , Magnetoencephalography/methods , Electroencephalography/methods , Adult , Female , Male , Computer Simulation , Young Adult , Epilepsy/physiopathology , Epilepsy/diagnostic imaging , Middle Aged , Brain Mapping/methods , Brain/diagnostic imaging , Brain/physiopathology , Hippocampus/diagnostic imaging , Hippocampus/physiopathology , Models, Neurological
14.
J Vis ; 24(7): 8, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38990066

ABSTRACT

In the present study, we used Hierarchical Frequency Tagging (Gordon et al., 2017) to investigate in electroencephalography how different levels of the neural processing hierarchy interact with category-selective attention during visual object recognition. We constructed stimulus sequences of cyclic wavelet scrambled face and house stimuli at two different frequencies (f1 = 0.8 Hz and f2 = 1 Hz). For each trial, two stimulus sequences of different frequencies were superimposed and additionally augmented by a sinusoidal contrast modulation with f3 = 12.5 Hz. This allowed us to simultaneously assess higher level processing using semantic wavelet-induced frequency-tagging (SWIFT) and processing in earlier visual levels using steady-state visually evoked potentials (SSVEPs), along with their intermodulation (IM) components. To investigate the category specificity of the SWIFT signal, we manipulated the category congruence between target and distractor by superimposing two sequences containing stimuli from the same or different object categories. Participants attended to one stimulus (target) and ignored the other (distractor). Our results showed successful tagging of different levels of the cortical hierarchy. Using linear mixed-effects modeling, we detected different attentional modulation effects on lower versus higher processing levels. SWIFT and IM components were substantially increased for target versus distractor stimuli, reflecting attentional selection of the target stimuli. In addition, distractor stimuli from the same category as targets elicited stronger SWIFT signals than distractor stimuli from a different category indicating category-selective attention. In contrast, for IM components, this category-selective attention effect was largely absent, indicating that IM components probably reflect more stimulus-specific processing.
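
For orientation, a minimal sketch of reading out power at the tagging frequencies and low-order intermodulation (IM) components from a long epoch's spectrum; the frequencies follow the abstract, while the epoch length, sampling rate, and toy data are assumptions:

```python
# Spectral readout at tagging and intermodulation frequencies from one long epoch.
import numpy as np

fs, duration = 500, 50                      # 50 s at 500 Hz -> 0.02 Hz frequency resolution
rng = np.random.default_rng(0)
epoch = rng.standard_normal(fs * duration)  # toy single-channel EEG

spectrum = np.abs(np.fft.rfft(epoch)) ** 2
freqs = np.fft.rfftfreq(len(epoch), d=1 / fs)

targets = {"SWIFT f1": 0.8, "SWIFT f2": 1.0, "SSVEP f3": 12.5,
           "IM f1+f2": 1.8, "IM f2-f1": 0.2}
for name, f in targets.items():
    idx = np.argmin(np.abs(freqs - f))      # nearest FFT bin to the target frequency
    print(f"{name} ({f} Hz): power {spectrum[idx]:.2f}")
```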


Subject(s)
Attention , Electroencephalography , Evoked Potentials, Visual , Pattern Recognition, Visual , Photic Stimulation , Humans , Attention/physiology , Male , Female , Electroencephalography/methods , Evoked Potentials, Visual/physiology , Young Adult , Adult , Photic Stimulation/methods , Pattern Recognition, Visual/physiology , Reaction Time/physiology
15.
J Neural Eng ; 21(4)2024 Jul 16.
Article in English | MEDLINE | ID: mdl-38959877

ABSTRACT

Objective. The amygdala is traditionally known for its involvement in emotional processing, but its involvement in motor control remains relatively unexplored, with sparse investigations into the neural mechanisms governing amygdaloid motor movement and inhibition. This study aimed to characterize amygdaloid beta-band (13-30 Hz) power between 'Go' and 'No-go' trials of an arm-reaching task. Approach. Ten participants with drug-resistant epilepsy implanted with stereoelectroencephalographic (SEEG) electrodes in the amygdala were enrolled in this study. SEEG data were recorded throughout discrete phases of a direct-reach Go/No-go task, during which participants reached toward a touchscreen monitor or withheld movement based on a colored cue. Multitaper power analysis along with Wilcoxon signed-rank and Yates-corrected Z tests were used to assess significant modulations of beta power between the Response and fixation (baseline) phases in the 'Go' and 'No-go' conditions. Main results. In the 'Go' condition, nine out of the ten participants showed a significant decrease in relative beta-band power during the Response phase (p ⩽ 0.0499). In the 'No-go' condition, eight out of the ten participants presented a statistically significant increase in relative beta-band power during the Response phase (p ⩽ 0.0494). Four out of the eight participants with electrodes in the contralateral hemisphere and seven out of the eight participants with electrodes in the ipsilateral hemisphere presented significant modulation of beta-band power in both the 'Go' and 'No-go' conditions. At the group level, no significant differences were found between the contralateral and ipsilateral sides or between genders. Significance. This study reports beta-band power modulation in the human amygdala during voluntary movement in the setting of motor execution and inhibition. This finding supplements prior research in various brain regions associating beta-band power with motor control. The distinct beta-power modulation observed between these response conditions suggests involvement of amygdaloid oscillations in differentiating between motor inhibition and execution.
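
A minimal sketch of the phase comparison on synthetic data: relative beta-band power per trial in the Response versus fixation windows, tested with a Wilcoxon signed-rank test. Welch's method stands in for the multitaper estimate used in the study, and the trial counts and window lengths are assumptions:

```python
# Relative beta-band power (13-30 Hz) per trial, Response vs. fixation, Wilcoxon test.
import numpy as np
from scipy.signal import welch
from scipy.stats import wilcoxon

def relative_beta_power(trials, fs):
    f, pxx = welch(trials, fs=fs, nperseg=fs, axis=-1)
    beta = pxx[:, (f >= 13) & (f <= 30)].sum(axis=-1)
    total = pxx[:, (f >= 1) & (f <= 100)].sum(axis=-1)
    return beta / total

fs = 1000
rng = np.random.default_rng(0)
fixation = rng.standard_normal((60, fs))   # 60 trials, 1 s of one SEEG contact (toy)
response = rng.standard_normal((60, fs))
stat, p = wilcoxon(relative_beta_power(response, fs), relative_beta_power(fixation, fs))
```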


Subject(s)
Amygdala , Arm , Beta Rhythm , Psychomotor Performance , Humans , Amygdala/physiology , Male , Female , Adult , Beta Rhythm/physiology , Psychomotor Performance/physiology , Arm/physiology , Young Adult , Movement/physiology , Middle Aged , Drug Resistant Epilepsy/physiopathology , Electroencephalography/methods
16.
J Neural Eng ; 21(4)2024 Jul 16.
Article in English | MEDLINE | ID: mdl-38968936

ABSTRACT

Objective. Domain adaptation has been recognized as a potent solution to the challenge of limited training data for electroencephalography (EEG) classification tasks. Existing studies primarily focus on homogeneous environments; however, the heterogeneous properties of EEG data arising from device diversity cannot be overlooked. This motivates the development of heterogeneous domain adaptation methods that can fully exploit the knowledge from an auxiliary heterogeneous domain for EEG classification. Approach. In this article, we propose a novel model named informative representation fusion (IRF) to tackle the problem of unsupervised heterogeneous domain adaptation in the context of EEG data. In IRF, we consider different perspectives of the data, i.e. independent and identically distributed (iid) and non-iid, to learn different representations. Specifically, from the non-iid perspective, IRF models high-order correlations among data by hypergraphs and develops hypergraph encoders to obtain data representations of each domain. From the iid perspective, multi-layer perceptron networks applied to the source and target domain data yield another type of representation for both domains. Subsequently, an attention mechanism fuses these two types of representations to yield informative features. To learn transferable representations, the maximum mean discrepancy is utilized to align the distributions of the source and target domains based on the fused features. Main results. Experimental results on several real-world datasets demonstrate the effectiveness of the proposed model. Significance. This article handles an EEG classification setting where the source and target EEG data lie in different spaces and, moreover, under an unsupervised learning setting. This situation is practical in the real world but barely studied in the literature. The proposed model achieves high classification accuracy, and this study is important for the commercial application of EEG-based BCIs.
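
As a concrete anchor for the distribution-alignment step, a minimal sketch of a biased RBF-kernel estimate of the maximum mean discrepancy (MMD) between source and target features; the kernel bandwidth and feature shapes are assumptions, and the hypergraph encoders and attention fusion are not reproduced:

```python
# RBF-kernel MMD^2 between source- and target-domain feature batches (biased estimate).
import torch

def rbf_mmd2(x, y, sigma=1.0):
    """x: (n, d) source features, y: (m, d) target features -> scalar MMD^2."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

source = torch.randn(128, 32)
target = torch.randn(96, 32) + 0.5         # shifted toy target domain
alignment_loss = rbf_mmd2(source, target)  # added to the task loss during training
```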


Subject(s)
Electroencephalography , Electroencephalography/methods , Electroencephalography/classification , Humans , Unsupervised Machine Learning , Algorithms , Neural Networks, Computer
17.
Neurology ; 103(3): e209608, 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-38991197

ABSTRACT

OBJECTIVES: Rhythmic and periodic patterns (RPPs) on EEG in patients in a coma after cardiac arrest are associated with a poor neurologic outcome. We characterize RPPs using quantitative EEG (qEEG) in relation to outcomes. METHODS: Post hoc analysis was conducted on 172 patients in a coma after cardiac arrest from the TELSTAR trial, all with RPPs. Quantitative EEG included the corrected background continuity index (BCI*), relative discharge power (RDP), discharge frequency, and shape similarity. Neurologic outcomes at 3 months after arrest were categorized as poor (CPC = 3-5) or good (CPC = 1-2). RESULTS: A total of 16 patients (9.3%) had a good outcome. Patients with good outcomes showed later RPP onset (28.5 vs 20.1 hours after arrest, p < 0.05) and higher background continuity at RPP onset (BCI* = 0.83 vs BCI* = 0.59, p < 0.05). BCI* <0.45 at RPP onset, maximum BCI* <0.76, RDP >0.47, or shape similarity >0.75 were consistently associated with poor outcomes, identifying 36%, 22%, 40%, or 24% of patients with poor outcomes, respectively. In patients meeting both BCI* >0.44 at RPP onset and BCI* >0.75 within 72 hours, the probability of a good outcome doubled to 18%. DISCUSSION: Sufficient EEG background continuity before and during RPPs is crucial for meaningful recovery. Background continuity, discharge power, and shape similarity can help select patients with relevant chances of recovery and may guide treatment. TRIAL REGISTRATION INFORMATION: February 4, 2014, ClinicalTrials.gov, NCT02056236.


Subject(s)
Coma , Electroencephalography , Heart Arrest , Humans , Coma/physiopathology , Coma/etiology , Electroencephalography/methods , Male , Female , Heart Arrest/complications , Heart Arrest/physiopathology , Middle Aged , Aged
18.
J Neural Eng ; 21(4)2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38959876

ABSTRACT

Objective. Patients suffering from heavy paralysis or locked-in syndrome can regain communication using a Brain-Computer Interface (BCI). Visual event-related potential (ERP) based BCI paradigms exploit visuospatial attention (VSA) to targets laid out on a screen. However, performance drops if the user does not direct their eye gaze at the intended target, harming the utility of this class of BCIs for patients suffering from eye motor deficits. We aim to create an ERP decoder that is less dependent on eye gaze. Approach. ERP component latency jitter plays a role in covert VSA decoding. We introduce a novel decoder that compensates for these latency effects, termed Woody Classifier-based Latency Estimation (WCBLE). We carried out a BCI experiment recording ERP data under overt and covert VSA, and introduce a novel special case of covert VSA termed split VSA, simulating the experience of patients with severely impaired eye motor control. We evaluate WCBLE on this dataset and the BNCI2014-009 dataset, within and across VSA conditions, to study the dependency on eye gaze and its variation during the experiment. Main results. WCBLE outperforms state-of-the-art methods in the VSA conditions of interest in gaze-independent decoding, without reducing overt VSA performance. Results from across-condition evaluation show that WCBLE is more robust to varying VSA conditions throughout a BCI operation session. Significance. Together, these results point towards a pathway to achieving gaze independence through suited ERP decoding. Our proposed gaze-independent solution enhances decoding performance in those cases where performing overt VSA is not possible.
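
For context, a minimal sketch of Woody's classic iterative latency estimation, the idea WCBLE builds its latency compensation on (this is not the WCBLE decoder itself; the maximum lag, iteration count, and toy ERP are assumptions):

```python
# Woody-style latency estimation: align single trials to an evolving average template.
import numpy as np

def woody_latencies(trials, max_lag=50, n_iter=5):
    """trials: (n_trials, n_samples) -> per-trial latency shift in samples."""
    lags = np.zeros(len(trials), dtype=int)
    template = trials.mean(axis=0)
    for _ in range(n_iter):
        for i, trial in enumerate(trials):
            xc = np.correlate(trial, template, mode="full")
            center = len(trial) - 1                      # zero-lag index
            window = xc[center - max_lag: center + max_lag + 1]
            lags[i] = np.argmax(window) - max_lag
        aligned = np.array([np.roll(t, -lag) for t, lag in zip(trials, lags)])
        template = aligned.mean(axis=0)                  # refine the template and repeat
    return lags

rng = np.random.default_rng(0)
erp = np.exp(-0.5 * ((np.arange(300) - 150) / 20.0) ** 2)        # toy ERP-like bump
trials = np.array([np.roll(erp, s) for s in rng.integers(-30, 30, 40)])
trials += 0.3 * rng.standard_normal(trials.shape)
print(woody_latencies(trials)[:10])                              # estimated jitter per trial
```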


Subject(s)
Attention , Brain-Computer Interfaces , Electroencephalography , Fixation, Ocular , Humans , Male , Female , Adult , Fixation, Ocular/physiology , Attention/physiology , Electroencephalography/methods , Young Adult , Photic Stimulation/methods , Reaction Time/physiology , Evoked Potentials, Visual/physiology
19.
Article in English | MEDLINE | ID: mdl-38976469

ABSTRACT

The steady-state visual evoked potential (SSVEP) has become one of the most prominent BCI paradigms, offering a high information transfer rate, and has been widely applied in rehabilitation and assistive applications. This paper proposes a least-squares (LS) unified framework to summarize the correlation analysis (CA)-based SSVEP spatial filtering methods from a machine learning perspective. Within this framework, the commonalities and differences between various spatial filtering methods become apparent, the interpretation of computational factors becomes intuitive, and spatial filters can be determined by solving a generalized optimization problem with non-linear and regularization terms. Moreover, the proposed LS framework provides a foundation for utilizing the knowledge behind these spatial filtering methods in further classification/regression model designs. Through a comparative analysis of existing representative spatial filtering methods, recommendations are made for superior and robust design strategies. These recommended strategies are further integrated to fill the research gaps and demonstrate the ability of the proposed LS framework to promote algorithmic improvements, resulting in five new spatial filtering methods. This study could offer significant insights into the relationships between various design strategies in spatial filtering methods from a machine learning perspective, and would also contribute to the development of high-performance SSVEP recognition methods.
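
To ground the correlation-analysis family the framework unifies, a minimal sketch of the classic CCA-based SSVEP recognizer (not one of the paper's five new methods); the candidate frequencies, harmonic count, and toy signal are assumptions:

```python
# Classic CCA-based SSVEP frequency recognition with sinusoidal reference templates.
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_ssvep(segment, fs, candidate_freqs, n_harmonics=2):
    """segment: (n_samples, n_channels) -> index of the best-matching stimulation frequency."""
    t = np.arange(segment.shape[0]) / fs
    scores = []
    for f in candidate_freqs:
        refs = np.column_stack([fn(2 * np.pi * h * f * t)
                                for h in range(1, n_harmonics + 1)
                                for fn in (np.sin, np.cos)])
        cca = CCA(n_components=1).fit(segment, refs)
        u, v = cca.transform(segment, refs)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return int(np.argmax(scores))

fs = 250
rng = np.random.default_rng(0)
t = np.arange(2 * fs) / fs
eeg = np.outer(np.sin(2 * np.pi * 10 * t), np.ones(8)) + 0.5 * rng.standard_normal((2 * fs, 8))
print(cca_ssvep(eeg, fs, [8.0, 10.0, 12.0, 15.0]))   # should select index 1 (10 Hz)
```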


Subject(s)
Algorithms , Brain-Computer Interfaces , Electroencephalography , Evoked Potentials, Visual , Machine Learning , Humans , Evoked Potentials, Visual/physiology , Electroencephalography/methods , Least-Squares Analysis , Nonlinear Dynamics , Reproducibility of Results , Male
20.
Article in English | MEDLINE | ID: mdl-38976470

ABSTRACT

The process of reconstructing underlying cortical and subcortical electrical activities from Electroencephalography (EEG) or Magnetoencephalography (MEG) recordings is called Electrophysiological Source Imaging (ESI). Given the complementarity between EEG and MEG in measuring radial and tangential cortical sources, combined EEG/MEG is considered beneficial for improving the reconstruction performance of ESI algorithms. Traditional algorithms mainly emphasize incorporating predesigned neurophysiological priors to solve the ESI problem. Deep learning frameworks aim to directly learn the mapping from scalp EEG/MEG measurements to the underlying brain source activities in a data-driven manner, demonstrating superior performance compared to traditional methods. However, most existing deep learning approaches to the ESI problem operate on a single modality, either EEG or MEG, meaning the complementarity of these two modalities has not been fully utilized. How to fuse EEG and MEG in a more principled manner under the deep learning paradigm remains a challenging question. This study develops a Multi-Modal Deep Fusion (MMDF) framework using Attention Neural Networks (ANN), termed MMDF-ANN, to fully leverage the complementary information between EEG and MEG for solving the ESI inverse problem. Specifically, the proposed brain source imaging approach consists of four phases: feature extraction, weight generation, deep feature fusion, and source mapping. Our experimental results on both a synthetic dataset and a real dataset demonstrate that fusing EEG and MEG can significantly improve source localization accuracy compared to using a single modality. Compared to the benchmark algorithms, MMDF-ANN demonstrated good stability when reconstructing sources with extended activation areas and in situations where the EEG/MEG measurements have a low signal-to-noise ratio.
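
A minimal PyTorch sketch of attention-weighted fusion of per-sample EEG and MEG feature vectors, illustrating the weight-generation and fusion phases in spirit only; layer sizes and shapes are assumptions, and this is not the MMDF-ANN architecture:

```python
# Softmax attention over two modality-specific feature vectors, fused into one representation.
import torch
import torch.nn as nn

class ModalityAttentionFusion(nn.Module):
    def __init__(self, d_eeg=64, d_meg=64, d_fused=64):
        super().__init__()
        self.proj_eeg = nn.Linear(d_eeg, d_fused)
        self.proj_meg = nn.Linear(d_meg, d_fused)
        self.score = nn.Linear(d_fused, 1)

    def forward(self, eeg_feat, meg_feat):
        feats = torch.stack([self.proj_eeg(eeg_feat), self.proj_meg(meg_feat)], dim=1)  # (B, 2, d)
        weights = torch.softmax(self.score(torch.tanh(feats)), dim=1)                   # (B, 2, 1)
        return (weights * feats).sum(dim=1)                                             # (B, d)

fusion = ModalityAttentionFusion()
fused = fusion(torch.randn(16, 64), torch.randn(16, 64))   # 16 samples of fused features
```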


Subject(s)
Algorithms , Deep Learning , Electroencephalography , Magnetoencephalography , Neural Networks, Computer , Magnetoencephalography/methods , Humans , Electroencephalography/methods , Adult , Male , Multimodal Imaging/methods , Female , Brain/physiology , Brain/diagnostic imaging , Young Adult