Results 1 - 20 of 5,003
1.
J Neural Eng ; 21(3)2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38842111

ABSTRACT

Objective. Multi-channel electroencephalogram (EEG) technology in brain-computer interface (BCI) research offers the advantage of enhanced spatial resolution and system performance. However, this also implies that more time is needed in the data processing stage, which is not conducive to the rapid response of BCI. Hence, it is a necessary and challenging task to reduce the number of EEG channels while maintaining decoding effectiveness. Approach. In this paper, we propose a local optimization method based on the Fisher score for within-subject EEG channel selection. Initially, we extract the common spatial pattern characteristics of EEG signals in different bands, calculate Fisher scores for each channel based on these characteristics, and rank them accordingly. Subsequently, we employ a local optimization method to finalize the channel selection. Main results. On the BCI Competition IV Dataset IIa, our method selects an average of 11 channels across four bands, achieving an average accuracy of 79.37%. This represents a 6.52% improvement compared to using the full set of 22 channels. On our self-collected dataset, our method similarly achieves a significant improvement of 24.20% with less than half of the channels, resulting in an average accuracy of 76.95%. Significance. This research explores the importance of channel combinations in channel selection tasks and reveals that appropriately combining channels can further enhance the quality of channel selection. The results indicate that the model selected a small number of channels with higher accuracy in two-class motor imagery EEG classification tasks. Additionally, it improves the portability of BCI systems through channel selection and combinations, offering the potential for the development of portable BCI systems.
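The abstract does not include the authors' implementation; as a rough illustration only, a minimal Python sketch of Fisher-score-based channel ranking on per-channel features might look like the following. The array shapes, the random stand-in data, and the `top_k` value are assumptions, not the paper's actual pipeline.

```python
import numpy as np

def fisher_scores(features, labels):
    """Rank channels by Fisher score for a two-class problem.

    features: (n_trials, n_channels), one scalar feature per channel
              (e.g. log band power or a CSP-derived value).
    labels:   (n_trials,) with values 0 or 1.
    """
    f0, f1 = features[labels == 0], features[labels == 1]
    num = (f0.mean(axis=0) - f1.mean(axis=0)) ** 2       # between-class scatter
    den = f0.var(axis=0) + f1.var(axis=0) + 1e-12        # within-class scatter
    return num / den

# Illustrative usage with random data standing in for real MI-EEG features.
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 22))        # 120 trials, 22 channels
y = rng.integers(0, 2, size=120)
scores = fisher_scores(X, y)
top_k = 11                                # e.g. ~11 channels, as reported for BCI IV-2a
selected = np.argsort(scores)[::-1][:top_k]
print("selected channel indices:", selected)
```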


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Imagination , Electroencephalography/methods , Humans , Imagination/physiology , Algorithms , Movement/physiology
2.
Biomed Phys Eng Express ; 10(4)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38781932

ABSTRACT

Deep learning models have demonstrated remarkable performance in the classification of motor imagery BCI systems. However, these models exhibit sensitivity to challenging trials, often called hard trials, leading to performance degradation. In this paper, we address this issue by proposing two novel methods for identifying and mitigating the impact of hard trials on model performance. The first method leverages model prediction scores to discern hard trials. The second approach employs a quantitative explainable artificial intelligence (XAI) approach, enabling a more transparent and interpretable means of hard trial identification. The identified hard trials are removed from the entire motor imagery training and validation dataset, and the deep learning model is re-trained using the dataset without hard trials. To evaluate the efficacy of these proposed methods, experiments were conducted on the Open BMI dataset. The results of the hold-out analysis show that the proposed quantitative XAI-based hard trial removal method statistically improved the average classification accuracy of the baseline deep CNN model from 63.77% to 68.70%, with p-value = 7.66e-11 for subject-specific MI classification. Additionally, analyzing the scalp map representing the average relevance scores of correctly classified trials compared to misclassified trials provides deeper insight into identifying hard trials. The results indicate that the proposed quantitative XAI-based approach outperforms the prediction-score-based approach in hard trial identification.
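As a hedged sketch of the simpler of the two proposed ideas (prediction-score-based identification of hard trials), the following assumes softmax outputs from an already-trained baseline model; the threshold value and shapes are illustrative, not the authors' settings.

```python
import numpy as np

def hard_trial_mask(probs, labels, threshold=0.55):
    """Flag trials the current model classifies confidently and correctly.

    probs:     (n_trials, n_classes) softmax outputs of a trained baseline model.
    labels:    (n_trials,) integer ground-truth labels.
    threshold: minimum probability assigned to the true class (assumed value).
    Returns a boolean mask of trials to keep (hard trials are False).
    """
    true_class_prob = probs[np.arange(len(labels)), labels]
    return true_class_prob >= threshold

# After computing the mask, the baseline CNN would be re-trained on
# X[keep], y[keep], i.e. the dataset with hard trials removed.
```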


Subject(s)
Algorithms , Brain-Computer Interfaces , Deep Learning , Electroencephalography , Humans , Electroencephalography/methods , Imagination , Artificial Intelligence , Neural Networks, Computer
3.
Article in English | MEDLINE | ID: mdl-38805337

ABSTRACT

Bimanual coordination is important for developing a natural motor brain-computer interface (BCI) from electroencephalogram (EEG) signals, covering bilateral arm training for rehabilitation, bimanual coordination for daily-life assistance, and improved multidimensional control of BCIs. For the same task targets of both hands, simultaneous and sequential bimanual movements are two different manners of bimanual coordination. Planning and performing motor sequences are fundamental human abilities, and in many complex tasks it is more natural to execute sequential movements than simultaneous ones. However, to date, for these two manners in which the two hands coordinate to reach the same task targets, neither the differences in their neural correlates nor the feasibility of discriminating the movements has been explored. In this study, we investigated these two issues for the first time based on a bimanual reaching task. Neural correlates in terms of movement-related cortical potentials, event-related oscillations, and source imaging showed unique neural encoding patterns of sequential movements. Moreover, for the same task targets of both hands, simultaneous and sequential bimanual movements were successfully discriminated in both the pre-movement and movement execution periods. This study revealed the neural encoding patterns of sequential bimanual movements and demonstrated their value in developing a more natural, high-performance motor BCI.


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Hand , Movement , Psychomotor Performance , Humans , Electroencephalography/methods , Male , Movement/physiology , Female , Adult , Hand/physiology , Young Adult , Psychomotor Performance/physiology , Algorithms , Motor Cortex/physiology , Healthy Volunteers
4.
Article in English | MEDLINE | ID: mdl-38781061

ABSTRACT

Steady-state visual-evoked potential (SSVEP)-based brain-computer interfaces (BCIs) offer a non-invasive means of communication through high-speed speller systems. However, their efficiency is highly dependent on individual training data acquired during time-consuming calibration sessions. To address the challenge of data insufficiency in SSVEP-based BCIs, we introduce SSVEP-DAN, the first dedicated neural network model designed to align SSVEP data across different domains, encompassing various sessions, subjects, or devices. Our experimental results demonstrate the ability of SSVEP-DAN to transform existing source SSVEP data into supplementary calibration data. This results in a significant improvement in SSVEP decoding accuracy while reducing the calibration time. We envision SSVEP-DAN playing a crucial role in future applications of high-performance SSVEP-based BCIs. The source code for this work is available at: https://github.com/CECNL/SSVEP-DAN.


Subject(s)
Algorithms , Brain-Computer Interfaces , Electroencephalography , Evoked Potentials, Visual , Humans , Evoked Potentials, Visual/physiology , Male , Adult , Female , Neural Networks, Computer , Young Adult , Calibration , Reproducibility of Results
5.
J Neuroeng Rehabil ; 21(1): 91, 2024 May 29.
Article in English | MEDLINE | ID: mdl-38812014

ABSTRACT

BACKGROUND: The most challenging aspect of rehabilitation is the repurposing of residual functional plasticity in stroke patients. To achieve this, numerous plasticity-based clinical rehabilitation programs have been developed. This study aimed to investigate the effects of motor imagery (MI)-based brain-computer interface (BCI) rehabilitation programs on upper extremity hand function in patients with chronic hemiplegia. DESIGN: A randomized controlled trial compliant with the 2010 Consolidated Standards of Reporting Trials (CONSORT) statement. METHODS: Forty-six eligible stroke patients with upper limb motor dysfunction participated in the study, six of whom dropped out. The patients were randomly divided into a BCI group and a control group. The BCI group received BCI therapy and conventional rehabilitation therapy, while the control group received conventional rehabilitation only. The Fugl-Meyer Assessment of the Upper Extremity (FMA-UE) score was used as the primary outcome to evaluate upper extremity motor function. Additionally, functional magnetic resonance imaging (fMRI) scans were performed on all patients before and after treatment, in both the resting and task states. We measured the amplitude of low-frequency fluctuation (ALFF), regional homogeneity (ReHo), the z conversion of ALFF (zALFF), and the z conversion of ReHo (zReHo) in the resting state. The task state was divided into four tasks: left-hand grasping, right-hand grasping, imagining left-hand grasping, and imagining right-hand grasping. Finally, meaningful differences were assessed using correlation analysis of the clinical assessments and functional measures. RESULTS: A total of 40 patients completed the study, 20 in the BCI group and 20 in the control group. Task-related blood-oxygen-level-dependent (BOLD) analysis showed that when performing the motor grasping task with the affected hand, the BCI group exhibited significant activation in the ipsilateral middle cingulate gyrus, precuneus, inferior parietal gyrus, postcentral gyrus, middle frontal gyrus, superior temporal gyrus, and contralateral middle cingulate gyrus. When imagining a grasping task with the affected hand, the BCI group exhibited greater activation in the ipsilateral superior frontal gyrus (medial) and middle frontal gyrus after treatment. However, the activation of the contralateral superior frontal gyrus decreased in the BCI group relative to the control group. Resting-state fMRI revealed increased zALFF in multiple cerebral regions, including the contralateral precentral gyrus and calcarine and the ipsilateral middle occipital gyrus and cuneus, and decreased zALFF in the ipsilateral superior temporal gyrus in the BCI group relative to the control group. Increased zReHo in the ipsilateral cuneus and contralateral calcarine and decreased zReHo in the contralateral middle temporal gyrus, temporal pole, and superior temporal gyrus were observed post-intervention. In the subsequent correlation analysis, the increase in the FMA-UE score was positively correlated with the mean zALFF of the contralateral precentral gyrus (r = 0.425, P < 0.05) and the mean zReHo of the right cuneus (r = 0.399, P < 0.05). CONCLUSION: BCI therapy is effective and safe for arm rehabilitation after severe poststroke hemiparesis. The correlation of the zALFF of the contralateral precentral gyrus and the zReHo of the ipsilateral cuneus with motor improvements suggests that these values can be used as prognostic measures for BCI-based stroke rehabilitation.
We found that motor function was related to visual and spatial processing, suggesting potential avenues for refining treatment strategies for stroke patients. TRIAL REGISTRATION: The trial is registered in the Chinese Clinical Trial Registry (number ChiCTR2000034848, registered July 21, 2020).


Subject(s)
Brain-Computer Interfaces , Imagery, Psychotherapy , Magnetic Resonance Imaging , Stroke Rehabilitation , Stroke , Upper Extremity , Humans , Male , Stroke Rehabilitation/methods , Female , Middle Aged , Upper Extremity/physiopathology , Imagery, Psychotherapy/methods , Stroke/physiopathology , Stroke/complications , Aged , Adult , Imagination/physiology , Cerebral Cortex/diagnostic imaging , Cerebral Cortex/physiopathology
6.
Sensors (Basel) ; 24(10)2024 May 09.
Article in English | MEDLINE | ID: mdl-38793855

ABSTRACT

Recently, due to physical aging, diseases, accidents, and other factors, the population with lower limb disabilities has been increasing, and there is consequently a growing demand for wheelchair products. With the popularization of intelligent concepts, modern product design tends to be more intelligent and multi-functional than in the past. This supports the design of a new, fully functional, intelligent wheelchair that can assist people with lower limb disabilities in their day-to-day life. Based on the user-centered design (UCD) concept, this study focused on the needs of people with lower limb disabilities. Accordingly, the demand for different functions of intelligent wheelchair products was studied through questionnaire surveys, interviews, a literature review, and expert consultation, and the function and appearance of the intelligent wheelchair were then defined. A brain-machine interface system was developed for controlling the motion of the intelligent wheelchair, catering to the needs of disabled individuals. Furthermore, ergonomics theory was used as a guide to determine the size of the intelligent wheelchair seat, and eventually a new intelligent wheelchair featuring stair climbing, posture adjustment, seat elevation, and easy interaction was developed. This paper provides a reference for the design and upgrading of future intelligent wheelchair products.


Subject(s)
Brain-Computer Interfaces , Feasibility Studies , Wheelchairs , Humans , Disabled Persons , Equipment Design , Ergonomics/methods , User-Centered Design , Surveys and Questionnaires
7.
Sensors (Basel) ; 24(10)2024 May 10.
Article in English | MEDLINE | ID: mdl-38793895

ABSTRACT

Brain-computer interface (BCI) systems include signal acquisition, preprocessing, feature extraction, classification, and an application phase. In fNIRS-BCI systems, deep learning (DL) algorithms play a crucial role in enhancing accuracy. Unlike traditional machine learning (ML) classifiers, DL algorithms eliminate the need for manual feature extraction: DL neural networks automatically extract hidden patterns/features within a dataset to classify the data. In this study, a hand-gripping (closing and opening) two-class motor activity dataset from twenty healthy participants was acquired, and the proposed integrated contextual gate network (ICGN) algorithm was applied to that dataset to enhance the classification accuracy. The proposed algorithm extracts features from the filtered data and generates patterns based on the information from the previous cells within the network. Classification is then performed based on the similar generated patterns within the dataset. The accuracy of the proposed algorithm is compared with that of long short-term memory (LSTM) and bidirectional long short-term memory (Bi-LSTM). The proposed ICGN algorithm yielded a classification accuracy of 91.23 ± 1.60%, which is significantly (p < 0.025) higher than the 84.89 ± 3.91% and 88.82 ± 1.96% achieved by LSTM and Bi-LSTM, respectively. An open-access, three-class (right- and left-hand finger tapping and dominant foot tapping) dataset of 30 subjects was used to validate the proposed algorithm. The results show that ICGN can be used efficiently for the classification of two- and three-class problems in fNIRS-based BCI applications.
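For context on the baselines mentioned above, a minimal PyTorch sketch of an LSTM/Bi-LSTM classifier for windowed fNIRS signals is shown below; the channel count, window length, and layer sizes are assumptions, and this is not the ICGN architecture itself.

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Baseline sequence classifier for windowed fNIRS signals."""
    def __init__(self, n_channels=8, hidden=64, n_classes=2, bidirectional=False):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True,
                            bidirectional=bidirectional)
        out_dim = hidden * (2 if bidirectional else 1)
        self.head = nn.Linear(out_dim, n_classes)

    def forward(self, x):                  # x: (batch, time, channels)
        _, (h, _) = self.lstm(x)
        # Concatenate the final forward/backward hidden states for Bi-LSTM.
        h = torch.cat([h[-2], h[-1]], dim=1) if self.lstm.bidirectional else h[-1]
        return self.head(h)

model = LSTMClassifier(bidirectional=True)     # Bi-LSTM variant
logits = model(torch.randn(4, 200, 8))         # 4 trials, 200 samples, 8 channels
print(logits.shape)                            # torch.Size([4, 2])
```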


Subject(s)
Algorithms , Brain-Computer Interfaces , Deep Learning , Neural Networks, Computer , Spectroscopy, Near-Infrared , Humans , Spectroscopy, Near-Infrared/methods , Male , Adult , Female , Young Adult , Brain/physiology , Brain/diagnostic imaging
8.
Sensors (Basel) ; 24(10)2024 May 16.
Article in English | MEDLINE | ID: mdl-38794022

ABSTRACT

The widely adopted paradigm in brain-computer interfaces (BCIs) involves motor imagery (MI), enabling improved communication between humans and machines. EEG signals derived from MI present several challenges due to their inherent characteristics, which lead to a complex process of classifying and identifying the potential tasks of a specific participant. Another issue is that BCI systems can produce noisy data and redundant channels, which in turn can lead to increased equipment and computational costs. To address these problems, optimal channel selection for multiclass MI classification based on a fusion convolutional neural network with attention blocks (FCNNA) is proposed. In this study, we developed a CNN model consisting of layers of convolutional blocks with multiple spatial and temporal filters. These filters are designed specifically to capture the distribution and relationships of signal features across different electrode locations, as well as to analyze the evolution of these features over time. Following these layers, a Convolutional Block Attention Module (CBAM) is used to further enhance EEG signal feature extraction. For channel selection, a genetic algorithm is used to select the optimal set of channels with a new technique that delivers both fixed and variable channel sets for all participants. The proposed methodology is validated, showing a 6.41% improvement in multiclass classification compared to most baseline models. Notably, we achieved the highest accuracy of 93.09% for binary classification involving left-hand and right-hand movements. In addition, the cross-subject strategy for multiclass classification yielded an impressive accuracy of 68.87%. Following channel selection, multiclass classification accuracy was further enhanced, reaching 84.53%. Overall, our experiments illustrate the efficiency of the proposed EEG MI model in both channel selection and classification, showing superior results with either a full channel set or a reduced number of channels.
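A hedged sketch of genetic-algorithm channel selection of the kind described here is given below; the fitness function is a placeholder (in practice it would be the cross-validated accuracy of the classifier on the selected channels), and the population size, mutation rate, and generation count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
N_CHANNELS, POP, GENERATIONS, P_MUT = 22, 20, 30, 0.05

def evaluate(mask):
    """Placeholder fitness: would be cross-validated accuracy of the model
    trained on the channels where mask == 1. Here a dummy score is returned."""
    return mask.mean() + rng.normal(0, 0.01)   # stand-in, not real accuracy

pop = rng.integers(0, 2, size=(POP, N_CHANNELS))   # binary channel masks
for _ in range(GENERATIONS):
    fitness = np.array([evaluate(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[-POP // 2:]]           # keep the fitter half
    children = []
    while len(children) < POP - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, N_CHANNELS)                    # single-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(N_CHANNELS) < P_MUT                # bit-flip mutation
        child[flip] = 1 - child[flip]
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([evaluate(ind) for ind in pop])]
print("selected channels:", np.flatnonzero(best))
```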


Subject(s)
Algorithms , Brain-Computer Interfaces , Electroencephalography , Neural Networks, Computer , Humans , Electroencephalography/methods , Signal Processing, Computer-Assisted , Imagination/physiology , Attention/physiology
9.
J Vis Exp ; (207)2024 May 10.
Article in English | MEDLINE | ID: mdl-38801273

ABSTRACT

This study introduces an innovative framework for neurological rehabilitation by integrating brain-computer interfaces (BCI) and virtual reality (VR) technologies with the customization of three-dimensional (3D) avatars. Traditional approaches to rehabilitation often fail to fully engage patients, primarily due to their inability to provide a deeply immersive and interactive experience. This research endeavors to fill this gap by utilizing motor imagery (MI) techniques, where participants visualize physical movements without actual execution. This method capitalizes on the brain's neural mechanisms, activating areas involved in movement execution when imagining movements, thereby facilitating the recovery process. The integration of VR's immersive capabilities with the precision of electroencephalography (EEG) to capture and interpret brain activity associated with imagined movements forms the core of this system. Digital Twins in the form of personalized 3D avatars are employed to significantly enhance the sense of immersion within the virtual environment. This heightened sense of embodiment is crucial for effective rehabilitation, aiming to bolster the connection between the patient and their virtual counterpart. By doing so, the system not only aims to improve motor imagery performance but also seeks to provide a more engaging and efficacious rehabilitation experience. Through the real-time application of BCI, the system allows for the direct translation of imagined movements into virtual actions performed by the 3D avatar, offering immediate feedback to the user. This feedback loop is essential for reinforcing the neural pathways involved in motor control and recovery. The ultimate goal of the developed system is to significantly enhance the effectiveness of motor imagery exercises by making them more interactive and responsive to the user's cognitive processes, thereby paving a new path in the field of neurological rehabilitation.


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Imagination , Virtual Reality , Humans , Imagination/physiology , Electroencephalography/methods , Adult , Neurological Rehabilitation/methods
10.
J Neural Eng ; 21(3)2024 May 30.
Article in English | MEDLINE | ID: mdl-38812288

ABSTRACT

Objective. Magnetoencephalography (MEG) shares a comparable time resolution with electroencephalography. However, MEG excels in spatial resolution, enabling it to capture even the subtlest and weakest brain signals for brain-computer interfaces (BCIs). Leveraging MEG's capabilities, specifically with optically pumped magnetometers (OPM-MEG), proves to be a promising avenue for advancing MEG-BCIs, owing to its exceptional sensitivity and portability. This study harnesses the power of high-frequency steady-state visual evoked fields (SSVEFs) to build an MEG-BCI system that is flickering-imperceptible, user-friendly, and highly accurate. Approach. We have constructed a nine-command BCI that operates on high-frequency SSVEF (58-62 Hz with a 0.5 Hz interval) stimulation. We achieved this by placing the light source inside and outside the magnetic shielding room, ensuring compliance with non-magnetic and visual stimulus presentation requirements. Five participants took part in offline experiments, during which we collected six-channel multi-dimensional MEG signals along both the vertical (Z-axis) and tangential (Y-axis) components. Our approach leveraged the ensemble task-related component analysis algorithm for SSVEF identification and system performance evaluation. Main results. The offline average accuracy of our proposed system reached an impressive 92.98% when considering multi-dimensional conjoint analysis using data from both the Z and Y axes. Our method achieved a theoretical average information transfer rate (ITR) of 58.36 bits/min with a data length of 0.7 s, and the highest individual ITR reached an impressive 63.75 bits/min. Significance. This study marks the first exploration of a high-frequency SSVEF-BCI based on OPM-MEG. These results underscore the potential and feasibility of MEG in detecting subtle brain signals, offering both theoretical insights and practical value in advancing the development and application of MEG in BCI systems.
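The ITR figures quoted above follow the standard (Wolpaw) formula commonly used for such reports; a small sketch is given below for reference, with illustrative arguments that do not reproduce the paper's exact timing assumptions.

```python
import numpy as np

def itr_bits_per_min(n_classes, accuracy, selection_time_s):
    """Standard Wolpaw information transfer rate for a BCI speller.

    n_classes:        number of selectable targets (nine commands in this study).
    accuracy:         classification accuracy P, with 1/n_classes < P < 1.
    selection_time_s: time per selection, including any gaze-shift interval.
    """
    p, n = accuracy, n_classes
    bits = np.log2(n) + p * np.log2(p) + (1 - p) * np.log2((1 - p) / (n - 1))
    return bits * 60.0 / selection_time_s

# Illustrative call only; the paper's exact timing assumptions are not reproduced.
print(round(itr_bits_per_min(9, 0.93, 2.0), 2), "bits/min")
```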


Subject(s)
Brain-Computer Interfaces , Evoked Potentials, Visual , Magnetoencephalography , Photic Stimulation , Humans , Magnetoencephalography/methods , Evoked Potentials, Visual/physiology , Adult , Male , Female , Photic Stimulation/methods , Young Adult , Visual Cortex/physiology
11.
Comput Methods Programs Biomed ; 251: 108213, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38744056

ABSTRACT

BACKGROUND AND OBJECTIVE: Brain-Computer Interface (BCI) technology has recently been advancing rapidly, bringing significant hope for improving human health and quality of life. Decoding and visualizing visually evoked electroencephalography (EEG) signals into corresponding images plays a crucial role in the practical application of BCI technology. The recent emergence of diffusion models provides a good modeling basis for this work. However, the existing diffusion models still have great challenges in generating high-quality images from EEG, due to the low signal-to-noise ratio and strong randomness of EEG signals. The purpose of this study is to address the above-mentioned challenges by proposing a framework named NeuroDM that can decode human brain responses to visual stimuli from EEG-recorded brain activity. METHODS: In NeuroDM, an EEG-Visual-Transformer (EV-Transformer) is used to extract the visual-related features with high classification accuracy from EEG signals, then an EEG-Guided Diffusion Model (EG-DM) is employed to synthesize high-quality images from the EEG visual-related features. RESULTS: We conducted experiments on two EEG datasets (one is a forty-class dataset, and the other is a four-class dataset). In the task of EEG decoding, we achieved average accuracies of 99.80% and 92.07% on two datasets, respectively. In the task of EEG visualization, the Inception Score of the images generated by NeuroDM reached 15.04 and 8.67, respectively. All the above results outperform existing methods. CONCLUSIONS: The experimental results on two EEG datasets demonstrate the effectiveness of the NeuroDM framework, achieving state-of-the-art performance in terms of classification accuracy and image quality. Furthermore, our NeuroDM exhibits strong generalization capabilities and the ability to generate diverse images.


Subject(s)
Brain-Computer Interfaces , Brain , Electroencephalography , Humans , Brain/diagnostic imaging , Brain/physiology , Algorithms , Signal-To-Noise Ratio , Signal Processing, Computer-Assisted , Evoked Potentials, Visual/physiology
12.
Comput Methods Programs Biomed ; 251: 108208, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38754326

ABSTRACT

BACKGROUND AND OBJECTIVE: Intracortical brain-computer interfaces (iBCIs) aim to help paralyzed individuals restore their motor functions by decoding neural activity into intended movement. However, changes in neural recording conditions hinder the decoding performance of iBCIs, mainly because the neural-to-kinematic mappings shift. Conventional approaches involve either training the neural decoders using large datasets before deploying the iBCI or conducting frequent calibrations during its operation. However, collecting data for extended periods can cause user fatigue, negatively impacting the quality and consistency of neural signals. Furthermore, frequent calibration imposes a substantial computational load. METHODS: This study proposes a novel approach to increase iBCIs' robustness against changing recording conditions. The approach uses three neural augmentation operators to generate augmented neural activity that mimics common recording conditions. Then, contrastive learning is used to learn latent factors by maximizing the similarity between the augmented neural activities. The learned factors are expected to remain stable despite varying recording conditions and maintain a consistent correlation with the intended movement. RESULTS: Experimental results demonstrate that the proposed iBCI outperformed the state-of-the-art iBCIs and was robust to changing recording conditions across days for long-term use on one publicly available nonhuman primate dataset. It achieved satisfactory offline decoding performance, even when a large training dataset was unavailable. CONCLUSIONS: This study paves the way for reducing the need for frequent calibration of iBCIs and collecting a large amount of annotated training data. Potential future works aim to improve offline decoding performance with an ultra-small training dataset and improve the iBCIs' robustness to severely disabled electrodes.
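As a hedged sketch of the contrastive objective described here (maximizing similarity between latent factors of two augmented views of the same neural activity), the following NT-Xent-style loss uses placeholder random embeddings; the encoder, augmentation operators, and temperature are assumptions, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """Contrastive loss between two augmented views of the same trials.

    z1, z2: (batch, dim) latent factors produced by a shared encoder from
    two differently augmented copies of the same neural activity.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)     # 2B x D, unit norm
    sim = z @ z.t() / temperature                          # cosine similarities
    n = z1.shape[0]
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    # Positive pair for row i is its counterpart in the other view.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# The augmented views would come from operators mimicking common recording
# condition changes (e.g. channel dropout, noise, gain drift) -- assumed here.
loss = nt_xent(torch.randn(8, 32), torch.randn(8, 32))
```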


Subject(s)
Brain-Computer Interfaces , Animals , Algorithms , Calibration , Humans , Signal Processing, Computer-Assisted , Movement
13.
Sci Rep ; 14(1): 11054, 2024 05 14.
Article in English | MEDLINE | ID: mdl-38744976

ABSTRACT

Brain-machine interfaces (BMIs) can substantially improve the quality of life of elderly or disabled people. However, performing complex action sequences with a BMI system is onerous because it requires issuing commands sequentially. Fundamentally different from this, we have designed a BMI system that reads out mental planning activity and issues commands in a proactive manner. To demonstrate this, we recorded brain activity from freely moving monkeys performing an instructed task and decoded it with an energy-efficient, small, and mobile field-programmable gate array hardware decoder triggering real-time action execution on smart devices. At the core of this system is an adaptive decoding algorithm that can compensate for day-by-day neuronal signal fluctuations with minimal re-calibration effort. We show that open-loop, planning-ahead control is possible using signals from primary and pre-motor areas, leading to a significant time gain in the execution of action sequences. This novel approach thus provides a stepping stone towards improved and more humane control of different smart environments with mobile brain-machine interfaces.


Subject(s)
Algorithms , Brain-Computer Interfaces , Animals , Brain/physiology , Macaca mulatta
14.
J Neural Eng ; 21(3)2024 May 17.
Article in English | MEDLINE | ID: mdl-38757187

ABSTRACT

Objective. For brain-computer interface (BCI) research, it is crucial to design an MI-EEG recognition model that has high classification accuracy and strong generalization ability and does not rely on a large number of labeled training samples. Approach. In this paper, we propose a self-supervised MI-EEG recognition method based on self-supervised learning with one-dimensional multi-task convolutional neural networks and long short-term memory (1-D MTCNN-LSTM). The model is divided into two stages: a signal transform identification stage and a pattern recognition stage. In the signal transform identification stage, the signal transform dataset is recognized by the upstream 1-D MTCNN-LSTM network model. Subsequently, the backbone network from the signal transform identification stage is transferred to the pattern recognition stage, where it is fine-tuned using a small amount of labeled data to obtain the final motion recognition model. Main results. The upstream stage of this study achieves more than 95% recognition accuracy for EEG signal transforms, reaching up to 100%. For MI-EEG pattern recognition, the model obtained recognition accuracies of 82.04% and 87.14%, with F1 scores of 0.7856 and 0.839, on the BCIC-IV-2b and BCIC-IV-2a datasets. Significance. The improved accuracy demonstrates the superiority of the proposed method, which is expected to serve as a method for accurate classification of MI-EEG in BCI systems.
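A hedged sketch of the upstream pretext task (labeling each trial by the transform applied to it) is shown below; the specific transform set is an assumption, since the abstract does not list the transforms used.

```python
import numpy as np

def make_pretext_dataset(eeg, rng=np.random.default_rng(0)):
    """Label each trial by which transform was applied to it (pretext task).

    eeg: (n_trials, n_channels, n_samples). The transform set below
    (identity, time reversal, amplitude scaling, additive noise) is
    illustrative only, not the paper's actual choice.
    """
    transforms = [
        lambda x: x,                                  # 0: original
        lambda x: x[:, ::-1],                         # 1: time reversal
        lambda x: 1.5 * x,                            # 2: amplitude scaling
        lambda x: x + rng.normal(0, 0.1, x.shape),    # 3: additive noise
    ]
    X, y = [], []
    for trial in eeg:
        k = rng.integers(len(transforms))
        X.append(transforms[k](trial))
        y.append(k)
    return np.stack(X), np.array(y)

# The upstream 1-D MTCNN-LSTM is trained to predict y from X; its backbone is
# then fine-tuned on a small amount of labeled MI data.
```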


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Imagination , Neural Networks, Computer , Electroencephalography/methods , Humans , Imagination/physiology , Supervised Machine Learning , Pattern Recognition, Automated/methods
15.
J Neural Eng ; 21(3)2024 May 17.
Article in English | MEDLINE | ID: mdl-38722315

ABSTRACT

Objective. Electroencephalography (EEG) has been widely used in motor imagery (MI) research by virtue of its high temporal resolution and low cost, but its low spatial resolution remains a major criticism. EEG source localization (ESL) algorithms effectively improve the spatial resolution of the signal by inverting the scalp EEG to extrapolate cortical source signals, thus enhancing classification accuracy. Approach. To address the poor spatial resolution of EEG signals, this paper proposes a sub-band source chaotic entropy feature extraction method based on sub-band ESL. First, the preprocessed EEG signals were filtered into eight sub-bands. Each sub-band signal was source-localized separately to reveal the activation patterns of specific frequency bands and the activities of specific brain regions in the MI task. Then, approximate entropy, fuzzy entropy, and permutation entropy were extracted from the source signals as features to quantify the complexity and randomness of the signal. Finally, the classification of different MI tasks was achieved using a support vector machine. Main results. The proposed method was validated on two public MI datasets (brain-computer interface (BCI) Competition III IVa and BCI Competition IV 2a), and the results showed that its classification accuracies were higher than those of existing methods. Significance. The spatial resolution of the signal was improved by sub-band EEG source localization, providing a new approach for EEG MI research.
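As an illustration of one of the feature types named above, a minimal sketch of permutation entropy followed by an SVM is given below; the order/delay parameters, the random stand-in features, and the omission of source localization and the other entropy measures are simplifying assumptions.

```python
import numpy as np
from itertools import permutations
from sklearn.svm import SVC

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D source signal."""
    patterns = list(permutations(range(order)))
    counts = np.zeros(len(patterns))
    for i in range(len(x) - delay * (order - 1)):
        window = x[i:i + delay * order:delay]
        idx = patterns.index(tuple(int(v) for v in np.argsort(window)))
        counts[idx] += 1
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum() / np.log2(len(patterns))

# One feature per (sub-band, source region); random stand-in data for illustration.
rng = np.random.default_rng(1)
X = np.array([[permutation_entropy(rng.standard_normal(500)) for _ in range(8)]
              for _ in range(40)])        # 40 trials, 8 features
y = rng.integers(0, 2, size=40)
clf = SVC(kernel="rbf").fit(X, y)
```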


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Entropy , Imagination , Electroencephalography/methods , Humans , Imagination/physiology , Nonlinear Dynamics , Algorithms , Support Vector Machine , Movement/physiology , Reproducibility of Results
16.
Commun Biol ; 7(1): 595, 2024 May 18.
Article in English | MEDLINE | ID: mdl-38762683

ABSTRACT

Dynamic mode (DM) decomposition decomposes spatiotemporal signals into basic oscillatory components (DMs). DMs can improve the accuracy of neural decoding when used with the nonlinear Grassmann kernel, compared to conventional power features. However, such kernel-based machine learning algorithms have three limitations: large computational time preventing real-time application, incompatibility with non-kernel algorithms, and low interpretability. Here, we propose a mapping function corresponding to the Grassmann kernel that explicitly transforms DMs into spatial DM (sDM) features, which can be used in any machine learning algorithm. Using electrocorticographic signals recorded during various movement and visual perception tasks, the sDM features were shown to improve the decoding accuracy and computational time compared to conventional methods. Furthermore, the components of the sDM features informative for decoding showed similar characteristics to the high-γ power of the signals, but with higher trial-to-trial reproducibility. The proposed sDM features enable fast, accurate, and interpretable neural decoding.
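For readers unfamiliar with DM decomposition, a minimal numpy sketch of exact dynamic mode decomposition on one multichannel window is shown below; the channel count, window length, and truncation rank are assumptions, and the Grassmann-kernel mapping to sDM features proposed in the paper is not reproduced here.

```python
import numpy as np

def dmd_modes(X, r=10):
    """Exact DMD of a multichannel window X (channels x time samples).

    Returns the leading r dynamic modes and their eigenvalues.
    """
    X1, X2 = X[:, :-1], X[:, 1:]                  # snapshots and their successors
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh[:r].conj().T    # rank-r truncation
    A_tilde = U.conj().T @ X2 @ V / s             # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ V / s @ W                        # exact DMD modes (channels x r)
    return modes, eigvals

rng = np.random.default_rng(0)
modes, eigvals = dmd_modes(rng.standard_normal((64, 300)), r=10)
print(modes.shape, eigvals.shape)                 # (64, 10) (10,)
```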


Subject(s)
Electrocorticography , Electrocorticography/methods , Humans , Algorithms , Signal Processing, Computer-Assisted , Male , Machine Learning , Visual Perception/physiology , Female , Reproducibility of Results , Adult , Brain-Computer Interfaces
17.
J Neural Eng ; 21(3)2024 May 20.
Article in English | MEDLINE | ID: mdl-38718788

ABSTRACT

Objective. The objective of this study is to investigate the application of various channel attention mechanisms within the domain of brain-computer interfaces (BCIs) for motor imagery decoding. Channel attention mechanisms can be seen as a powerful evolution of the spatial filters traditionally used for motor imagery decoding. This study systematically compares such mechanisms by integrating them into a lightweight architecture framework to evaluate their impact. Approach. We carefully construct a straightforward and lightweight baseline architecture designed to seamlessly integrate different channel attention mechanisms. This approach contrasts with previous works, which typically investigate only one attention mechanism and usually build very complex, sometimes nested architectures. Our framework allows us to evaluate and compare the impact of different attention mechanisms under the same circumstances. The easy integration of different channel attention mechanisms, together with the low computational complexity, enables us to conduct a wide range of experiments on four datasets to thoroughly assess the effectiveness of the baseline model and the attention mechanisms. Results. Our experiments demonstrate the strength and generalizability of our architecture framework and show how channel attention mechanisms can improve performance while maintaining the small memory footprint and low computational complexity of our baseline architecture. Significance. Our architecture emphasizes simplicity, offering easy integration of channel attention mechanisms, while maintaining a high degree of generalizability across datasets, making it a versatile and efficient solution for electroencephalogram motor imagery decoding within BCIs.
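As one representative example of the mechanisms compared in such studies, a squeeze-and-excitation-style channel attention module is sketched below in PyTorch; the tensor layout and reduction ratio are assumptions, and the paper evaluates several attention variants rather than this single one.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style attention over (EEG or feature) channels."""
    def __init__(self, n_channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(n_channels, n_channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(n_channels // reduction, n_channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                  # x: (batch, channels, time)
        w = self.gate(x.mean(dim=-1))      # squeeze over time -> (batch, channels)
        return x * w.unsqueeze(-1)         # re-weight each channel

att = ChannelAttention(22)
out = att(torch.randn(4, 22, 500))         # output has the same shape as the input
```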


Subject(s)
Attention , Brain-Computer Interfaces , Electroencephalography , Imagination , Electroencephalography/methods , Humans , Imagination/physiology , Attention/physiology , Movement/physiology
18.
Comput Biol Med ; 175: 108504, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38701593

ABSTRACT

Convolutional neural networks (CNNs) have been widely applied in motor imagery (MI)-based brain-computer interfaces (BCIs) to decode electroencephalography (EEG) signals. However, due to the limited receptive field of convolutional kernels, CNNs only extract features from local regions without considering long-term dependencies for EEG decoding. Apart from long-term dependencies, multi-modal temporal information is equally important for EEG decoding because it can offer a more comprehensive understanding of the temporal dynamics of neural processes. In this paper, we propose a novel deep learning network that combines a CNN with a self-attention mechanism to encapsulate multi-modal temporal information and global dependencies. The network first extracts multi-modal temporal information from two distinct perspectives: average and variance. A shared self-attention module is then designed to capture global dependencies along these two feature dimensions. We further design a convolutional encoder to explore the relationship between average-pooled and variance-pooled features and fuse them into more discriminative features. Moreover, a data augmentation method called signal segmentation and recombination is proposed to improve the generalization capability of the proposed network. The experimental results on the BCI Competition IV-2a (BCIC-IV-2a) and BCI Competition IV-2b (BCIC-IV-2b) datasets show that our proposed method outperforms the state-of-the-art methods and achieves a 4-class average accuracy of 85.03% on the BCIC-IV-2a dataset. The proposed method demonstrates the effectiveness of multi-modal temporal information fusion in attention-based deep learning networks and provides a new perspective for MI-EEG decoding. The code is available at https://github.com/Ma-Xinzhi/EEG-TransNet.
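A hedged sketch of the signal segmentation-and-recombination augmentation described above is given below; the segment count, the number of generated trials, and the within-class sampling rule are assumed details.

```python
import numpy as np

def segment_recombine(trials, labels, n_segments=4, n_new=64,
                      rng=np.random.default_rng(0)):
    """Create artificial trials by concatenating segments drawn from
    different trials of the same class.

    trials: (n_trials, n_channels, n_samples); labels: (n_trials,).
    """
    seg_len = trials.shape[-1] // n_segments
    new_x, new_y = [], []
    for _ in range(n_new):
        cls = rng.choice(np.unique(labels))                  # pick a class
        pool = np.flatnonzero(labels == cls)                 # trials of that class
        parts = [trials[rng.choice(pool), :, i * seg_len:(i + 1) * seg_len]
                 for i in range(n_segments)]                 # one donor per segment
        new_x.append(np.concatenate(parts, axis=-1))
        new_y.append(cls)
    return np.stack(new_x), np.array(new_y)
```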


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Neural Networks, Computer , Humans , Electroencephalography/methods , Signal Processing, Computer-Assisted , Imagination/physiology , Deep Learning
19.
Sci Rep ; 14(1): 11491, 2024 05 20.
Article in English | MEDLINE | ID: mdl-38769115

ABSTRACT

Several attempts at speech brain-computer interfacing (BCI) have been made to decode phonemes, sub-words, words, or sentences using invasive measurements, such as the electrocorticogram (ECoG), during auditory speech perception, overt speech, or imagined (covert) speech. Decoding sentences from covert speech is a challenging task. Sixteen epilepsy patients with intracranially implanted electrodes participated in this study, and ECoGs were recorded during overt and covert speech of eight Japanese sentences, each consisting of three tokens. In particular, a Transformer neural network model was applied to decode text sentences from covert speech, trained using ECoGs obtained during overt speech. We first examined the proposed Transformer model using the same task for training and testing, and then evaluated the model's performance when trained on the overt task for decoding covert speech. The Transformer model trained on covert speech achieved an average token error rate (TER) of 46.6% for decoding covert speech, whereas the model trained on overt speech achieved a TER of 46.3% (p > 0.05; d = 0.07). Therefore, the challenge of collecting training data for covert speech can be addressed using overt speech. Covert speech decoding performance may be improved further by employing additional overt speech data.
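The token error rate reported above is conventionally computed as the token-level Levenshtein distance divided by the reference length; a small sketch follows, with invented example tokens (the study's actual Japanese sentences are not reproduced).

```python
def token_error_rate(reference, hypothesis):
    """Levenshtein edit distance over tokens, divided by reference length."""
    r, h = reference, hypothesis
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(r)

# Invented three-token example: one of three tokens is wrong -> TER = 1/3.
print(token_error_rate(["token_a", "token_b", "token_c"],
                       ["token_a", "token_x", "token_c"]))
```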


Subject(s)
Brain-Computer Interfaces , Electrocorticography , Speech , Humans , Female , Male , Adult , Speech/physiology , Speech Perception/physiology , Young Adult , Feasibility Studies , Epilepsy/physiopathology , Neural Networks, Computer , Middle Aged , Adolescent
20.
Sci Data ; 11(1): 546, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38806531

ABSTRACT

In highly autonomous vehicles, humans do not need to operate the vehicle continuously. The brain-computer interface system in autonomous vehicles will therefore depend largely on the brain states of passengers rather than those of human drivers. Translating the mental activities of human beings, who essentially play the role of advanced sensors, into safe driving is a meaningful and vital goal, and quantifying the driving risk cognition of passengers is a basic step toward this end. This study reports the creation of an fNIRS dataset focusing on prefrontal cortex activity in fourteen types of highly automated driving scenarios. The dataset considers age, sex, and driving experience factors and contains the data collected from an 8-channel fNIRS device together with the data of the driving scenarios. It provides data support for distinguishing driving risk in highly automated driving scenarios via brain-computer interface systems, and it also offers the possibility of preventing potential hazards in scenarios in which risk remains high for an extended period before a hazard occurs.


Subject(s)
Automobile Driving , Cognition , Adult , Female , Humans , Male , Automation , Brain-Computer Interfaces , Prefrontal Cortex/physiology , Spectroscopy, Near-Infrared