Results 1 - 7 of 7
1.
Front Neurosci; 17: 1086472, 2023.
Article in English | MEDLINE | ID: mdl-37332859

ABSTRACT

Advances in neuroscience and computer technology over the past decades have made the brain-computer interface (BCI) one of the most promising areas of neurorehabilitation and neurophysiology research. Limb motion decoding has gradually become a hot topic in the field of BCI. Decoding neural activity related to limb movement trajectories is considered of great help to the development of assistive and rehabilitation strategies for motor-impaired users. Although a variety of decoding methods have been proposed for limb trajectory reconstruction, no review yet covers the performance evaluation of these methods. To fill this gap, in this paper we evaluate EEG-based limb trajectory decoding methods, weighing their advantages and disadvantages from a variety of perspectives. Specifically, we first introduce the differences between motor execution and motor imagery in limb trajectory reconstruction in different spaces (2D and 3D). Then, we discuss the limb motion trajectory reconstruction methods, including experimental paradigms, EEG pre-processing, feature extraction and selection, decoding methods, and result evaluation. Finally, we expound on open problems and future outlooks.
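Among the decoding methods such reviews survey, the classic baseline reconstructs each trajectory coordinate as a linear function of time-lagged EEG amplitudes. The sketch below is our own illustration of that baseline on synthetic data; the function names and the lag choices are assumptions, not anything from the article.

```python
import numpy as np

def build_lagged_features(eeg, lags):
    """Stack time-lagged copies of every EEG channel (samples x channels)."""
    shifted = [np.roll(eeg, lag, axis=0) for lag in lags]
    X = np.hstack(shifted)[max(lags):]               # drop wrapped-around rows
    return np.hstack([np.ones((X.shape[0], 1)), X])  # prepend a bias column

def fit_linear_decoder(eeg, trajectory, lags=(0, 1, 2)):
    """Least-squares fit: trajectory[t] ~ w . lagged EEG features."""
    X = build_lagged_features(eeg, list(lags))
    y = trajectory[max(lags):]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Sanity check on synthetic data where the "trajectory" really is a
# linear mixture of the simulated EEG channels.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((500, 4))
traj = eeg @ rng.standard_normal((4, 2))   # 2-D trajectory
w = fit_linear_decoder(eeg, traj, lags=(0,))
pred = build_lagged_features(eeg, [0]) @ w
```

Real EEG is far noisier than this toy, which is precisely why the review's comparison of more sophisticated decoders matters.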

2.
Article in English | MEDLINE | ID: mdl-37167054

ABSTRACT

A brain-computer interface (BCI) is a system that uses brain neural activity to communicate directly with the outside world. Recently, decoding of human upper limb movement from electroencephalogram (EEG) signals has become an important research branch of BCI. Even though existing research models are capable of decoding upper limb trajectories, their performance needs to improve before they are practical for real-world applications. This study attempts to reconstruct continuous, nonlinear, multi-directional upper limb trajectories based on Chinese sign language. Here, to reconstruct the upper limb motion trajectory effectively, we propose a novel Motion Trajectory Reconstruction Transformer (MTRT) neural network that utilizes the geometric information of human joint points together with EEG neural activity signals to decode the upper limb trajectory. Specifically, we use the geometric properties of the upper limb bones as reconstruction constraints to obtain more accurate trajectory information. Furthermore, we propose an MTRT neural network based on this constraint, which uses shoulder, elbow, and wrist joint point information and EEG signals of brain neural activity during upper limb movement to train its parameters. To validate the model, we collected synchronized EEG signals and upper limb joint point information from 20 subjects. The experimental results show that the model can accurately reconstruct the motion trajectories of the shoulder, elbow, and wrist, achieving superior performance to the compared methods. This research is meaningful for decoding limb motion parameters in BCI, and it is inspiring for the motion decoding of other limbs and joints.


Subjects
Brain-Computer Interfaces, Humans, Upper Extremity, Motion (Physics), Electroencephalography/methods, Movement
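The bone-geometry constraint described in the abstract can be pictured as a penalty on predicted joint positions whose segment lengths drift away from the subject's measured bones. This is a hedged sketch of that idea only; the paper's actual loss function and array layout are not specified in the abstract.

```python
import numpy as np

def bone_length_loss(joints, ref_lengths):
    """Penalize predicted arm poses whose segment lengths deviate from the
    subject's measured bone lengths.

    joints: (T, 3, 3) array of [shoulder, elbow, wrist] 3-D positions per frame.
    ref_lengths: (upper_arm_len, forearm_len) measured for the subject.
    """
    upper = np.linalg.norm(joints[:, 1] - joints[:, 0], axis=1)  # shoulder->elbow
    fore = np.linalg.norm(joints[:, 2] - joints[:, 1], axis=1)   # elbow->wrist
    return float(np.mean((upper - ref_lengths[0]) ** 2
                         + (fore - ref_lengths[1]) ** 2))
```

Adding such a term to a trajectory-reconstruction objective rules out anatomically impossible poses that would otherwise fit the EEG equally well.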
3.
Article in English | MEDLINE | ID: mdl-37027669

ABSTRACT

Electroencephalography (EEG) signal classification is essential for brain-computer interfaces (BCIs). Recently, energy-efficient spiking neural networks (SNNs) have shown great potential in EEG analysis due to their ability to capture the complex dynamic properties of biological neurons while processing stimulus information through precisely timed spike trains. However, most existing methods do not effectively mine the specific spatial topology of EEG channels or the temporal dependencies of the encoded EEG spikes. Moreover, most are designed for specific BCI tasks and lack generality. Hence, this study presents a novel SNN model with customized spike-based adaptive graph convolution and long short-term memory (LSTM), termed SGLNet, for EEG-based BCIs. Specifically, we first adopt a learnable spike encoder to convert the raw EEG signals into spike trains. Then, we tailor the concept of multi-head adaptive graph convolution to SNNs so that the model can exploit the intrinsic spatial topology information among distinct EEG channels. Finally, we design spike-based LSTM units to further capture the temporal dependencies of the spikes. We evaluate the proposed model on two publicly available datasets from two representative fields of BCI: emotion recognition and motor imagery decoding. The empirical evaluations demonstrate that SGLNet consistently outperforms existing state-of-the-art EEG classification algorithms. This work provides a new perspective for exploring high-performance SNNs for future BCIs with rich spatiotemporal dynamics.
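To give a flavor of the spike-encoding step: the paper's encoder is learnable, but a fixed send-on-delta scheme, sketched below purely as an illustrative stand-in, shows how a continuous EEG trace becomes a train of signed spikes.

```python
import numpy as np

def delta_spike_encode(signal, threshold):
    """Emit a +1/-1 spike whenever the signal has moved by more than
    `threshold` since the last emitted spike (send-on-delta encoding)."""
    spikes = np.zeros(len(signal), dtype=int)
    ref = signal[0]                      # last value that triggered a spike
    for t in range(1, len(signal)):
        if signal[t] - ref >= threshold:
            spikes[t] = 1
            ref = signal[t]
        elif ref - signal[t] >= threshold:
            spikes[t] = -1
            ref = signal[t]
    return spikes

# A rising ramp crosses the 0.25 threshold three times over ten samples.
ramp = np.arange(10) * 0.1
spikes = delta_spike_encode(ramp, threshold=0.25)
```

Downstream, the graph-convolution and LSTM stages would then operate on these sparse spike trains rather than on dense amplitudes.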

4.
Article in English | MEDLINE | ID: mdl-37022069

ABSTRACT

Sleep staging is a vital process for evaluating sleep quality and diagnosing sleep-related diseases. Most existing automatic sleep staging methods focus on time-domain information and often ignore the transition relationships between sleep stages. To address these problems, we propose a Temporal-Spectral fused and Attention-based deep neural Network (TSA-Net) for automatic sleep staging using a single-channel electroencephalogram (EEG) signal. The TSA-Net is composed of a two-stream feature extractor, a feature context learning module, and a conditional random field (CRF). Specifically, the two-stream feature extractor automatically extracts and fuses EEG features from the time and frequency domains, considering that both temporal and spectral features can provide abundant distinguishing information for sleep staging. Subsequently, the feature context learning module learns the dependencies between features using multi-head self-attention and outputs a preliminary sleep stage. Finally, the CRF module applies transition rules to further improve classification performance. We evaluate our model on two public datasets, Sleep-EDF-20 and Sleep-EDF-78, achieving accuracies of 86.64% and 82.21%, respectively, on the Fpz-Cz channel. The experimental results illustrate that TSA-Net can optimize sleep staging and achieves better performance than state-of-the-art methods.
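In the spirit of the two-stream extractor: TSA-Net learns its temporal and spectral features end-to-end, but hand-crafted equivalents make the idea concrete. The statistics, band names, and band edges below are conventional EEG choices of ours, not the network's learned features.

```python
import numpy as np

def band_powers(x, fs, bands):
    """Relative spectral power per band from the FFT periodogram."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    total = psd.sum()
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in bands.items()}

def two_stream_features(epoch, fs=100):
    """Fuse time-domain statistics with spectral band powers for one epoch."""
    time_feats = {"mean": float(epoch.mean()),
                  "std": float(epoch.std()),
                  "ptp": float(np.ptp(epoch))}
    bands = {"delta": (0.5, 4), "theta": (4, 8),
             "alpha": (8, 13), "beta": (13, 30)}
    return {**time_feats, **band_powers(epoch, fs, bands)}

# A pure 10 Hz tone should put essentially all power in the alpha band.
t = np.arange(1000) / 100.0
feats = two_stream_features(np.sin(2 * np.pi * 10 * t), fs=100)
```

A sequence of such per-epoch vectors is what the attention and CRF stages would then contextualize across the night.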

5.
Article in English | MEDLINE | ID: mdl-34986098

ABSTRACT

Cognitive workload recognition is pivotal to maintaining an operator's health and preventing accidents in human-robot interaction. So far, workload research has mostly been restricted to a single task, and cross-task cognitive workload recognition remains a challenge. Furthermore, when extending to a new workload condition, the discrepancy of electroencephalogram (EEG) signals across cognitive tasks limits the generalization of existing models. To tackle this problem, we construct EEG-based cross-task cognitive workload recognition models using domain adaptation methods in a leave-one-task-out cross-validation setting, where we view each task of each subject as a domain. Specifically, we first design a fine-grained workload paradigm including working memory and mathematical addition tasks. Then, we explore four domain adaptation methods to bridge the discrepancy between the two tasks. Finally, using a support vector machine classifier, we conduct experiments to classify low and high workload levels on a private EEG dataset. Experimental results demonstrate that the proposed task-transfer framework outperforms the non-transfer classifier by 3% to 8% in mean accuracy, with transfer joint matching (TJM) consistently achieving the best performance.


Assuntos
Eletroencefalografia , Máquina de Vetores de Suporte , Cognição , Eletroencefalografia/métodos , Humanos , Reconhecimento Psicológico , Carga de Trabalho
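Of the four domain adaptation methods, the abstract names only TJM. A much simpler member of the same family, CORrelation ALignment (CORAL), conveys the core idea of matching feature statistics across tasks; the sketch below is our illustration, not one of the paper's methods.

```python
import numpy as np

def coral_align(source, target, eps=1e-6):
    """Whiten source features, then re-color them with the target covariance
    so second-order statistics match across the two domains (CORAL)."""
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])

    def mat_pow(m, p):
        # symmetric matrix power via eigendecomposition
        w, v = np.linalg.eigh(m)
        return (v * w ** p) @ v.T

    centered = source - source.mean(axis=0)
    return centered @ mat_pow(cs, -0.5) @ mat_pow(ct, 0.5) + target.mean(axis=0)

# After alignment, the source covariance should match the target covariance.
rng = np.random.default_rng(1)
src = rng.standard_normal((300, 3)) @ np.array([[2.0, 0, 0],
                                                [0, 1.0, 0],
                                                [0, 0.5, 1.0]])
tgt = rng.standard_normal((300, 3))
aligned = coral_align(src, tgt)
```

A classifier trained on the aligned source features can then be applied to the target task; TJM goes further by jointly matching distributions and re-weighting instances.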
6.
Article in English | MEDLINE | ID: mdl-34932480

ABSTRACT

Limb motion decoding is an important part of brain-computer interface (BCI) research. Among limb motions, sign language not only contains rich semantic information and abundant maneuverable actions but also provides a variety of executable commands. However, many researchers focus on decoding gross motor skills, such as ordinary motor imagery or simple upper limb movements. Here we explored the neural features and decoding of Chinese sign language from electroencephalograph (EEG) signals during motor imagery and motor execution. Twenty subjects were instructed to perform movement execution and movement imagery based on Chinese sign language. Seven classifiers were employed to classify the selected features of the sign language EEG. L1 regularization was used to learn and select the most informative features from the mean, power spectral density, sample entropy, and brain network connectivity. The best average classification accuracy was 89.90% for executed sign language (83.40% for imagined sign language). These results show the feasibility of decoding between different sign language movements. Source localization reveals that the neural circuits involved in sign language are related to visual contact areas and the pre-movement area. Experimental evaluation shows that the proposed sign language decoding strategy obtains outstanding classification results, providing a reference for subsequent research on limb decoding based on sign language.


Assuntos
Interfaces Cérebro-Computador , China , Eletroencefalografia , Humanos , Imaginação , Aprendizado de Máquina , Movimento , Língua de Sinais
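Of the features listed in the abstract, sample entropy is the least standard. A minimal, slightly simplified reference implementation follows; the default parameters m = 2 and r = 0.2 are common conventions of ours, not values stated in the paper.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D signal: the negative log of the conditional
    probability that subsequences matching for m points (within tolerance
    r * std) also match for m + 1 points. Simplified: template counts for
    m and m + 1 use slightly different numbers of windows."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return (d <= tol).sum() - len(templates)   # exclude self-matches

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# Regular signals are more predictable, hence lower entropy than noise.
t = np.arange(300)
regular = np.sin(2 * np.pi * t / 25)
noisy = np.random.default_rng(0).standard_normal(300)
```

Low sample entropy flags stereotyped, repetitive dynamics, which is why it complements spectral and connectivity features in the L1-selected feature pool.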
7.
Front Neurosci; 15: 693468, 2021.
Article in English | MEDLINE | ID: mdl-34456670

ABSTRACT

Emotional singing can affect vocal performance and audience engagement. Chinese universities use traditional training techniques for teaching theoretical and applied knowledge, and self-imagination is the predominant training method for emotional singing. Recently, virtual reality (VR) technologies have been applied in several fields for training purposes. In this empirical comparative study, a VR training task was implemented to elicit emotions from singers and to help them improve their emotional singing performance. The VR training method was compared against the traditional self-imagination method in a two-stage experiment covering both emotion elicitation and emotional singing performance. In the first stage, electroencephalographic (EEG) data were collected from the subjects. In the second stage, self-rating reports and third-party teachers' evaluations were collected. The EEG data were analyzed using the max-relevance and min-redundancy (mRMR) algorithm for feature selection and a support vector machine (SVM) for emotion recognition. Based on the EEG emotion classification results and the subjective scales, VR elicits positive, neutral, and negative emotional states from singers better than self-imagination does. Furthermore, through improved emotional activation, VR also improves singing performance. VR hence appears to be an effective approach that may improve and complement available vocal music teaching methods.
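The mRMR selection step works by greedily adding the feature most relevant to the label and least redundant with the features already chosen. The sketch below substitutes absolute Pearson correlation for the mutual-information terms that the real mRMR criterion uses, purely for illustration; variable names and the toy data are ours.

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedy max-relevance min-redundancy feature selection.

    Uses |Pearson correlation| as a stand-in for mutual information:
    score(j) = relevance(j, y) - mean redundancy(j, already selected).
    """
    n_feat = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(n_feat)])
    selected = [int(np.argmax(relevance))]          # most relevant feature first
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# Toy data built from orthogonal sinusoids: feature 1 nearly duplicates
# feature 0, while feature 2 carries partly independent information.
n = 1000
t = np.arange(n) / n
s1, s2, s3 = (np.sin(2 * np.pi * k * t) for k in (1, 2, 3))
y = s1
X = np.column_stack([s1 + 0.2 * s2,
                     s1 + 0.2 * s2 + 0.01 * s3,
                     0.9 * s1 + 0.6 * s3])
```

Here mRMR skips the near-duplicate feature 1 in favor of the less redundant feature 2, which is exactly the behavior that made it a popular pre-SVM selection step.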
