ABSTRACT
Stroke is a neurological condition that usually results in the loss of voluntary control of body movements, making it difficult for individuals to perform activities of daily living (ADLs). Brain-computer interfaces (BCIs) integrated into robotic systems, such as motorized mini exercise bikes (MMEBs), have been shown to be suitable for restoring gait-related functions. However, kinematic estimation of continuous motion in BCI systems based on electroencephalography (EEG) remains a challenge for the scientific community. This study presents a comparative analysis of two artificial neural network (ANN)-based decoders for estimating three lower-limb kinematic parameters during pedaling tasks: the x- and y-axis position of the ankle and the knee joint angle. A long short-term memory (LSTM) network was used as the recurrent neural network (RNN), reaching Pearson correlation coefficient (PCC) scores close to 0.58 when reconstructing the kinematic parameters from delta-band EEG features over a 250 ms time window. These estimates were evaluated through kinematic variance analysis, in which the proposed algorithm showed promising results for identifying pedaling and rest periods, which could improve the usability of classification tasks. Additionally, negative linear correlations were found between pedaling speed and decoder performance, indicating that kinematic parameters at slower speeds may be easier to estimate. These results allow us to conclude that deep learning (DL)-based methods are feasible for estimating lower-limb kinematic parameters during pedaling tasks from EEG signals. This study opens new possibilities for implementing more robust controllers for MMEBs and BCIs based on continuous decoding, which may help maximize the degrees of freedom and personalize rehabilitation.
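As a rough illustration of the decoding approach described above, the following sketch (in Python, using PyTorch and SciPy on synthetic data) trains an LSTM that maps delta-band EEG feature windows to three kinematic outputs and scores it with the Pearson correlation coefficient. The layer sizes, the number of samples per 250 ms window, and the data themselves are assumptions made for illustration, not the authors' configuration.

```python
# Minimal sketch of an LSTM-based kinematic decoder, assuming delta-band EEG
# features arranged as (windows, time_steps, channels) with 250 ms windows.
# Hidden size, window length in samples, and the synthetic data are assumptions.
import numpy as np
import torch
import torch.nn as nn
from scipy.stats import pearsonr

class KinematicLSTM(nn.Module):
    def __init__(self, n_features, hidden_size=64, n_outputs=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_outputs)  # ankle x, ankle y, knee angle

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, time, hidden)
        return self.head(out[:, -1])   # predict kinematics at the end of each window

# Synthetic stand-in data: 200 windows, 25 samples per 250 ms window, 8 channels.
X = torch.randn(200, 25, 8)
y = torch.randn(200, 3)

model = KinematicLSTM(n_features=8)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# Evaluate with the Pearson correlation coefficient per kinematic parameter.
with torch.no_grad():
    pred = model(X).numpy()
pcc = [pearsonr(pred[:, k], y[:, k].numpy())[0] for k in range(3)]
print("PCC per parameter:", pcc)
```

In practice the decoder would be trained on one portion of the recording and the PCC reported on held-out pedaling segments rather than on the training windows, as done here only to keep the sketch short.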
ABSTRACT
This work addresses the challenge of classifying multiclass visual EEG signals into 40 classes for brain-computer interface applications using deep learning architectures. The visual multiclass classification approach offers a significant advantage for BCI applications, since each class label can supervise a separate BCI task, allowing more than one BCI interaction to be controlled. However, because of the nonlinearity and nonstationarity of EEG signals, multiclass classification based on EEG features remains a significant challenge for BCI systems. In the present work, mutual information-based discriminant channel selection and minimum-norm estimate algorithms were implemented to select discriminant channels and enhance the EEG data. Deep EEGNet and convolutional recurrent neural network architectures were then implemented separately to classify the EEG data from image visualization into 40 labels. Using the k-fold cross-validation approach, average classification accuracies of 94.8% and 89.8% were obtained with the two network architectures, respectively. The satisfactory results obtained with this method offer a new implementation opportunity for multitask embedded BCI applications that use a reduced number of both channels (<50%) and network parameters (<110 K).
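The channel-selection and evaluation steps described above can be sketched as follows, assuming epoched EEG of shape (trials, channels, samples) and 40 balanced class labels. The ranking rule (mean mutual information per channel), the 50% channel budget, and the logistic-regression stand-in for the EEGNet/CRNN classifiers are illustrative assumptions, not the authors' implementation.

```python
# Sketch: mutual information-based channel selection plus k-fold cross-validation.
# The data are synthetic and the classifier is a simple stand-in for EEGNet/CRNN.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples, n_classes = 400, 64, 128, 40
X = rng.standard_normal((n_trials, n_channels, n_samples))
y = np.repeat(np.arange(n_classes), n_trials // n_classes)  # balanced labels 0..39

# Score each channel by the average mutual information between its samples
# and the class labels, then keep the top half (<50% of channels).
channel_scores = np.array([
    mutual_info_classif(X[:, ch, :], y, random_state=0).mean()
    for ch in range(n_channels)
])
selected = np.argsort(channel_scores)[::-1][: n_channels // 2]
X_sel = X[:, selected, :].reshape(n_trials, -1)  # flatten for the stand-in classifier

# k-fold cross-validation of the classifier on the selected channels only.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accs = []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True,
                                           random_state=0).split(X_sel, y):
    clf.fit(X_sel[train_idx], y[train_idx])
    accs.append(clf.score(X_sel[test_idx], y[test_idx]))
print(f"Mean CV accuracy: {np.mean(accs):.3f}")
```

Replacing the logistic-regression pipeline with an EEGNet or convolutional recurrent network trained on the same selected channels would reproduce the kind of comparison reported in the abstract.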
Subject(s)
Algorithms; Brain-Computer Interfaces; Deep Learning; Electroencephalography; Neural Networks, Computer; Electroencephalography/methods; Humans; Signal Processing, Computer-Assisted
ABSTRACT
In recent years, various studies have demonstrated the potential of electroencephalographic (EEG) signals for the development of brain-computer interfaces (BCIs) for the rehabilitation of human limbs. This article is a systematic review of the state of the art and the opportunities in the development of BCIs for the rehabilitation of the upper and lower limbs of the human body. The systematic review was conducted across databases, considering studies that used EEG signals, proposed interfaces to rehabilitate upper or lower limbs through motor intention or movement assistance, and employed virtual environments for feedback. Studies that did not specify which processing system was used were excluded, as were analyses of design processing and other reviews. Of the included studies, 11 corresponded to applications to rehabilitate upper limbs, six to lower limbs, and one to both. Likewise, six combined visual/auditory feedback, two haptic/visual, and two visual/auditory/haptic. In addition, four used fully immersive virtual reality (VR), three semi-immersive VR, and 11 non-immersive VR. In summary, the studies demonstrated that using EEG signals together with user feedback offers benefits in terms of cost, effectiveness, training quality, and user motivation, and that there is a need to continue developing interfaces that are accessible to users and that integrate feedback techniques.