Results 1 - 5 of 5
1.
J Neuroeng Rehabil; 19(1): 95, 2022 Sep 06.
Article in English | MEDLINE | ID: mdl-36068570

ABSTRACT

BACKGROUND: The brain-computer interface (BCI) race at the Cybathlon championship, for people with disabilities, challenges teams (BCI researchers, developers and pilots with spinal cord injury) to control an avatar on a virtual racetrack without movement. Here we describe the training regime and results of the Ulster University BCI Team pilot, who has tetraplegia and was trained to use an electroencephalography (EEG)-based BCI intermittently over 10 years to compete in three Cybathlon events. METHODS: A multi-class, multiple binary classifier framework was used to decode three kinesthetically imagined movements (motor imagery of left arm, right arm, and feet) and a relaxed state. Three game paradigms were used for training, i.e., NeuroSensi, Triad, and Cybathlon Race: BrainDriver. An evaluation of the pilot's performance is presented for two Cybathlon competition training periods: 20 sessions over 5 weeks prior to the 2019 competition and 25 sessions over 5 weeks in the run-up to the 2020 competition. RESULTS: Having participated in BCI training in 2009 and competed in Cybathlon 2016, the experienced pilot achieved high two-class accuracy on all class pairs when training began in 2019 (decoding accuracy > 90%, resulting in efficient NeuroSensi and Triad game control). The BrainDriver performance (i.e., Cybathlon race completion time) improved significantly during the training period leading up to the competition day, ranging from 274 to 156 s (mean ± std: 255 ± 24 s to 191 ± 14 s) over 17 days (10 sessions) in 2019, and from 230 to 168 s (214 ± 14 s to 181 ± 4 s) over 18 days (13 sessions) in 2020. However, on both competition occasions, performance deteriorated significantly towards the race date. CONCLUSIONS: The training regime and framework applied were highly effective in achieving competitive race completion times. The BCI framework did not cope with the significant deviation in EEG observed in the sessions occurring shortly before and during race day. Changes in cognitive state as a result of stress, arousal level, and fatigue, associated with the competition challenge and performance pressure, were likely contributing factors to the non-stationary effects that resulted in the BCI and pilot achieving suboptimal performance on race day. Trial registration: not registered.


Subject(s)
Brain-Computer Interfaces; Disabled Persons; Electroencephalography/methods; Humans; Imagery, Psychotherapy; Quadriplegia
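
The multi-class, multiple-binary-classifier framework summarized above is not given in implementation detail in the abstract, so the sketch below shows only one plausible reading of such a pairwise scheme: a binary linear discriminant analysis (LDA) classifier per class pair, trained on log-variance features as a crude stand-in for CSP-type bandpower features, and combined by majority vote. The data, shapes, feature choice, and voting rule are assumptions, not the Ulster team's pipeline.

```python
"""Illustrative sketch (not the authors' code) of a multi-class motor-imagery
decoder built from multiple binary classifiers: one LDA per class pair on
log-variance features, combined by majority vote over the pairwise decisions."""
from itertools import combinations

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Hypothetical data shapes: 120 epochs, 16 EEG channels, 2 s at 250 Hz.
n_epochs, n_channels, n_samples = 120, 16, 500
X = rng.standard_normal((n_epochs, n_channels, n_samples))
y = rng.integers(0, 4, size=n_epochs)   # 0: left arm, 1: right arm, 2: feet, 3: relax

def log_variance(epochs):
    """Log-variance per channel: a simple stand-in for CSP/bandpower features."""
    return np.log(epochs.var(axis=-1) + 1e-12)

features = log_variance(X)

# One binary LDA per class pair (the "multiple binary classifier" idea).
pairwise = {}
for a, b in combinations(range(4), 2):
    mask = np.isin(y, [a, b])
    pairwise[(a, b)] = LinearDiscriminantAnalysis().fit(features[mask], y[mask])

def predict(feats):
    """Majority vote across the six pairwise classifiers."""
    votes = np.zeros((feats.shape[0], 4), dtype=int)
    for (a, b), clf in pairwise.items():
        pred = clf.predict(feats)
        for cls in (a, b):
            votes[:, cls] += (pred == cls)
    return votes.argmax(axis=1)

print("training accuracy:", (predict(features) == y).mean())
```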
2.
Sensors (Basel); 20(16), 2020 Aug 17.
Article in English | MEDLINE | ID: mdl-32824559

ABSTRACT

Classification of electroencephalography (EEG) signals corresponding to imagined speech production is important for the development of a direct-speech brain-computer interface (DS-BCI). Deep learning (DL) has been utilized with great success across several domains. However, it remains an open question whether DL methods provide significant advances over traditional machine learning (ML) approaches for classification of imagined speech. Furthermore, hyperparameter (HP) optimization has been neglected in DL-EEG studies, leaving the significance of its effects uncertain. In this study, we aim to improve classification of imagined speech EEG by employing DL methods while also statistically evaluating the impact of HP optimization on classifier performance. We trained three distinct convolutional neural networks (CNN) on imagined speech EEG using a nested cross-validation approach to HP optimization. Each of the CNNs evaluated was designed specifically for EEG decoding. An imagined speech EEG dataset consisting of both words and vowels facilitated training on both sets independently. CNN results were compared with three benchmark ML methods: Support Vector Machine, Random Forest, and regularized Linear Discriminant Analysis. Intra- and inter-subject methods of HP optimization were tested, and the effects of HPs were statistically analyzed. Accuracies obtained by the CNNs were significantly greater than those of the benchmark methods when trained on both datasets (words: 24.97%, p < 1 × 10⁻⁷, chance: 16.67%; vowels: 30.00%, p < 1 × 10⁻⁷, chance: 20%). The effects of varying HP values, and the interactions between HPs and the CNNs, were both statistically significant. These results demonstrate how critical HP optimization is when training CNNs to decode imagined speech.


Subject(s)
Brain-Computer Interfaces; Deep Learning; Speech; Electroencephalography; Machine Learning; Neural Networks, Computer
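
As a rough illustration of the nested cross-validation approach to hyperparameter optimization described above, the sketch below nests a grid search (inner loop) inside an outer cross-validation estimate for one of the benchmark classifiers, an RBF-kernel SVM, on synthetic features. It is not the study's code or CNN architecture, and the feature dimensionality, hyperparameter grid, and fold counts are assumptions.

```python
"""Minimal sketch of nested cross-validation for hyperparameter optimisation,
applied to a benchmark SVM classifier on synthetic EEG-like features."""
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((180, 64))   # e.g. 180 trials x 64 features (assumed)
y = rng.integers(0, 6, size=180)     # six imagined words, chance ~16.7%

# Inner loop: hyperparameter search; outer loop: unbiased performance estimate.
param_grid = {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]}
inner_cv = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

model = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    param_grid=param_grid,
    cv=inner_cv,
)
scores = cross_val_score(model, X, y, cv=outer_cv)
print(f"nested-CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

The point of the nesting is that the outer folds never see the hyperparameters chosen on them, so the reported accuracy is not biased by the search itself.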
3.
Front Neurosci; 14: 578, 2020.
Article in English | MEDLINE | ID: mdl-32714127

ABSTRACT

Background: Stroke is a disease with a high associated disability burden. Robotic-assisted gait training offers an opportunity for the practice intensity levels associated with good functional walking outcomes in this population. Neural interfacing technology, electroencephalography (EEG) or electromyography (EMG), can offer new strategies for robotic gait re-education after a stroke by promoting more active engagement in movement intent and/or neurophysiological feedback. Objectives: This study identifies the current state of the art and the limitations in direct neural interfacing with robotic gait devices in stroke rehabilitation. Methods: A pre-registered systematic review was conducted using standardized search operators covering stroke, robotic gait training, and neural biosignals (EMG and/or EEG), without restriction by study type. Results: From a total of 8,899 papers identified, 13 articles were considered for the final selection. Only five of the 13 studies received a strong or moderate quality rating as a clinical study. Three studies recorded EEG activity during robotic gait, two of which used EEG for BCI purposes. While EEG demonstrated utility for decoding kinematic and EMG-related gait data, no identified EEG study closed the loop between robot and human. Twelve of the studies recorded EMG activity during or after robotic walking, primarily as an outcome measure. One study used multisource information fusion from EMG, joint angle, and force to modify robotic commands in real time, with higher error rates observed during active movement. One novel study used EMG data recorded during robotic gait to derive an optimal, individualized robot-driven step trajectory. Conclusions: Wide heterogeneity exists in the reporting and the purpose of neural biosignal use during robotic gait training after a stroke. Neural interfacing with robotic gait after a stroke demonstrates promise as a future field of study. However, as a nascent area, it would benefit from a more standardized protocol for biosignal collection and processing and for robotic deployment. Appropriate reporting for clinical studies of this nature is also required with respect to the study type and the participants' characteristics.

4.
Front Neurorobot; 13: 94, 2019.
Article in English | MEDLINE | ID: mdl-31798438

ABSTRACT

Background: Online control of an artificial or virtual arm using information decoded from EEG is normally realized by classifying different activation states or voluntary modulation of the sensorimotor activity linked to different overt actions of the subject. However, a more natural control scheme, such as decoding the trajectory of imagined 3D arm movements to move a prosthetic, robotic, or virtual arm, has been reported in only a limited number of studies, all using offline feed-forward control schemes. Objective: In this study, we report the first attempt to realize online control of two virtual arms generating movements toward three targets per arm in 3D space. The 3D trajectory of imagined arm movements was decoded from the power spectral density of mu, low beta, high beta, and low gamma EEG oscillations using multiple linear regression. The analysis was performed on a dataset recorded from three subjects in seven sessions, wherein each session comprised three experimental blocks: an offline calibration block and two online feedback blocks. Target classification accuracy using the predicted trajectories of the virtual arms was computed and compared with the results of a filter-bank common spatial patterns (FBCSP) based multi-class classification method involving mutual information (MI) selection and linear discriminant analysis (LDA) modules. Main Results: Target classification accuracy from the predicted trajectory of imagined 3D arm movements in the offline runs for two subjects (mean 45%, std 5%) was significantly higher (p < 0.05) than chance level (33.3%). Nevertheless, the accuracy during real-time control of the virtual arms using the trajectory decoded directly from EEG was at chance level (33.3%). However, the results of two subjects show that false-positive feedback may increase accuracy in closed-loop control. The FBCSP based multi-class classification method distinguished imagined movements of the left and right arm with reasonable accuracy for two of the three subjects (mean 70%, std 5%, compared to a 50% chance level). However, classification of imagined arm movements toward three targets was not successful with the FBCSP classifier, as the achieved accuracy (mean 33%, std 5%) was similar to the chance level (33.3%). Sub-optimal components of the multi-session experimental paradigm were identified, and an improved paradigm is proposed.
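
The trajectory-decoding step described above, multiple linear regression from mu, low-beta, high-beta, and low-gamma band power to 3D hand position, might look roughly like the sketch below. This is an illustrative reconstruction on synthetic data rather than the authors' pipeline; the sampling rate, exact band edges, smoothing window, and in-sample evaluation are all assumptions.

```python
"""Illustrative sketch of decoding a 3D arm trajectory from band-power
time-series with multiple linear regression (synthetic data, assumed bands)."""
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
fs, n_channels, n_times = 100, 16, 6000          # 60 s of EEG at 100 Hz (assumed)
eeg = rng.standard_normal((n_channels, n_times))
trajectory = np.cumsum(rng.standard_normal((n_times, 3)), axis=0)  # x, y, z position

# Assumed band edges for mu, low beta, high beta, and low gamma.
bands = {"mu": (8, 12), "low_beta": (12, 18), "high_beta": (18, 28), "low_gamma": (28, 40)}

def bandpower_series(sig, low, high, smooth=25):
    """Band-pass filter, square, and smooth to obtain a band-power time-series."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    power = filtfilt(b, a, sig, axis=-1) ** 2
    kernel = np.ones(smooth) / smooth
    return np.apply_along_axis(lambda x: np.convolve(x, kernel, mode="same"), -1, power)

# Stack band-power features: shape (n_times, n_channels * n_bands).
features = np.hstack([bandpower_series(eeg, *edges).T for edges in bands.values()])

# Multiple linear regression from band power to the 3D hand position.
model = LinearRegression().fit(features, trajectory)
predicted = model.predict(features)
corr = [np.corrcoef(predicted[:, d], trajectory[:, d])[0, 1] for d in range(3)]
print("per-axis correlation (in-sample):", np.round(corr, 3))
```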

5.
Front Neurosci; 12: 130, 2018.
Article in English | MEDLINE | ID: mdl-29615848

ABSTRACT

Objective: To date, motion trajectory prediction (MTP) of a limb from non-invasive electroencephalography (EEG) has relied primarily on band-pass filtered samples of EEG potentials, i.e., the potential time-series model. Most MTP studies involve decoding 2D and 3D arm movements, i.e., executed arm movements. Decoding of observed or imagined 3D movements has been demonstrated with limited success and reported in only a few studies. MTP studies normally use EEG potentials filtered in the low delta (~1 Hz) band for reconstructing the trajectory of an executed or an imagined/observed movement. In contrast to MTP, multiclass classification based sensorimotor rhythm brain-computer interfaces aim to classify movements using the power spectral density of the mu (8-12 Hz) and beta (12-28 Hz) bands. Approach: We investigated whether replacing the standard potential time-series input with a power spectral density based bandpower time-series improves trajectory decoding accuracy for kinesthetically imagined 3D hand movement tasks (i.e., the imagined 3D trajectory of the hand joint), and whether imagined 3D hand movement kinematics are also encoded in the mu and beta bands. Twelve naïve subjects were asked to generate, or imagine generating, pointing movements with their right dominant arm toward four targets distributed in 3D space, in synchrony with an auditory cue (beep). Main results: Using the bandpower time-series based model, the highest decoding accuracy for motor execution was observed in the mu and beta bands, whilst for imagined movements the low gamma (28-40 Hz) band was also observed to improve decoding accuracy for some subjects. Moreover, for both executed and imagined movements, the bandpower time-series model with mu, beta, and low gamma bands produced significantly higher reconstruction accuracy than the commonly used potential time-series model and delta oscillations. Significance: Contrary to many studies that investigated only executed hand movements and recommend using delta oscillations for decoding directional information of a single limb joint, our findings suggest that motor kinematics for imagined movements are reflected mostly in the power spectral density of the mu, beta, and low gamma bands, and that these bands may be most informative for decoding 3D trajectories of imagined limb movements.
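
To make the comparison concrete, the sketch below builds the two input representations contrasted in this abstract from the same synthetic EEG segment: the low-delta potential time-series and a bandpower time-series, taken here as the Hilbert envelope of mu, beta, and low-gamma oscillations. The sampling rate, filter orders, and envelope-based power estimate are assumptions and do not reproduce the authors' exact preprocessing.

```python
"""Minimal sketch contrasting the potential time-series model (low-delta
filtered potentials) with a band-power time-series model (Hilbert envelope
of mu, beta, and low-gamma oscillations), on synthetic EEG."""
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(0)
fs = 100                                 # Hz, assumed sampling rate
eeg = rng.standard_normal((16, 3000))    # 16 channels, 30 s of data

def bandpass(sig, low, high, order=4):
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, sig, axis=-1)

def lowpass(sig, high, order=4):
    b, a = butter(order, high / (fs / 2), btype="low")
    return filtfilt(b, a, sig, axis=-1)

# Potential time-series model: low-delta (~1 Hz) filtered EEG potentials.
potential_series = lowpass(eeg, 1.0)

# Band-power time-series model: instantaneous amplitude (Hilbert envelope)
# of mu (8-12 Hz), beta (12-28 Hz), and low-gamma (28-40 Hz) oscillations.
bands = [(8, 12), (12, 28), (28, 40)]
bandpower_series = np.concatenate(
    [np.abs(hilbert(bandpass(eeg, lo, hi), axis=-1)) for lo, hi in bands], axis=0
)

print("potential time-series:", potential_series.shape)   # (16, 3000)
print("band-power time-series:", bandpower_series.shape)  # (48, 3000)
```

Either representation can then be fed to the same linear decoder, which is what allows the two models to be compared on reconstruction accuracy.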
