1.
Annu Int Conf IEEE Eng Med Biol Soc; 2021: 791-794, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34891409

ABSTRACT

Multi-channel electroencephalography (EEG) signals are an important source of neural information for decoding motor imagery (MI) limb movement intent. The decoded MI movement intent often serves as a potential control input for brain-computer interface (BCI) based rehabilitation robots. However, the presence of multiple dynamic artifacts in EEG signals poses a serious processing challenge that affects BCI systems in practical settings. Hence, this study proposes a hybrid approach based on a low-rank spatiotemporal filtering technique for concurrent elimination of multiple EEG artifacts. Afterwards, a convolutional neural network based deep learning model (ConvNet-DL) that extracts neural information from the cleaned EEG signal for MI task decoding was built. The proposed method was compared with existing artifact removal methods using EEG signals of transhumeral amputees who performed five different MI tasks. Remarkably, the proposed method led to significant improvements in MI task decoding accuracy for the ConvNet-DL model in the range of 8.00~13.98%, while up to a 14.38% increase was recorded in terms of the Matthews correlation coefficient (MCC) at p < 0.05. Also, a signal-to-error ratio of more than 11 dB was recorded by the proposed method. Clinical Relevance - This study showed that a combination of the proposed hybrid EEG artifact removal method and ConvNet-DL can significantly improve the decoding accuracy of MI upper limb movement tasks. Our findings may provide a potential control input for BCI rehabilitation robotic systems.
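The two headline metrics in this abstract, the Matthews correlation coefficient and the signal-to-error ratio in dB, can be computed as below. This is an illustrative sketch, not code from the cited paper; the function names are hypothetical and use the standard textbook definitions of both metrics.

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """Matthews correlation coefficient from binary confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

def signal_to_error_ratio_db(clean, estimate):
    """SER in dB: power of the reference signal over power of the residual
    error left by an artifact-removal method."""
    signal_power = sum(c * c for c in clean)
    error_power = sum((c - e) ** 2 for c, e in zip(clean, estimate))
    return 10.0 * math.log10(signal_power / error_power)

# Toy usage with made-up confusion counts and signals.
mcc = matthews_corrcoef(tp=45, tn=40, fp=5, fn=10)
ser = signal_to_error_ratio_db([1.0, 2.0, 3.0], [1.0, 2.0, 3.3])
```

A higher SER means the cleaned signal is closer to the artifact-free reference; MCC is bounded in [-1, 1], with 1 indicating perfect agreement.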


Subject(s)
Brain-Computer Interfaces, Algorithms, Artifacts, Electroencephalography, Imagination
2.
Comput Methods Programs Biomed; 206: 106121, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33957375

ABSTRACT

BACKGROUND AND OBJECTIVE: Recognition of motor intention based on electroencephalogram (EEG) signals has attracted considerable research interest in the field of pattern recognition due to its notable application to non-muscular communication and control for those with severe motor disabilities. In the analysis of EEG data, achieving higher classification performance depends on an appropriate representation of EEG features, which is mostly characterized by one unique frequency band before applying a learning model. Neglecting other frequencies of the EEG signal can deteriorate the recognition performance of the model, because each frequency band has its unique advantages. Motivated by this idea, we propose to obtain distinguishable features at different frequencies by introducing an integrated deep learning model to accurately classify multiple classes of upper limb movement intentions. METHODS: The proposed model is a combination of a long short-term memory (LSTM) network and a stacked autoencoder (SAE). To validate the method, four high-level amputees were recruited to perform five motor intention tasks. The acquired EEG signals were first preprocessed before exploring the effect of input representation on the performance of the LSTM-SAE by feeding four task-related frequency bands into the model. The learning model was further improved by t-distributed stochastic neighbor embedding (t-SNE) to eliminate feature redundancy and to enhance motor intention recognition. RESULTS: The experimental results showed that the proposed model achieves an average performance of 99.01% for accuracy, 99.10% for precision, 99.09% for recall, 99.09% for F1-score, 99.77% for specificity, and 99.0% for Cohen's kappa, across multi-subject and multi-class scenarios. Further evaluation with 2-dimensional t-SNE revealed that the signal decomposition has distinct multi-class separability in the feature space.
CONCLUSION: This study demonstrated the superiority of the proposed model in its ability to accurately classify upper limb movements from multiple classes of EEG signals, and its potential application in the development of more intuitive and naturalistic prosthetic control.
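The metric suite this abstract reports (accuracy, precision, recall, F1-score, Cohen's kappa) can all be derived from one multi-class confusion matrix. The sketch below is a generic, macro-averaged implementation of those standard definitions, not the paper's evaluation code:

```python
def classification_metrics(cm):
    """Macro-averaged metrics from a square multi-class confusion matrix,
    where cm[i][j] = count of samples with true class i predicted as class j."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    accuracy = sum(cm[i][i] for i in range(n)) / total

    precisions, recalls, f1s = [], [], []
    for k in range(n):
        tp = cm[k][k]
        fp = sum(cm[i][k] for i in range(n)) - tp   # predicted k, truly other
        fn = sum(cm[k]) - tp                        # truly k, predicted other
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        precisions.append(p)
        recalls.append(r)
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)

    # Cohen's kappa: observed agreement corrected for chance agreement,
    # where chance is estimated from the row/column marginals.
    p_e = sum(sum(cm[k]) * sum(cm[i][k] for i in range(n))
              for k in range(n)) / (total * total)
    kappa = (accuracy - p_e) / (1 - p_e)
    return {"accuracy": accuracy,
            "precision": sum(precisions) / n,
            "recall": sum(recalls) / n,
            "f1": sum(f1s) / n,
            "kappa": kappa}

# Toy 2-class example with made-up counts.
m = classification_metrics([[4, 1], [2, 3]])
```

Kappa near 1, as reported in the abstract, means the classifier's agreement with ground truth far exceeds what class frequencies alone would produce.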


Subject(s)
Amputees, Brain-Computer Interfaces, Deep Learning, Electroencephalography, Humans, Intention, Upper Extremity
3.
Cogn Neurodyn; 14(5): 591-607, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33014175

ABSTRACT

Current prostheses are limited in their ability to provide direct sensory feedback to users with missing limbs. Several efforts have been made to restore tactile sensation to amputees, but somatotopic tactile feedback often results in unnatural sensations, and it remains unclear how and what information the somatosensory system receives during voluntary movement. The present study proposes an efficient model combining a stacked sparse autoencoder and a backpropagation neural network for detecting sensory events from a highly flexible electrocorticography (ECoG) electrode. During mechanical stimulation with Von Frey (VF) filaments on the plantar surface of the rats' feet, simultaneous recordings of tactile afferent signals were obtained from the primary somatosensory cortex (S1). To achieve a model with optimal performance, Particle Swarm Optimization and Adaptive Moment Estimation (Adam) were adopted to select the appropriate number of neurons, hidden layers and learning rate of each sparse autoencoder. We evaluated the stimulus-evoked sensation by using an automated up-down (UD) method, otherwise called UDReader. The assessment of tactile thresholds with VF filaments shows that the right side of the hind-paw was significantly more sensitive at the tibial (p = 6.50 × 10⁻⁴), followed by the saphenous (p = 7.84 × 10⁻⁴) and sural (p = 8.24 × 10⁻⁴) territories. We then validated our proposed model by comparing it with state-of-the-art methods, and recorded an accuracy of 98.8%, sensitivity of 96.8%, and specificity of 99.1%. Hence, we demonstrated the effectiveness of our algorithms in detecting sensory events through flexible ECoG recordings, which could be a viable option for restoring somatosensory feedback.
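The up-down (staircase) procedure mentioned above has a simple core logic: step the stimulus down after a response, up after no response, and average the intensities at direction reversals. The sketch below is a generic simplification of that idea, not the UDReader algorithm itself; the function name and parameters are hypothetical.

```python
def up_down_threshold(forces, responded, n_reversals=4, max_trials=100):
    """Simplified up-down (staircase) estimate of a tactile threshold.

    forces: ordered list of available stimulus intensities (e.g. VF filament
    forces in grams); responded(force) -> bool is the observed trial outcome.
    The staircase starts mid-range, moves one step down after a response and
    one step up after no response, and returns the mean stimulus intensity
    at the first n_reversals direction reversals.
    """
    idx = len(forces) // 2          # start mid-range
    last_direction = 0              # -1 = stepping down, +1 = stepping up
    reversal_forces = []
    trials = 0
    while len(reversal_forces) < n_reversals and trials < max_trials:
        trials += 1
        force = forces[idx]
        direction = -1 if responded(force) else +1
        if last_direction and direction != last_direction:
            reversal_forces.append(force)   # direction flipped: a reversal
        last_direction = direction
        idx = min(max(idx + direction, 0), len(forces) - 1)
    return sum(reversal_forces) / len(reversal_forces)

# Toy usage: a simulated subject that always responds at 2.0 g and above.
threshold = up_down_threshold([0.4, 0.6, 1.0, 1.4, 2.0, 4.0, 6.0],
                              lambda f: f >= 2.0)
```

The estimate brackets the simulated subject's true threshold between the largest non-responding and smallest responding filament forces.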

4.
Annu Int Conf IEEE Eng Med Biol Soc; 2020: 519-522, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33018041

ABSTRACT

Recently, there has been increasing recognition that sensory feedback is critical for proper motor control. With the help of a brain-computer interface (BCI), people with motor disabilities can communicate with their environment or control things around them using signals extracted directly from the brain. Widely used non-invasive EEG-based BCI systems require that the brain signals first be preprocessed and then translated into significant features that can be converted into commands for external control. Determining the appropriate information in the acquired brain signals is a major challenge for reliable classification accuracy due to the high dimensionality of the data. Feature selection is a feasible technique for solving this problem; however, an effective selection method for determining the best set of features that would yield significant classification performance has not yet been established for motor imagery (MI) based BCI. This paper explored the effectiveness of bio-inspired algorithms (BIA) such as Ant Colony Optimization (ACO), Genetic Algorithm (GA), Cuckoo Search Algorithm (CSA), and Modified Particle Swarm Optimization (M-PSO) on EEG and ECoG data. The performance of an SVM classifier showed that M-PSO is highly efficacious with the fewest selected features (SF), and converges at an acceptable speed in a small number of iterations.
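PSO-style feature selection typically encodes each candidate feature subset as a bit mask and evolves a swarm of such masks toward higher fitness. The sketch below is a minimal, generic binary PSO in that spirit; the M-PSO variant studied in the paper differs, and the fitness function here is a made-up toy rather than an SVM cross-validation score.

```python
import math
import random

def binary_pso_feature_selection(fitness, n_features, n_particles=10,
                                 n_iters=20, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal binary PSO: each particle is a bit mask over features, and a
    sigmoid of the velocity gives the probability that each bit is set."""
    rng = random.Random(seed)
    swarm = [[rng.random() < 0.5 for _ in range(n_features)]
             for _ in range(n_particles)]
    vel = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [s[:] for s in swarm]
    pbest_fit = [fitness(s) for s in swarm]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]

    for _ in range(n_iters):
        for i, particle in enumerate(swarm):
            for d in range(n_features):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - particle[d])
                             + c2 * r2 * (gbest[d] - particle[d]))
                # Sigmoid squashes velocity to a bit-set probability.
                particle[d] = rng.random() < 1.0 / (1.0 + math.exp(-vel[i][d]))
            fit = fitness(particle)
            if fit > pbest_fit[i]:
                pbest[i], pbest_fit[i] = particle[:], fit
                if fit > gbest_fit:
                    gbest, gbest_fit = particle[:], fit
    return gbest, gbest_fit

# Toy fitness: reward hitting a known "relevant" subset, penalize mask size.
relevant = {0, 2, 5}
def toy_fitness(mask):
    return sum(1 for d in relevant if mask[d]) - 0.1 * sum(mask)

mask, fit = binary_pso_feature_selection(toy_fitness, n_features=8)
```

The size penalty mirrors the paper's emphasis on selecting the fewest features: the best attainable toy fitness keeps only the three relevant bits.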


Subject(s)
Brain-Computer Interfaces, Algorithms, Electroencephalography, Humans, Imagery, Psychotherapy, Imagination
5.
Comput Biol Med; 125: 103879, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32890977

ABSTRACT

BACKGROUND: In medical diagnostics, breast ultrasound is an inexpensive and flexible imaging modality. The segmentation of breast ultrasound images to identify tumour regions is a challenging and complex task. The major obstacles to effective tumour identification are speckle noise, artefacts and low contrast. The gold standard for segmentation is manual processing; however, manual segmentation is a cumbersome task. To address this problem, an automatic multiscale superpixel method for the segmentation of breast ultrasound images is proposed. METHODS: The original breast ultrasound image was transformed into multiscaled images, which were then preprocessed. Next, a boundary-efficient superpixel decomposition of the multiscaled images was created. Finally, the tumour region was generated by the boundary graph cut segmentation method. The proposed method was evaluated with 120 images from the Thammasat University Hospital database. The dataset consists of 30 malignant, 30 benign tumour, 60 fibroadenoma, and 60 cyst images. Popular metrics, such as accuracy, sensitivity, specificity, Dice index, Jaccard index and Hausdorff distance, were used for the evaluation. RESULTS: The results indicate that the proposed method achieves a segmentation accuracy of 97.3% for benign tumours, 94.2% for malignant tumours, 96.4% for cysts and 96.7% for fibroadenomas, and validate that the proposed model outperforms selected state-of-the-art segmentation methods. CONCLUSIONS: The proposed method outperforms selected state-of-the-art segmentation methods with an average segmentation accuracy of 94%.
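Two of the overlap metrics this abstract uses, the Dice index and the Jaccard index, compare a predicted segmentation mask against a manual ground-truth mask. The sketch below implements their standard definitions for flat binary masks; it is an illustration of the metrics, not the paper's evaluation pipeline.

```python
def dice_jaccard(mask_a, mask_b):
    """Dice and Jaccard indices for two binary segmentation masks,
    given as same-length flat sequences of 0/1 pixel labels."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size_a, size_b = sum(mask_a), sum(mask_b)
    union = size_a + size_b - inter
    dice = 2 * inter / (size_a + size_b) if size_a + size_b else 1.0
    jaccard = inter / union if union else 1.0
    return dice, jaccard

# Toy usage: two 2x2 masks flattened row-wise, overlapping in one pixel.
d, j = dice_jaccard([1, 1, 0, 0], [1, 0, 1, 0])
```

The two indices are monotonically related (Dice = 2J / (1 + J)), so rankings of segmentation methods agree under either one; Dice weights the overlap region more heavily.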


Subject(s)
Breast Neoplasms, Ultrasonography, Mammary, Algorithms, Artifacts, Breast/diagnostic imaging, Breast Neoplasms/diagnostic imaging, Female, Humans, Image Processing, Computer-Assisted, Ultrasonography