Results 1 - 2 of 2
1.
Comput Biol Med. 2023 Jun;159:106909.
Article in English | MEDLINE | ID: mdl-37071937

ABSTRACT

Speech imagery has been successfully employed in developing Brain-Computer Interfaces because it is a novel mental strategy that generates brain activity more intuitively than evoked potentials or motor imagery. There are many methods to analyze speech imagery signals, but those based on deep neural networks achieve the best results. However, more research is necessary to understand the properties and features that describe imagined phonemes and words. In this paper, we analyze the statistical properties of speech imagery EEG signals from the KaraOne dataset to design a method that classifies imagined phonemes and words. Based on this analysis, we propose a Capsule Neural Network that categorizes speech imagery patterns into bilabial, nasal, consonant-vowel, and the vowels /iy/ and /uw/. The method is called Capsules for Speech Imagery Analysis (CapsK-SI). The input of CapsK-SI is a set of statistical features of EEG speech imagery signals. The architecture of the Capsule Neural Network is composed of a convolution layer, a primary capsule layer, and a class capsule layer. The average accuracy reached is 90.88% ± 7 for bilabial, 90.15% ± 8 for nasal, 94.02% ± 6 for consonant-vowel, 89.70% ± 8 for word-phoneme, 94.33% ± for /iy/ vowel, and 94.21% ± 3 for /uw/ vowel detection. Finally, with the activity vectors of the CapsK-SI capsules, we generated brain maps that represent brain activity during the production of bilabial, nasal, and consonant-vowel signals.
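For readers unfamiliar with capsule networks, the following is a minimal PyTorch sketch of the three-layer architecture the abstract names: one convolution layer, a primary capsule layer, and a class capsule layer with dynamic routing. It is not the authors' CapsK-SI code; the feature count, capsule dimensions, class count, and routing iterations are all assumptions, and only the overall structure follows the abstract.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def squash(s, dim=-1):
        # Capsule nonlinearity: shrinks a vector's length into [0, 1)
        # while preserving its orientation.
        n2 = (s ** 2).sum(dim=dim, keepdim=True)
        return (n2 / (1.0 + n2)) * s / torch.sqrt(n2 + 1e-9)

    class CapsNetSketch(nn.Module):
        # Hypothetical sizes: 62 statistical EEG features in, 2 classes out.
        def __init__(self, in_features=62, n_classes=2,
                     n_primary=8, prim_dim=8, class_dim=16, routing_iters=3):
            super().__init__()
            # Convolution layer over the vector of statistical features.
            self.conv = nn.Conv1d(1, 32, kernel_size=5, padding=2)
            # Primary capsule layer: project conv output to n_primary
            # capsules of prim_dim dimensions each.
            self.primary = nn.Linear(32 * in_features, n_primary * prim_dim)
            self.n_primary, self.prim_dim = n_primary, prim_dim
            # Transformation matrices W_ij for routing to class capsules.
            self.W = nn.Parameter(
                0.01 * torch.randn(n_primary, n_classes, class_dim, prim_dim))
            self.routing_iters = routing_iters

        def forward(self, x):                      # x: (batch, in_features)
            h = F.relu(self.conv(x.unsqueeze(1)))  # (batch, 32, in_features)
            u = self.primary(h.flatten(1))
            u = squash(u.view(-1, self.n_primary, self.prim_dim))
            # Predictions u_hat_{j|i} = W_ij u_i.
            u_hat = torch.einsum('ijkl,bil->bijk', self.W, u)
            b = torch.zeros(u.size(0), self.n_primary, u_hat.size(2),
                            device=x.device)
            for _ in range(self.routing_iters):    # routing by agreement
                c = F.softmax(b, dim=-1)           # coupling coefficients
                s = (c.unsqueeze(-1) * u_hat).sum(dim=1)
                v = squash(s)                      # class capsule activity
                b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)
            return v.norm(dim=-1)                  # capsule lengths as scores

The class scores are the lengths of the class-capsule activity vectors; per the abstract, those same activity vectors are also what the authors project back into brain maps.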


Subject(s)
Brain-Computer Interfaces , Speech , Speech/physiology , Capsules , Electroencephalography/methods , Neural Networks, Computer , Brain/physiology , Imagination/physiology , Algorithms
2.
Cogn Process. 2022 Feb;23(1):27-40.
Article in English | MEDLINE | ID: mdl-34779948

ABSTRACT

Scene analysis in video sequences is a complex task for a computer vision system. Several schemes have been applied to this analysis, such as deep learning networks or traditional image processing methods. However, these methods require thorough training or manual adjustment of parameters to achieve accurate results. Therefore, it is necessary to develop novel methods to analyze the scene information in video sequences. For this reason, this paper proposes a method for object segmentation in video sequences inspired by the structural layers of the visual cortex. The method is called Neuro-Inspired Object Segmentation (SegNI). SegNI has a hierarchical architecture that analyzes object features such as edges, color, and motion to generate regions that represent the objects in the scene. The results obtained on the Video Segmentation Benchmark (VSB100) dataset demonstrate that SegNI adapts automatically to videos whose scenes differ in nature, composition, and the types of objects they contain. Moreover, SegNI adapts its processing to new scene conditions without training, a significant advantage over deep learning networks.
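As a rough illustration of this kind of layered, training-free analysis, the sketch below combines edge, color, and motion cues per frame using OpenCV. It is not the published SegNI implementation: the specific operators (Canny edges, hue quantization, Farneback optical flow) and all thresholds are assumptions standing in for the biologically inspired layers the paper describes.

    import cv2
    import numpy as np

    def segment_frame(prev_gray, frame_bgr):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

        # "Edge layer": Canny edge responses.
        edges = cv2.Canny(gray, 50, 150)

        # "Color layer": coarse hue quantization groups similar colors.
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        hue_bins = (hsv[..., 0] // 30).astype(np.uint8)

        # "Motion layer": dense optical flow magnitude between frames.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        motion = np.linalg.norm(flow, axis=-1)
        moving = (motion > 1.0).astype(np.uint8)

        # Combine layers: close small gaps in the moving mask, cut the
        # mask at strong edges, and label the remaining regions.
        mask = cv2.morphologyEx(moving, cv2.MORPH_CLOSE,
                                np.ones((5, 5), np.uint8))
        mask[edges > 0] = 0
        n_labels, labels = cv2.connectedComponents(mask)
        # Return gray so the caller can pass it as prev_gray next frame.
        return labels, hue_bins, gray

Because every step is a fixed image-processing operator, this pipeline needs no training data, which mirrors the advantage the abstract claims over deep learning networks.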


Subject(s)
Algorithms , Visual Cortex , Artificial Intelligence , Humans , Image Processing, Computer-Assisted , Motion