Results 1 - 12 of 12
1.
Front Neurosci ; 18: 1381572, 2024.
Article in English | MEDLINE | ID: mdl-38872939

ABSTRACT

Introduction: Brain-computer interfaces (BCIs), which establish a direct interaction between the brain and an external device while bypassing peripheral nerves, are an active research area. How to effectively convert brain intentions into commands for controlling external devices in real time remains a key issue to be addressed in BCIs. Riemannian geometry-based methods have achieved competitive results in decoding EEG signals. However, current Riemannian classifiers tend to overlook changes in the data distribution, resulting in degraded classification performance in cross-session and/or cross-subject scenarios. Methods: This paper proposes a brain-signal decoding method based on Riemannian transfer learning that fully accounts for drift in the data distribution. Two Riemannian transfer learning methods based on the log-Euclidean metric are developed, such that historical data (the source domain) can aid the training of the Riemannian decoder for the current task, or data from other subjects can boost the training of the decoder for the target subject. Results: The proposed methods were verified on the BCI Competition III (IIIa) and BCI Competition IV (2a) datasets. Compared with a baseline without transfer learning, the proposed algorithm demonstrates superior classification performance. Compared with the Riemannian transfer learning method based on the affine-invariant Riemannian metric, the proposed method obtains comparable classification performance while being much more computationally efficient. Discussion: With the proposed transfer learning, the Riemannian classifier achieves performance competitive with existing methods in the literature. More importantly, the transfer learning process is unsupervised and time-efficient, giving it potential for online learning scenarios.
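
The unsupervised transfer step lends itself to a short illustration. Below is a minimal sketch (my own code, not the authors') of one common log-Euclidean alignment strategy: re-centring each domain's SPD covariance matrices so that its log-Euclidean mean maps to the identity before a Riemannian decoder is trained or applied. Function names and the random SPD stand-ins are illustrative.

import numpy as np

def _sym_fn(M, fn):
    """Apply a scalar function to the eigenvalues of a symmetric matrix."""
    w, V = np.linalg.eigh(M)
    return (V * fn(w)) @ V.T

def recenter_lem(covs):
    """Shift each covariance in log-space so the domain's log-Euclidean mean becomes the identity."""
    logs = [_sym_fn(C, np.log) for C in covs]
    log_mean = np.mean(logs, axis=0)
    return [_sym_fn(L - log_mean, np.exp) for L in logs]

# Usage with random SPD stand-ins for source (historical) and target (current) trials:
rng = np.random.default_rng(0)
def random_spd(n=8):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

source_aligned = recenter_lem([random_spd() for _ in range(20)])
target_aligned = recenter_lem([random_spd() for _ in range(20)])
# A Riemannian decoder trained on source_aligned can then be applied to target_aligned.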

2.
Cogn Neurodyn ; 18(3): 1227-1243, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38826659

ABSTRACT

Grid cells in the medial entorhinal cortex are widely recognized as a critical component of spatial cognition within the entorhinal-hippocampal neuronal circuits. Several computational models have been proposed to account for their hexagonal firing patterns. However, there is still considerable debate regarding the interaction between grid cells and place cells. In response, we developed a novel grid-cell computational model based on cognitive space transformation, which establishes a theoretical framework for the interaction between place cells and grid cells in encoding and transforming positions between the local frame and the global frame. Our model can not only generate the firing patterns of grid cells but also reproduce the experimental findings on the global representation of connected environments by grid cells, and it supports the conjecture about the underlying cause. Moreover, our model provides new insights into how grid cells and place cells integrate external and self-motion cues.
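
As a toy illustration (far simpler than the model itself, with made-up spacings and coordinates), the two ingredients the abstract combines can be sketched as a rigid local-to-global frame transform plus a grid-code-style encoding of position as phases modulo several module spacings:

import numpy as np

def local_to_global(p_local, theta, origin):
    """Rotate a local 2-D position by theta and translate it into the global frame."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ p_local + origin

def grid_phases(p, spacings=(0.3, 0.42, 0.59)):
    """Encode a 2-D position as its phase within each grid module (period = spacing)."""
    return np.array([np.mod(p, s) / s for s in spacings])

p_room = np.array([1.0, 0.5])                                   # position in the local (room) frame
p_world = local_to_global(p_room, np.pi / 4, np.array([2.0, 3.0]))
print(grid_phases(p_world))                                     # phases in [0, 1) per module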

3.
Microbiol Resour Announc ; : e0002624, 2024 May 29.
Article in English | MEDLINE | ID: mdl-38809065

ABSTRACT

Pseudomonas aeruginosa L3, isolated from heavy-metal-contaminated soils, is capable of Mn(II) oxidation. To further the understanding of the genes involved in Mn(II) oxidation, the complete genome of this strain was sequenced and annotated; it has a total size of 6.39 Mb with a G+C content of 66.39%.
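
For context, the reported G+C content is the fraction of G and C bases over the whole assembly. A minimal sketch of the computation (the file name and single-FASTA parsing are illustrative, not taken from the announcement):

def gc_content(path):
    """Percentage of G and C bases across all sequences in a FASTA file."""
    g = c = total = 0
    with open(path) as fh:
        for line in fh:
            if line.startswith(">"):
                continue
            seq = line.strip().upper()
            g += seq.count("G")
            c += seq.count("C")
            total += len(seq)
    return 100.0 * (g + c) / total

# print(gc_content("P_aeruginosa_L3.fasta"))   # hypothetical file name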

4.
Brain Sci ; 13(5)2023 May 10.
Article in English | MEDLINE | ID: mdl-37239253

ABSTRACT

The brain-computer interface (BCI) provides direct communication between human brains and machines, including robots, drones and wheelchairs, without the involvement of peripheral systems. BCIs based on electroencephalography (EEG) have been applied in many fields, including aiding people with physical disabilities, rehabilitation, education and entertainment. Among the different EEG-based BCI paradigms, steady-state visual evoked potential (SSVEP)-based BCIs are known for their lower training requirements, high classification accuracy and high information transfer rate (ITR). In this article, a filter bank complex spectrum convolutional neural network (FB-CCNN) is proposed; it achieved leading classification accuracies of 94.85 ± 6.18% and 80.58 ± 14.43% on two open SSVEP datasets. An optimization algorithm named artificial gradient descent (AGD) is also proposed to generate and optimize the hyperparameters of the FB-CCNN, and it revealed correlations between different hyperparameters and the corresponding performance. Experiments demonstrated that the FB-CCNN performed better when its hyperparameters were set to fixed values rather than derived from the channel count. In conclusion, a deep learning model, FB-CCNN, and a hyperparameter-optimization algorithm, AGD, are proposed and experimentally shown to be effective for classifying SSVEPs. The hyperparameter design process and analysis were carried out with AGD, and advice on choosing hyperparameters for deep learning models that classify SSVEPs is provided.
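
A hedged sketch of the kind of input the name "filter bank complex spectrum CNN" suggests (the band edges, filter order and FFT length below are illustrative, not the paper's settings): band-pass the EEG into several sub-bands, take the FFT of each, and stack the real and imaginary parts as channels for a CNN.

import numpy as np
from scipy.signal import butter, filtfilt

def filter_bank_complex_spectrum(eeg, fs, bands=((8, 60), (16, 60), (24, 60)), nfft=512):
    """eeg: (channels, samples). Returns an array of shape (len(bands), channels, 2 * (nfft // 2 + 1))."""
    features = []
    for lo, hi in bands:
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        sub = filtfilt(b, a, eeg, axis=-1)                       # sub-band filtered EEG
        spec = np.fft.rfft(sub, n=nfft, axis=-1)                 # complex spectrum per channel
        features.append(np.concatenate([spec.real, spec.imag], axis=-1))
    return np.stack(features)

# Usage with a random stand-in trial: 8 channels, 1 s at 250 Hz.
x = np.random.randn(8, 250)
print(filter_bank_complex_spectrum(x, fs=250).shape)             # (3, 8, 514)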

5.
Brain Sci ; 13(3)2023 Mar 13.
Article in English | MEDLINE | ID: mdl-36979293

ABSTRACT

The brain-computer interface (BCI), which provides a new way for humans to communicate directly with robots without involving the peripheral nervous system, has recently attracted much attention. Among all BCI paradigms, BCIs based on steady-state visual evoked potentials (SSVEPs) have the highest information transfer rate (ITR) and the shortest training time. Meanwhile, deep learning has provided an effective and feasible solution for complex classification problems in many fields, and many researchers have started to apply it to the classification of SSVEP signals. However, the designs of deep learning models vary drastically, and many hyperparameters influence model performance in ways that are hard to predict. This study surveyed 31 deep learning models (2011-2023) used to classify SSVEP signals and analyzed their design aspects, including model input, model structure and performance measures. Most of the studies surveyed in this paper were published in 2021 and 2022. This survey serves as an up-to-date design guide for researchers interested in using deep learning models to classify SSVEP signals.

6.
IEEE Trans Cybern ; 53(8): 5178-5190, 2023 Aug.
Article in English | MEDLINE | ID: mdl-35700257

ABSTRACT

In many classification scenarios, the data to be analyzed can be naturally represented as points on the curved Riemannian manifold of symmetric positive-definite (SPD) matrices. Owing to this non-Euclidean geometry, standard Euclidean learning algorithms may deliver poor performance on such data. We propose a principled reformulation of the successful Euclidean generalized learning vector quantization (GLVQ) methodology to deal with such data, accounting for the nonlinear Riemannian geometry of the manifold through the log-Euclidean metric (LEM). We first generalize GLVQ to the manifold of SPD matrices by exploiting the LEM-induced geodesic distance (GLVQ-LEM). We then extend GLVQ-LEM with metric learning. In particular, we study both 1) a more straightforward implementation of the metric learning idea that adapts the metric in the space of vectorized log-transformed SPD matrices and 2) the full formulation of metric learning without matrix vectorization, which preserves the second-order tensor structure. To obtain the distance metric in the full LEM learning (LEML) approaches, two algorithms are proposed. One restricts the distance metric to be full rank, treating the distance metric tensor as an SPD matrix and reusing the LEM framework (GLVQ-LEML-LEM). The other imposes no such restriction, treating the distance metric tensor as a fixed-rank positive semidefinite matrix living on a quotient manifold whose total space is equipped with a flat geometry (GLVQ-LEML-FM). Experiments on multiple datasets of different natures demonstrate the good performance of the proposed methods.
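
A minimal sketch (my own notation; prototype updates and metric learning omitted) of the two quantities GLVQ-LEM builds on: the log-Euclidean geodesic distance between SPD matrices and the standard GLVQ relative distance.

import numpy as np

def spd_log(C):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def lem_distance(A, B):
    """Log-Euclidean geodesic distance: Frobenius norm of the difference of matrix logs."""
    return np.linalg.norm(spd_log(A) - spd_log(B), ord="fro")

def glvq_mu(x, proto_same, proto_other):
    """GLVQ relative distance; negative values indicate correct classification."""
    d_plus = lem_distance(x, proto_same)
    d_minus = lem_distance(x, proto_other)
    return (d_plus - d_minus) / (d_plus + d_minus)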

7.
Neural Netw ; 142: 105-118, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33984734

ABSTRACT

In this paper, we develop a new classification method for manifold-valued data in the framework of probabilistic learning vector quantization. In many classification scenarios, the data can be naturally represented by symmetric positive-definite matrices, which are inherently points on a curved Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds, traditional Euclidean machine learning algorithms yield poor results on such data. We generalize the probabilistic learning vector quantization algorithm to data points on the manifold of symmetric positive-definite matrices equipped with the Riemannian natural (affine-invariant) metric. By exploiting the induced Riemannian distance, we derive the probabilistic learning Riemannian space quantization algorithm, obtaining its learning rule through Riemannian gradient descent. Empirical investigations on synthetic data, image data, and motor imagery electroencephalogram (EEG) data demonstrate the superior performance of the proposed method.


Subjects
Algorithms, Machine Learning, Electroencephalography
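
For reference, a minimal sketch of the affine-invariant (natural) Riemannian distance the learning rule is derived from, d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F, computed here via the generalized eigenvalues of the pencil (B, A); the probabilistic assignments and prototype updates are not shown.

import numpy as np
from scipy.linalg import eigvalsh

def airm_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices A and B."""
    w = eigvalsh(B, A)                      # generalized eigenvalues of B v = w A v
    return np.sqrt(np.sum(np.log(w) ** 2))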
8.
IEEE Trans Neural Netw Learn Syst ; 32(1): 281-292, 2021 01.
Article in English | MEDLINE | ID: mdl-32203035

ABSTRACT

Learning vector quantization (LVQ) is a simple and efficient classification method that enjoys great popularity. However, in many classification scenarios, such as electroencephalogram (EEG) classification, the input features are represented by symmetric positive-definite (SPD) matrices that live on a curved manifold rather than vectors that live in flat Euclidean space. In this article, we propose a new classification method, within the LVQ framework, for data points that live on curved Riemannian manifolds. The proposed method replaces the Euclidean distance in generalized LVQ (GLVQ) with a distance defined under the appropriate Riemannian metric. We instantiate the proposed method for the Riemannian manifold of SPD matrices equipped with the Riemannian natural metric. Empirical investigations on synthetic data and real-world motor imagery EEG data demonstrate that the proposed generalized learning Riemannian space quantization can significantly outperform Euclidean GLVQ, generalized relevance LVQ (GRLVQ) and generalized matrix LVQ (GMLVQ). The proposed method also shows performance competitive with state-of-the-art methods on EEG classification of motor imagery tasks.


Subjects
Electroencephalography/classification, Machine Learning, Algorithms, Classification/methods, Cues (Psychology), Humans, Computer-Assisted Image Processing/methods, Imagination, Movement, Neural Networks (Computer), Reproducibility of Results
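
For reference, a hedged reconstruction (standard GLVQ notation, not copied from the article) of the cost that the Riemannian variant adapts: each sample x_i contributes a monotone function of the relative distance difference,

E = \sum_i f\!\left( \frac{d^{+}(x_i) - d^{-}(x_i)}{d^{+}(x_i) + d^{-}(x_i)} \right),

where d^{+} is the distance to the closest prototype with the same label, d^{-} the distance to the closest prototype with a different label, and f is typically a sigmoid. The Riemannian version replaces the Euclidean distance in d^{\pm} with the manifold distance and updates prototypes by Riemannian gradient steps.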
9.
Neural Netw ; 126: 21-35, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32179391

ABSTRACT

Spatial navigation depends on combining multiple sensory cues from idiothetic and allothetic sources. The computational mechanisms by which mammalian brains integrate different sensory modalities under uncertainty for navigation are enlightening for robot navigation. We propose a Bayesian attractor network model that integrates visual and vestibular inputs, inspired by the spatial memory systems of mammalian brains. In the model, the pose of the robot is encoded separately by two sub-networks, namely a head direction network for angle representation and a grid cell network for position representation, using neural codes similar to those of head direction cells and grid cells observed in mammalian brains. The neural codes in each sub-network are updated in a Bayesian manner by a population of integrator cells for vestibular cue integration, as well as a population of calibration cells for visual cue calibration. Conflict between the vestibular cue and the visual cue is resolved by the competitive dynamics between the two populations. The model, implemented on a monocular visual simultaneous localization and mapping (SLAM) system termed NeuroBayesSLAM, successfully builds semi-metric topological maps and self-localizes in outdoor and indoor environments with different characteristics, achieving performance comparable to previous neurobiologically inspired navigation systems but with much lower computational complexity. The proposed multisensory integration method constitutes a concise yet robust and biologically plausible approach to robot navigation in large environments. The model provides a viable Bayesian mechanism for multisensory integration that may pertain to other neural subsystems beyond spatial cognition.


Subjects
Neurological Models, Robotics/methods, Spatial Navigation, Animals, Bayes Theorem, Brain/physiology, Cues (Psychology)
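
A toy sketch (much simpler than the attractor network itself, and ignoring circular statistics for angular variables) of the Bayesian idea behind the cue integration: two noisy Gaussian estimates of the same state are fused with weights proportional to their precisions.

def fuse_gaussian_cues(mu_vestibular, var_vestibular, mu_visual, var_visual):
    """Precision-weighted fusion of two Gaussian estimates of the same quantity."""
    w_vest = 1.0 / var_vestibular
    w_vis = 1.0 / var_visual
    mu = (w_vest * mu_vestibular + w_vis * mu_visual) / (w_vest + w_vis)
    var = 1.0 / (w_vest + w_vis)
    return mu, var

# Example: a drifting path-integration estimate corrected by a sharper visual fix.
print(fuse_gaussian_cues(0.30, 0.04, 0.10, 0.01))   # fused mean is pulled toward the visual cue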
10.
Neural Netw ; 114: 67-77, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30897519

ABSTRACT

Brain-computer interfaces (BCIs), which control external equipment using cerebral activity, have received considerable attention recently. Translating brain activity measured by electroencephalography (EEG) into correct control commands is a critical problem in this field. Most existing EEG decoding methods separate feature extraction from classification and thus are not robust across different BCI users. In this paper, we propose to learn subject-specific features jointly with the classification rule. We develop a deep convolutional network (ConvNet) to decode EEG signals end-to-end by stacking time-frequency transformation, spatial filtering, and classification together. The proposed ConvNet implements a joint space-time-frequency feature extraction scheme for EEG decoding. The Morlet wavelet-like kernels used in our network significantly reduce the number of parameters compared with classical convolutional kernels and endow the features learned at the corresponding layer with a clear interpretation, i.e., spectral amplitude. We further utilize subject-to-subject weight transfer, which initializes the network for a new subject with the parameters of networks trained on existing subjects, to resolve the dilemma between the large amount of data required to train deep ConvNets and the small amount of labeled data typically collected in BCI experiments. The proposed approach is evaluated on three public datasets, obtaining classification performance superior to state-of-the-art methods.


Subjects
Brain-Computer Interfaces, Electroencephalography/methods, Machine Learning, Neural Networks (Computer), Humans
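
A hedged sketch of a Morlet-wavelet-style temporal kernel of the kind the abstract describes: each filter is parameterised only by a centre frequency and a width rather than by freely learned taps, which is where the parameter saving and the spectral-amplitude interpretation come from. The exact parameterisation and network layout used in the paper are not reproduced here.

import numpy as np

def morlet_kernel(freq_hz, width_s, fs, length):
    """Complex Morlet-like kernel sampled at fs; real/imag parts act as cosine/sine filters."""
    t = (np.arange(length) - length // 2) / fs
    envelope = np.exp(-0.5 * (t / width_s) ** 2)
    return envelope * np.exp(2j * np.pi * freq_hz * t)

# Convolving an EEG channel with the kernel gives a band-limited analytic signal whose
# magnitude approximates the spectral amplitude around freq_hz.
fs = 250
kernel = morlet_kernel(freq_hz=10.0, width_s=0.1, fs=fs, length=125)
x = np.random.randn(1000)
amplitude_10hz = np.abs(np.convolve(x, kernel, mode="same"))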
11.
Neural Netw ; 93: 76-88, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28552507

ABSTRACT

Ordinal regression, which predicts categories on an ordinal scale, has recently received considerable attention. In this paper, we propose a new approach for solving ordinal regression problems within the learning vector quantization framework. It extends the previous approach, ordinal generalized matrix learning vector quantization, with a more suitable and natural cost function, leading to more intuitive parameter update rules. Moreover, in our approach the bandwidth of the prototype weights is adapted automatically. Empirical investigation on a number of datasets reveals that the proposed approach tends to have superior out-of-sample performance overall when compared with alternative ordinal regression methods.


Subjects
Algorithms, Machine Learning, Learning
12.
Neural Comput ; 27(4): 954-81, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25734495

ABSTRACT

In this letter, we explore the idea of modeling slack variables in support vector machine (SVM) approaches. The study is motivated by SVM+, which models the slacks through a smooth correcting function determined by additional (privileged) information about the training examples that is not available in the test phase. We take a closer look at the meaning and consequences of smooth modeling of slacks, as opposed to determining them in an unconstrained manner through the SVM optimization program. To better understand this difference, we allow the determination and the modeling of slack values to use only the same information, that is, the same training input in the original input space. We also explore whether it is possible to improve classification performance by combining, in a convex combination, the original SVM slacks with the modeled ones. We show experimentally that this approach not only leads to improved generalization performance but also yields more compact, lower-complexity models. Finally, we extend this idea to the context of ordinal regression, where a natural order among the classes exists. The experimental results confirm the principal findings from the binary case.
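
One way to write the convex-combination idea down (a hedged sketch in my own notation, not necessarily the paper's exact program): keep the free SVM slacks \xi_i together with a smooth correcting function f(x_i) = \langle \tilde{w}, x_i \rangle + \tilde{b} that models the slacks from the same training inputs, and mix them with \lambda \in [0, 1]:

\min_{w, b, \tilde{w}, \tilde{b}, \xi} \; \tfrac{1}{2}\lVert w \rVert^2 + \tfrac{\gamma}{2}\lVert \tilde{w} \rVert^2 + C \sum_i \bigl[ \lambda \xi_i + (1 - \lambda) f(x_i) \bigr]

subject to y_i(\langle w, x_i \rangle + b) \ge 1 - \lambda \xi_i - (1 - \lambda) f(x_i), \quad \xi_i \ge 0, \quad f(x_i) \ge 0.

Setting \lambda = 1 recovers the standard SVM, while \lambda = 0 gives an SVM+-style program with the correcting function defined on the original inputs rather than on privileged features.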
