1.
Article in English | MEDLINE | ID: mdl-37021989

ABSTRACT

Neuropsychological studies suggest that cooperative activity among different functional areas of the brain drives high-level cognitive processes. To learn brain activity within and among these functional areas, we propose the local-global-graph network (LGGNet), a novel neurologically inspired graph neural network (GNN) that learns local-global-graph (LGG) representations of electroencephalography (EEG) for brain-computer interfaces (BCIs). The input layer of LGGNet comprises a series of temporal convolutions with multiscale 1-D convolutional kernels and kernel-level attentive fusion. It captures the temporal dynamics of the EEG, which then serve as input to the proposed local- and global-graph-filtering layers. Using a defined, neurophysiologically meaningful set of local and global graphs, LGGNet models the complex relations within and among functional areas of the brain. Under robust nested cross-validation settings, the proposed method is evaluated on three publicly available datasets across four types of cognitive classification tasks: attention, fatigue, emotion, and preference classification. LGGNet is compared with state-of-the-art (SOTA) methods such as DeepConvNet, EEGNet, R2G-STNN, TSception, the regularized graph neural network (RGNN), the attention-based multiscale convolutional neural network-dynamical graph convolutional network (AMCNN-DGCN), the hierarchical recurrent neural network (HRNN), and GraphNet. The results show that LGGNet outperforms these methods, with statistically significant improvements in most cases, indicating that bringing neuroscience prior knowledge into neural network design improves classification performance. The source code can be found at https://github.com/yi-ding-cs/LGG.
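The abstract describes the architecture only at a high level. Below is a minimal PyTorch sketch of its two core ideas: multiscale temporal convolution with kernel-level attentive fusion, and local (within-region) aggregation followed by global (across-region) graph filtering. The layer sizes, electrode-to-region grouping, pooling step, and class names are illustrative assumptions, not the authors' implementation; the released code at https://github.com/yi-ding-cs/LGG is authoritative.

```python
import torch
import torch.nn as nn

class MultiScaleTemporalConv(nn.Module):
    """Parallel 1-D temporal convolutions with different kernel sizes,
    fused by learned kernel-level attention weights."""
    def __init__(self, kernel_sizes=(15, 31, 63), out_ch=16):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(1, out_ch, (1, k), padding=(0, k // 2))
            for k in kernel_sizes
        )
        self.attn = nn.Parameter(torch.zeros(len(kernel_sizes)))  # one logit per kernel

    def forward(self, x):                          # x: (batch, 1, electrodes, time)
        feats = torch.stack([b(x) for b in self.branches])        # (K, batch, C, E, T)
        w = torch.softmax(self.attn, dim=0).view(-1, 1, 1, 1, 1)  # kernel-level fusion
        return (w * feats).sum(dim=0)              # (batch, C, electrodes, time)

class LocalGlobalGraph(nn.Module):
    """Aggregate electrode features within predefined functional regions
    (local graphs), then filter once over a learnable region-level
    adjacency (global graph)."""
    def __init__(self, regions, feat_dim):
        super().__init__()
        self.regions = regions                     # electrode-index lists per region
        n = len(regions)
        self.adj = nn.Parameter(torch.eye(n) + 0.01 * torch.randn(n, n))
        self.proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, x):                          # x: (batch, electrodes, feat)
        local = torch.stack([x[:, idx].mean(dim=1) for idx in self.regions], dim=1)
        a = torch.softmax(self.adj, dim=-1)        # row-normalised adjacency
        return torch.relu(self.proj(a @ local))    # (batch, regions, feat)

# Toy usage: 32-channel EEG grouped into 4 illustrative regions of 8 electrodes.
regions = [list(range(i, i + 8)) for i in range(0, 32, 8)]
eeg = torch.randn(2, 1, 32, 128)                   # (batch, 1, electrodes, time)
t = MultiScaleTemporalConv()(eeg)                  # (2, 16, 32, 128)
feats = t.mean(dim=3).transpose(1, 2)              # crude temporal pooling -> (2, 32, 16)
out = LocalGlobalGraph(regions, feat_dim=16)(feats)  # (2, 4, 16) region embeddings
```

A classifier head over the region embeddings would complete the pipeline; the neuroscience prior enters through the fixed electrode-to-region grouping, which constrains which connections the model can learn.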

2.
Neural Netw; 162: 34-45, 2023 May.
Article in English | MEDLINE | ID: mdl-36878169

ABSTRACT

Learning knowledge from different tasks to improve general learning performance is crucial for designing efficient algorithms. In this work, we tackle the multi-task learning (MTL) problem, in which the learner extracts knowledge from different tasks simultaneously with limited data. Previous works have designed MTL models using transfer-learning techniques that require knowledge of the task index, which is unrealistic in many practical scenarios. In contrast, we consider the setting in which the task index is not explicitly known, so the features extracted by the neural network must be task-agnostic. To learn such task-agnostic invariant features, we implement model-agnostic meta-learning with an episodic training scheme that captures the features common across tasks. On top of the episodic training scheme, we further add a contrastive learning objective that improves feature compactness, yielding a better prediction boundary in the embedding space. We conduct extensive experiments on several benchmarks against several recent strong baselines to demonstrate the effectiveness of the proposed method. The results show that our method provides a practical solution for real-world scenarios in which the task index is unknown to the learner, outperforming several strong baselines and achieving state-of-the-art performance.
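To make the training scheme concrete, here is a hedged PyTorch sketch of episodic training with an added supervised contrastive term. It is a first-order sketch: the inner-loop adaptation step of full MAML is omitted for brevity, and the loss weight, network sizes, and episode construction are illustrative assumptions rather than the paper's setup. The key property it illustrates is that no task index is ever given to the model; each episode is simply a batch drawn from one task.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z, y, tau=0.1):
    """Supervised contrastive term: pull same-class embeddings together and
    push different-class ones apart to improve feature compactness."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau                          # pairwise cosine similarities
    sim = sim - 1e9 * torch.eye(len(z))            # exclude self-pairs from softmax
    pos = (y[:, None] == y[None, :]).float().fill_diagonal_(0)
    log_prob = F.log_softmax(sim, dim=1)
    return -(pos * log_prob).sum(1).div(pos.sum(1).clamp(min=1)).mean()

def meta_train_step(encoder, head, optimizer, episodes, lam=0.5):
    """One episodic step: each episode is a batch from a single task, but no
    task index reaches the model; averaging per-episode losses pushes the
    encoder toward task-agnostic features."""
    optimizer.zero_grad()
    total = 0.0
    for x, y in episodes:
        z = encoder(x)
        total = total + F.cross_entropy(head(z), y) + lam * contrastive_loss(z, y)
    (total / len(episodes)).backward()
    optimizer.step()

# Toy usage with synthetic episodes standing in for different tasks.
encoder = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU())
head = torch.nn.Linear(32, 3)
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
episodes = [(torch.randn(16, 8), torch.randint(0, 3, (16,))) for _ in range(4)]
meta_train_step(encoder, head, opt, episodes)
```

The contrastive term tightens same-class clusters in the embedding space, which is what the abstract credits for the improved prediction boundary.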


Subjects
Algorithms; Learning; Benchmarking; Knowledge; Neural Networks, Computer