Results 1 - 4 of 4
1.
iScience ; 26(9): 107571, 2023 Sep 15.
Article in English | MEDLINE | ID: mdl-37664621

ABSTRACT

Affective neuroscience seeks to uncover the neural underpinnings of the emotions that humans experience. However, it remains unclear whether an affective space underlies the discrete emotion categories in the human brain, and how it relates to the hypothesized affective dimensions. To address this question, we developed a voxel-wise encoding model to investigate the cortical organization of human emotions. The results revealed that distributed emotion representations are constructed through a fundamental affective space. We further compared each dimension of this space to 14 hypothesized affective dimensions and found that many of them are captured by the fundamental affective space. Our results suggest that emotional experiences are represented by broadly overlapping, spatially distributed cortical patterns that form smooth gradients across large areas of the cortex. This finding reveals the specific structure of the affective space and its relationship to the hypothesized affective dimensions, while highlighting the distributed nature of emotional representations in the cortex.
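
To make the voxel-wise encoding idea in entry 1 concrete, here is a minimal sketch under assumptions not stated in the abstract: ridge regression maps per-stimulus emotion features to voxel responses, and PCA on the learned weight vectors exposes a low-dimensional affective space. All shapes, the feature set, and the regularization strength are illustrative, not the authors' settings.

```python
# Hypothetical sketch of a voxel-wise encoding analysis: ridge regression from
# per-stimulus emotion features to voxel responses, then PCA on the learned
# weights to expose a low-dimensional "affective space". Shapes and the
# regularization strength are illustrative, not taken from the paper.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.decomposition import PCA

n_stimuli, n_features, n_voxels = 600, 14, 5000  # assumed sizes
rng = np.random.default_rng(0)
X = rng.standard_normal((n_stimuli, n_features))  # emotion ratings per stimulus
Y = rng.standard_normal((n_stimuli, n_voxels))    # fMRI responses (stimuli x voxels)

# One ridge model fit jointly across voxels; each voxel gets its own weight vector.
enc = Ridge(alpha=10.0).fit(X, Y)
W = enc.coef_                                     # (n_voxels, n_features)

# PCA over voxel weight vectors: the principal axes span a shared affective
# space, and each voxel's projection locates it along cortical gradients.
pca = PCA(n_components=5).fit(W)
affective_axes = pca.components_                  # (5, n_features)
voxel_coords = pca.transform(W)                   # (n_voxels, 5)
print(pca.explained_variance_ratio_)
```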

2.
IEEE Trans Pattern Anal Mach Intell ; 45(9): 10760-10777, 2023 09.
Article in English | MEDLINE | ID: mdl-37030711

ABSTRACT

Decoding human visual neural representations is a challenging task of great scientific significance for revealing vision-processing mechanisms and developing brain-like intelligent machines. Most existing methods generalize poorly to novel categories that have no corresponding neural data for training. The two main reasons are 1) the under-exploitation of the multimodal semantic knowledge underlying the neural data and 2) the small amount of paired (stimulus-response) training data. To overcome these limitations, this paper presents a generic neural decoding method called BraVL that uses multimodal learning of brain-visual-linguistic features. We focus on modeling the relationships between brain, visual, and linguistic features via multimodal deep generative models. Specifically, we leverage the mixture-of-products-of-experts formulation to infer a latent code that enables coherent joint generation of all three modalities. To learn a more consistent joint representation and improve data efficiency when brain activity data are limited, we exploit both intra- and inter-modality mutual-information maximization regularization terms. In particular, our BraVL model can be trained under various semi-supervised scenarios to incorporate the visual and textual features obtained from extra categories. Finally, we construct three trimodal matching datasets, and extensive experiments lead to some interesting conclusions and cognitive insights: 1) decoding novel visual categories from human brain activity is practically possible with good accuracy; 2) decoding models that combine visual and linguistic features perform much better than those using either alone; and 3) visual perception may be accompanied by linguistic influences to represent the semantics of visual stimuli.


Subjects
Algorithms, Brain, Humans, Brain/diagnostic imaging, Learning, Semantics, Visual Perception
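
The central inference step in entry 2 is fusing per-modality Gaussian posteriors into one joint latent code. Below is a minimal sketch of the product-of-experts fusion that such models build on; BraVL's full mixture-of-products-of-experts additionally averages these fused posteriors over modality subsets. The tensor shapes and the sampling step are assumptions, not the authors' code.

```python
# Hypothetical sketch of product-of-experts (PoE) fusion of per-modality
# Gaussian posteriors q(z|x_m) = N(mu_m, var_m). A mixture-of-products-of-
# experts would further average PoE posteriors over modality subsets;
# shapes and names here are illustrative.
import torch

def poe(mus: torch.Tensor, logvars: torch.Tensor, eps: float = 1e-8):
    """Fuse experts along dim 0. mus, logvars: (n_experts, batch, latent_dim)."""
    precisions = 1.0 / (logvars.exp() + eps)              # expert precisions 1/var_m
    joint_var = 1.0 / precisions.sum(dim=0)               # joint precision is the sum
    joint_mu = joint_var * (mus * precisions).sum(dim=0)  # precision-weighted mean
    return joint_mu, joint_var.log()

# Brain, visual, and linguistic encoders each output a Gaussian posterior.
mus = torch.randn(3, 32, 64)       # (modalities, batch, latent_dim)
logvars = torch.randn(3, 32, 64)
mu, logvar = poe(mus, logvars)

# Reparameterized sample of the joint latent code used for generation.
z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)
```
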
3.
IEEE Trans Med Imaging ; 42(8): 2262-2273, 2023 08.
Article in English | MEDLINE | ID: mdl-37027550

ABSTRACT

Brain-signal-based emotion recognition has recently attracted considerable attention because of its great potential for human-computer interaction. To realize emotional interaction between intelligent systems and humans, researchers have worked to decode human emotions from brain imaging data. Most current efforts use emotion similarities (e.g., emotion graphs) or brain-region similarities (e.g., brain networks) to learn emotion and brain representations. However, the relationships between emotions and brain regions are not explicitly incorporated into the representation learning process, so the learned representations may not be informative enough to benefit specific tasks, e.g., emotion decoding. In this work, we propose graph-enhanced emotion neural decoding, which uses a bipartite graph structure to integrate the relationships between emotions and brain regions into the neural decoding process, thus helping to learn better representations. Theoretical analyses show that the proposed emotion-brain bipartite graph inherits and generalizes conventional emotion graphs and brain networks. Comprehensive experiments on visually evoked emotion datasets demonstrate the effectiveness and superiority of our approach.


Subjects
Brain, Emotions, Humans, Emotions/physiology, Brain/diagnostic imaging, Brain/physiology, Brain Mapping/methods
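
To illustrate how a bipartite graph can couple emotions and brain regions as entry 3 describes, here is a minimal sketch of one message-passing round over an assumed binary emotion-region incidence matrix; the normalization, sizes, and update rule are illustrative, not the paper's exact propagation scheme.

```python
# Hypothetical sketch of one propagation step on an emotion-brain bipartite
# graph: emotion embeddings aggregate from connected brain-region embeddings
# and vice versa. The incidence matrix, normalization, and sizes are assumed.
import torch

n_emotions, n_regions, d = 27, 90, 32
A = (torch.rand(n_emotions, n_regions) < 0.1).float()  # emotion-region edges

# Symmetric degree normalization, as in standard graph convolutions.
d_e = A.sum(1, keepdim=True).clamp(min=1)
d_r = A.sum(0, keepdim=True).clamp(min=1)
A_norm = A / (d_e.sqrt() * d_r.sqrt())

E = torch.randn(n_emotions, d)   # emotion embeddings
R = torch.randn(n_regions, d)    # brain-region embeddings
W_e = torch.nn.Linear(d, d)
W_r = torch.nn.Linear(d, d)

# One bipartite message-passing round: each side gathers from the other.
E_next = torch.relu(W_e(A_norm @ R))      # emotions gather from regions
R_next = torch.relu(W_r(A_norm.t() @ E))  # regions gather from emotions
```
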
4.
Article in English | MEDLINE | ID: mdl-36346867

ABSTRACT

Decoding emotional states from human brain activity plays an important role in brain-computer interfaces. Existing emotion-decoding methods still have two main limitations: first, they decode only a single, coarse-grained emotion category from a brain activity pattern, which is inconsistent with the complexity of human emotional expression; second, they ignore the discrepancy in emotion expression between the left and right hemispheres of the brain. In this article, we propose a novel multi-view multi-label hybrid model for fine-grained emotion decoding (up to 80 emotion categories) that learns expressive neural representations and predicts multiple emotional states simultaneously. Specifically, the generative component of our hybrid model is parameterized by a multi-view variational autoencoder, in which we regard the brain activity of the left and right hemispheres and their difference as three distinct views and use the product-of-experts mechanism in its inference network. The discriminative component of our hybrid model is implemented by a multi-label classification network with an asymmetric focal loss. For more accurate emotion decoding, we first adopt a label-aware module for emotion-specific neural representation learning and then model the dependencies among emotional states with a masked self-attention mechanism. Extensive experiments on two visually evoked emotion datasets show the superiority of our method.
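
The discriminative head in entry 4 pairs multi-label classification with an asymmetric focal loss. Below is a minimal sketch of one common asymmetric formulation (separate focusing exponents for positives and negatives, plus a probability shift that down-weights easy negatives); the hyperparameters are assumptions and may differ from the authors' variant.

```python
# Hypothetical sketch of an asymmetric focal loss for multi-label emotion
# decoding: negatives get a stronger focusing exponent and a probability
# shift so easy negatives contribute little. Hyperparameters are illustrative.
import torch

def asymmetric_focal_loss(logits, targets, gamma_pos=1.0, gamma_neg=4.0,
                          shift=0.05, eps=1e-8):
    """logits, targets: (batch, n_labels); targets in {0, 1}."""
    p = torch.sigmoid(logits)
    p_neg = (p - shift).clamp(min=0)  # probability shift applied to negatives
    loss_pos = targets * (1 - p) ** gamma_pos * torch.log(p.clamp(min=eps))
    loss_neg = (1 - targets) * p_neg ** gamma_neg * torch.log((1 - p_neg).clamp(min=eps))
    return -(loss_pos + loss_neg).mean()

logits = torch.randn(8, 80)                   # 80 fine-grained emotion labels
targets = (torch.rand(8, 80) < 0.05).float()  # sparse multi-label ground truth
print(asymmetric_focal_loss(logits, targets))
```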
