Results 1 - 3 of 3
1.
bioRxiv; 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38464188

ABSTRACT

In this study, we develop a novel recurrent neural network (RNN) model of prefrontal cortex that predicts sensory inputs, actions, and outcomes at the next time step. Synaptic weights in the model are adjusted to minimize sequence prediction error, adapting a deep learning rule similar to those of large language models. The model, called Sequence Prediction Error Learning (SPEL), is a simple RNN that predicts world state at the next time step, but that differs from standard RNNs by using its own prediction errors from the previous state predictions as inputs to the hidden units of the network. We show that the time course of sequence prediction errors generated by the model closely matched the activity time courses of populations of neurons in macaque prefrontal cortex. Hidden units in the model responded to combinations of task variables and exhibited sensitivity to changing stimulus probability in ways that closely resembled monkey prefrontal neurons. Moreover, the model generated prolonged response times to infrequent, unexpected events, as did monkeys. The results suggest that prefrontal cortex may generate internal models of the temporal structure of the world even during tasks that do not explicitly depend on temporal expectation, using a sequence prediction error minimization learning rule to do so. As such, the SPEL model provides a unified, general-purpose theoretical framework for modeling the lateral prefrontal cortex.
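The SPEL model's key departure from a standard RNN — feeding the previous step's prediction error back into the hidden units — can be sketched in a few lines of NumPy. This is a minimal illustration of the forward pass only; the dimensions, weight scales, and tanh nonlinearity are assumptions, not the paper's actual architecture, and no weight learning is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the abstract does not specify them.
n_in, n_hid = 4, 16

W_in  = rng.normal(0, 0.1, (n_hid, n_in))   # input -> hidden
W_err = rng.normal(0, 0.1, (n_hid, n_in))   # previous prediction error -> hidden (SPEL's twist)
W_rec = rng.normal(0, 0.1, (n_hid, n_hid))  # hidden -> hidden recurrence
W_out = rng.normal(0, 0.1, (n_in, n_hid))   # hidden -> predicted next world state

def run(xs):
    """Forward pass over a sequence; returns the squared prediction error per step."""
    h = np.zeros(n_hid)
    err = np.zeros(n_in)
    errors = []
    for t in range(len(xs) - 1):
        # Hidden state sees the current input AND the last step's prediction error.
        h = np.tanh(W_in @ xs[t] + W_err @ err + W_rec @ h)
        pred = W_out @ h               # prediction of the next world state
        err = xs[t + 1] - pred         # sequence prediction error
        errors.append(float(np.sum(err ** 2)))
    return errors

xs = rng.normal(size=(10, n_in))
errors = run(xs)
```

In the full model, these per-step errors would be the quantity whose minimization drives weight updates, and their time course is what the authors compare to macaque prefrontal population activity.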

2.
Article in English | MEDLINE | ID: mdl-38939123

ABSTRACT

HNN-core is a library for circuit and cellular level interpretation of non-invasive human magneto-/electro-encephalography (MEG/EEG) data. It is based on the Human Neocortical Neurosolver (HNN) software (Neymotin et al., 2020), a modeling tool designed to simulate multiscale neural mechanisms generating current dipoles in a localized patch of neocortex. HNN's foundation is a biophysically detailed neural network representing a canonical neocortical column containing populations of pyramidal and inhibitory neurons together with layer-specific exogenous synaptic drive (Figure 1 left). In addition to simulating network-level interactions, HNN produces the intracellular currents in the long apical dendrites of pyramidal cells across the cortical layers known to be responsible for macroscopic current dipole generation.
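The dipole-generation idea described above — layer-specific synaptic drives pushing current up or down pyramidal apical dendrites, summing into a macroscopic current dipole — can be illustrated with a toy signal. This sketch is NOT the HNN-core API; the Gaussian drive shapes, timings, and sign convention (proximal drive positive, distal negative) are illustrative assumptions only.

```python
import numpy as np

# Simulated time axis: 170 ms at 0.1 ms resolution (hypothetical trial length).
t = np.linspace(0.0, 170.0, 1701)

def drive(t, onset_ms, width_ms, sign):
    """Gaussian burst of layer-specific synaptic drive.

    sign encodes the direction of the intracellular current it evokes
    along the apical dendrite: +1 for proximal (upward), -1 for distal (downward).
    """
    return sign * np.exp(-0.5 * ((t - onset_ms) / width_ms) ** 2)

# A proximal drive followed by a distal drive, summed into one dipole waveform,
# mimicking the biphasic deflections seen in MEG/EEG-measured current dipoles.
dipole = drive(t, 30.0, 5.0, +1.0) + drive(t, 70.0, 8.0, -1.0)
```

The real HNN model computes this quantity from biophysically detailed intracellular currents in a full neocortical column; the toy version only conveys why opposing layer-specific drives produce deflections of opposite polarity.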

3.
PLoS Comput Biol; 16(11): e1008342, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33141824

ABSTRACT

The brain makes flexible and adaptive responses in a complicated and ever-changing environment for an organism's survival. To achieve this, the brain needs to understand the contingencies between its sensory inputs, actions, and rewards. This is analogous to the statistical inference that has been extensively studied in the natural language processing (NLP) field, where recent developments of recurrent neural networks have found many successes. We ask whether these neural networks, the gated recurrent unit (GRU) networks in particular, reflect how the brain solves the contingency problem. Therefore, we build a GRU network framework inspired by the statistical learning approach of NLP and test it with four exemplar behavior tasks previously used in empirical studies. The network models are trained to predict future events based on past events, both consisting of sensory, action, and reward events. We show the networks can successfully reproduce animal and human behavior. The networks generalize beyond their training, perform Bayesian inference in novel conditions, and adapt their choices when event contingencies vary. Importantly, units in the network encode task variables and exhibit activity patterns that match previous neurophysiology findings. Our results suggest that the neural network approach based on statistical sequence learning may reflect the brain's computational principle underlying flexible and adaptive behaviors and serve as a useful approach to understand the brain.
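The core of the framework above — a GRU stepping through a sequence of discrete events (sensory, action, reward) and outputting a probability distribution over the next event — can be sketched as a single forward pass. The event vocabulary size, hidden size, and one-hot encoding are assumptions for illustration; the paper's trained models and tasks are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical: 6 discrete event types (sensory/action/reward), one-hot coded.
n_events, n_hid = 6, 12

# Standard GRU parameters: update gate z, reset gate r, candidate state.
Wz, Uz = rng.normal(0, 0.1, (n_hid, n_events)), rng.normal(0, 0.1, (n_hid, n_hid))
Wr, Ur = rng.normal(0, 0.1, (n_hid, n_events)), rng.normal(0, 0.1, (n_hid, n_hid))
Wh, Uh = rng.normal(0, 0.1, (n_hid, n_events)), rng.normal(0, 0.1, (n_hid, n_hid))
Wo = rng.normal(0, 0.1, (n_events, n_hid))  # hidden -> next-event logits

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(h, x):
    """One GRU step: update the hidden state, then predict the next event."""
    z = sigmoid(Wz @ x + Uz @ h)               # update gate
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    h = (1 - z) * h + z * h_tilde
    logits = Wo @ h
    p = np.exp(logits - logits.max())
    return h, p / p.sum()                      # softmax over next events

h = np.zeros(n_hid)
for event in [0, 2, 5, 1]:                     # a toy event sequence
    h, p_next = step(h, np.eye(n_events)[event])
```

Training such a network to maximize the probability of the observed next event (cross-entropy on `p_next`) is what lets it internalize task contingencies; the hidden state `h` is where task variables end up being encoded.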


Subjects
Decision Making; Learning; Neural Networks, Computer; Animals; Bayes Theorem; Brain/physiology; Computational Biology; Computer Simulation; Humans; Models, Neurological; Models, Statistical; Natural Language Processing; Reinforcement, Psychology; Reward; Task Performance and Analysis