Results 1 - 20 of 26
1.
Nat Neurosci ; 27(7): 1349-1363, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38982201

ABSTRACT

Flexible computation is a hallmark of intelligent behavior. However, little is known about how neural networks contextually reconfigure for different computations. In the present work, we identified an algorithmic neural substrate for modular computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses revealed learned computational strategies mirroring the modular subtask structure of the training task set. Dynamical motifs, which are recurring patterns of neural activity that implement specific computations through dynamics, such as attractors, decision boundaries and rotations, were reused across tasks. For example, tasks requiring memory of a continuous circular variable repurposed the same ring attractor. We showed that dynamical motifs were implemented by clusters of units when the unit activation function was restricted to be positive. Cluster lesions caused modular performance deficits. Motifs were reconfigured for fast transfer learning after an initial phase of learning. This work establishes dynamical motifs as a fundamental unit of compositional computation, intermediate between neuron and network. As whole-brain studies simultaneously record activity from multiple specialized systems, the dynamical motif framework will guide questions about specialization and generalization.
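The repurposed ring attractor mentioned above can be illustrated with a textbook construction (cosine-tuned connectivity and threshold-linear units; the parameters here are illustrative and not taken from the paper's trained networks):

```python
import numpy as np

N = 128
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
J0, J1 = -1.0, 3.0  # broad inhibition plus local (cosine-tuned) excitation
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

def remember(cue_angle, steps=500, dt=0.1):
    """Cue a bump of activity at cue_angle, then run with no input."""
    r = np.maximum(0.0, np.cos(theta - cue_angle))   # transient cue
    for _ in range(steps):
        drive = np.maximum(0.0, W @ r + 0.1)         # rectified recurrent input
        r += dt * (-r + drive)
    return r

def decode(r):
    """Population-vector readout of the remembered angle."""
    return np.angle(np.sum(r * np.exp(1j * theta)))
```

Any angle can be cued and then held by the same circuit, which is the sense in which a single ring attractor can serve every task requiring memory of a continuous circular variable.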


Subjects
Neural Networks, Computer; Animals; Models, Neurological; Neurons/physiology; Learning/physiology; Algorithms; Nerve Net/physiology
2.
Nat Commun ; 14(1): 1597, 2023 03 22.
Article in English | MEDLINE | ID: mdl-36949048

ABSTRACT

Neuroscience has long been an essential driver of progress in artificial intelligence (AI). We propose that to accelerate progress in AI, we must invest in fundamental research in NeuroAI. A core component of this is the embodied Turing test, which challenges AI animal models to interact with the sensorimotor world at skill levels akin to their living counterparts. The embodied Turing test shifts the focus from those capabilities like game playing and language that are especially well-developed or uniquely human to those capabilities - inherited from over 500 million years of evolution - that are shared with all animals. Building models that can pass the embodied Turing test will provide a roadmap for the next generation of AI.


Subjects
Artificial Intelligence; Neurosciences; Animals; Humans
3.
Neuron ; 111(5): 631-649.e10, 2023 03 01.
Article in English | MEDLINE | ID: mdl-36630961

ABSTRACT

Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.
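The claim that reliable, continuously valued factors can coexist with seemingly stochastic spiking can be illustrated with a toy Poisson model (all numbers below are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_neurons, dt = 200, 100, 0.01              # 200 bins of 10 ms (illustrative)
t = np.linspace(0.0, 2.0, T)
factor = 1.0 + np.sin(2 * np.pi * t)           # one shared, continuous latent factor
loadings = rng.uniform(5.0, 15.0, n_neurons)   # per-neuron rate scaling (spikes/s)
rates = np.outer(factor, loadings)             # (time, neurons) firing rates
spikes = rng.poisson(rates * dt)               # discrete, variable spike counts

# A reliable, continuously valued factor re-emerges from the noisy population
recovered = spikes.sum(axis=1) / (loadings.sum() * dt)
corr = np.corrcoef(recovered, factor)[0, 1]
```

Each neuron on its own is noisy and discrete, yet the population-level estimate tracks the underlying factor closely, which is the intuition the paper's spiking-network framework builds on.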


Subjects
Neural Networks, Computer; Neurons; Action Potentials/physiology; Neurons/physiology; Nerve Net/physiology; Models, Neurological
4.
PLoS Comput Biol ; 19(1): e1010784, 2023 01.
Article in English | MEDLINE | ID: mdl-36607933

ABSTRACT

The relationship between neuronal activity and computations embodied by it remains an open question. We develop a novel methodology that condenses observed neuronal activity into a quantitatively accurate, simple, and interpretable model and validate it on diverse systems and scales from single neurons in C. elegans to fMRI in humans. The model treats neuronal activity as collections of interlocking 1-dimensional trajectories. Despite their simplicity, these models accurately predict future neuronal activity and future decisions made by human participants. Moreover, the structure formed by interconnected trajectories (a scaffold) is closely related to the computational strategy of the system. We use these scaffolds to compare the computational strategy of primates and artificial systems trained on the same task to identify specific conditions under which the artificial agent learns the same strategy as the primate. The computational strategy extracted using our methodology predicts specific errors on novel stimuli. These results show that our methodology is a powerful tool for studying the relationship between computation and neuronal activity across diverse systems.


Subjects
Caenorhabditis elegans; Models, Neurological; Animals; Humans; Caenorhabditis elegans/physiology; Neurons/physiology; Primates
5.
Cell ; 185(19): 3568-3587.e27, 2022 09 15.
Article in English | MEDLINE | ID: mdl-36113428

ABSTRACT

Computational analysis of cellular activity has developed largely independently of modern transcriptomic cell typology, but integrating these approaches may be essential for full insight into cellular-level mechanisms underlying brain function and dysfunction. Applying this approach to the habenula (a structure with diverse, intermingled molecular, anatomical, and computational features), we identified encoding of reward-predictive cues and reward outcomes in distinct genetically defined neural populations, including TH+ cells and Tac1+ cells. Data from genetically targeted recordings were used to train an optimized nonlinear dynamical systems model and revealed activity dynamics consistent with a line attractor. High-density, cell-type-specific electrophysiological recordings and optogenetic perturbation provided supporting evidence for this model. Reverse-engineering predicted how Tac1+ cells might integrate reward history, which was complemented by in vivo experimentation. This integrated approach describes a process by which data-driven computational models of population activity can generate and frame actionable hypotheses for cell-type-specific investigation in biological systems.


Subjects
Habenula; Reward; Population Dynamics
6.
Neural Comput ; 34(8): 1652-1675, 2022 07 14.
Article in English | MEDLINE | ID: mdl-35798321

ABSTRACT

The ventral visual stream enables humans and nonhuman primates to effortlessly recognize objects across a multitude of viewing conditions, yet the computational role of its abundant feedback connections remains unclear. Prior studies have augmented feedforward convolutional neural networks (CNNs) with recurrent connections to study their role in visual processing; however, these recurrent networks are often optimized directly on neural data, or the comparative metrics used are undefined for standard feedforward networks that lack such connections. In this work, we develop task-optimized convolutional recurrent (ConvRNN) network models that more closely mimic the timing and gross neuroanatomy of the ventral pathway. Properly chosen intermediate-depth ConvRNN circuit architectures, which incorporate mechanisms of feedforward bypassing and recurrent gating, can achieve high performance on a core recognition task, comparable to that of much deeper feedforward networks. We then develop methods that allow us to compare both CNNs and ConvRNNs to fine-grained measurements of primate categorization behavior and neural response trajectories across thousands of stimuli. We find that high-performing ConvRNNs provide a better match to these data than feedforward networks of any depth, predicting the precise timings at which each stimulus is behaviorally decoded from neural activation patterns. Moreover, these ConvRNN circuits consistently produce quantitatively accurate predictions of neural dynamics from V4 and IT across the entire stimulus presentation. In fact, we find that the highest-performing ConvRNNs, which best match neural and behavioral data, also achieve a strong Pareto trade-off between task performance and overall network size. Taken together, our results suggest that the functional purpose of recurrence in the ventral pathway is to fit a high-performing network in cortex, attaining computational power through temporal rather than spatial complexity.


Subjects
Task Performance and Analysis; Visual Perception; Animals; Humans; Macaca mulatta/physiology; Neural Networks, Computer; Pattern Recognition, Visual/physiology; Recognition, Psychology/physiology; Visual Pathways/physiology; Visual Perception/physiology
7.
Annu Rev Neurosci ; 43: 249-275, 2020 07 08.
Article in English | MEDLINE | ID: mdl-32640928

ABSTRACT

Significant experimental, computational, and theoretical work has identified rich structure within the coordinated activity of interconnected neural populations. An emerging challenge now is to uncover the nature of the associated computations, how they are implemented, and what role they play in driving behavior. We term this computation through neural population dynamics. If successful, this framework will reveal general motifs of neural population activity and quantitatively describe how neural population dynamics implement computations necessary for driving goal-directed behavior. Here, we start with a mathematical primer on dynamical systems theory and analytical tools necessary to apply this perspective to experimental data. Next, we highlight some recent discoveries resulting from successful application of dynamical systems. We focus on studies spanning motor control, timing, decision-making, and working memory. Finally, we briefly discuss promising recent lines of investigation and future directions for the computation through neural population dynamics framework.
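A core tool in the primer's kit, linearization around a fixed point, can be shown in miniature. For the rate equation dx/dt = -x + tanh(Wx), x* = 0 is always a fixed point and its Jacobian is J = -I + W (since tanh'(0) = 1); the sketch below checks stability for random W with gain g on either side of the classic g = 1 transition (a standard result in this literature, not specific to this review):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

def leading_eigenvalue(g):
    """Real part of the leading Jacobian eigenvalue at the fixed point x* = 0."""
    W = g * rng.standard_normal((n, n)) / np.sqrt(n)
    J = -np.eye(n) + W      # Jacobian of dx/dt = -x + tanh(W x) at x* = 0
    return np.max(np.linalg.eigvals(J).real)

quiet = leading_eigenvalue(0.5)   # weak coupling: negative, so x* = 0 is stable
rich = leading_eigenvalue(1.5)    # strong coupling: positive, so x* = 0 is unstable
```

The same recipe (find a fixed point, take the Jacobian, inspect its eigenvalues) is what turns recorded or simulated population trajectories into statements about attractors, decision boundaries, and rotations.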


Subjects
Brain/physiology; Computational Biology; Deep Learning; Nerve Net/physiology; Animals; Computational Biology/methods; Humans; Neurons/physiology; Population Dynamics
8.
Curr Opin Neurobiol ; 58: 229-238, 2019 10.
Article in English | MEDLINE | ID: mdl-31670073

ABSTRACT

With the increasing acquisition of large-scale neural recordings comes the challenge of inferring the computations they perform and understanding how these give rise to behavior. Here, we review emerging conceptual and technological advances that begin to address this challenge, garnering insights from both biological and artificial neural networks. We argue that neural data should be recorded during rich behavioral tasks, to model cognitive processes and estimate latent behavioral variables. Careful quantification of animal movements can also provide a more complete picture of how movements shape neural dynamics and reflect changes in brain state, such as arousal or stress. Artificial neural networks (ANNs) could serve as artificial model organisms to connect neural dynamics and rich behavioral data. ANNs have already begun to reveal how a wide range of different behaviors can be implemented, generating hypotheses about how observed neural activity might drive behavior and explaining diversity in behavioral strategies.


Subjects
Cognition; Neural Networks, Computer; Animals; Brain; Movement
9.
Adv Neural Inf Process Syst ; 2019: 15629-15641, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32782422

ABSTRACT

Task-based modeling with recurrent neural networks (RNNs) has emerged as a popular way to infer the computational function of different brain regions. These models are quantitatively assessed by comparing the low-dimensional neural representations of the model with the brain, for example using canonical correlation analysis (CCA). However, the nature of the detailed neurobiological inferences one can draw from such efforts remains elusive. For example, to what extent does training neural networks to solve common tasks uniquely determine the network dynamics, independent of modeling architectural choices? Or alternatively, are the learned dynamics highly sensitive to different model choices? Knowing the answer to these questions has strong implications for whether and how we should use task-based RNN modeling to understand brain dynamics. To address these foundational questions, we study populations of thousands of networks, with commonly used RNN architectures, trained to solve neuroscientifically motivated tasks and characterize their nonlinear dynamics. We find the geometry of the RNN representations can be highly sensitive to different network architectures, yielding a cautionary tale for measures of similarity that rely on representational geometry, such as CCA. Moreover, we find that while the geometry of neural dynamics can vary greatly across architectures, the underlying computational scaffold (the topological structure of fixed points, transitions between them, limit cycles, and linearized dynamics) often appears universal across all architectures.
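The similarity measure discussed, CCA, reduces to a few lines of linear algebra (a standard construction, not necessarily the authors' exact pipeline): whiten each dataset via SVD, then the singular values of the product of the whitened bases are the canonical correlations.

```python
import numpy as np

def canonical_correlations(X, Y):
    """X, Y: (timepoints x units) response matrices.
    Returns canonical correlations in [0, 1], largest first."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Ux = np.linalg.svd(X, full_matrices=False)[0]  # orthonormal basis for X
    Uy = np.linalg.svd(Y, full_matrices=False)[0]  # orthonormal basis for Y
    return np.linalg.svd(Ux.T @ Uy, compute_uv=False)
```

A value near 1 means the two populations share a perfectly aligned activity dimension; comparing two unrelated recordings yields much smaller values. In practice such comparisons are run on a truncated number of principal components, which is where the sensitivity to representational geometry enters.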

10.
Adv Neural Inf Process Syst ; 32: 15696-15705, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32782423

ABSTRACT

Recurrent neural networks (RNNs) are a widely used tool for modeling sequential data, yet they are often treated as inscrutable black boxes. Given a trained recurrent network, we would like to reverse engineer it: to obtain a quantitative, interpretable description of how it solves a particular task. Even for simple tasks, a detailed understanding of how recurrent networks work, or a prescription for how to develop such an understanding, remains elusive. In this work, we use tools from dynamical systems analysis to reverse engineer recurrent networks trained to perform sentiment classification, a foundational natural language processing task. Given a trained network, we find fixed points of the recurrent dynamics and linearize the nonlinear system around these fixed points. Despite their theoretical capacity to implement complex, high-dimensional computations, we find that trained networks converge to highly interpretable, low-dimensional representations. In particular, the topological structure of the fixed points and corresponding linearized dynamics reveal an approximate line attractor within the RNN, which we can use to quantitatively understand how the RNN solves the sentiment analysis task. Finally, we find this mechanism present across RNN architectures (including LSTMs, GRUs, and vanilla RNNs) trained on multiple datasets, suggesting that our findings are not unique to a particular architecture or dataset. Overall, these results demonstrate that surprisingly universal and human-interpretable computations can arise across a range of recurrent networks.
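The line-attractor mechanism can be caricatured in a few lines. This is hand-coded for illustration (the paper reverse engineers the mechanism from trained LSTMs, GRUs, and vanilla RNNs; the word valences below are invented):

```python
# Caricature of the line-attractor mechanism found in trained sentiment RNNs.
# The hidden state is reduced to a single position along a line; each token
# nudges it by a (hypothetical) valence, neutral words leave it unchanged,
# and the final position determines the predicted label.
VALENCE = {"great": 1.0, "good": 0.5, "fine": 0.2, "bad": -0.5, "awful": -1.0}

def classify(tokens):
    h = 0.0  # position along the (approximate) line attractor
    for tok in tokens:
        h += VALENCE.get(tok, 0.0)  # unknown/neutral tokens: no movement
    return "positive" if h > 0 else "negative"
```

Because movement along the line is neutrally stable, the accumulated valence is held indefinitely between informative tokens, which is exactly what an integrator built on a line attractor provides.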

11.
Nat Methods ; 15(10): 805-815, 2018 10.
Article in English | MEDLINE | ID: mdl-30224673

ABSTRACT

Neuroscience is experiencing a revolution in which simultaneous recording of thousands of neurons is revealing population dynamics that are not apparent from single-neuron responses. This structure is typically extracted from data averaged across many trials, but deeper understanding requires studying phenomena detected in single trials, which is challenging due to incomplete sampling of the neural population, trial-to-trial variability, and fluctuations in action potential timing. We introduce latent factor analysis via dynamical systems, a deep learning method to infer latent dynamics from single-trial neural spiking data. When applied to a variety of macaque and human motor cortical datasets, latent factor analysis via dynamical systems accurately predicts observed behavioral variables, extracts precise firing rate estimates of neural dynamics on single trials, infers perturbations to those dynamics that correlate with behavioral choices, and combines data from non-overlapping recording sessions spanning months to improve inference of underlying dynamics.


Subjects
Action Potentials; Algorithms; Models, Neurological; Motor Cortex/physiology; Neurons/physiology; Animals; Humans; Male; Middle Aged; Population Dynamics; Primates
12.
Neuron ; 98(5): 873-875, 2018 06 06.
Article in English | MEDLINE | ID: mdl-29879388

ABSTRACT

Population dynamics is emerging as a language for understanding high-dimensional neural recordings. Remington et al. (2018) explore how inputs to frontal cortex modulate neural dynamics in order to implement a computation of interest.


Subjects
Frontal Lobe; Neurons
14.
Nat Commun ; 7: 13749, 2016 12 13.
Article in English | MEDLINE | ID: mdl-27958268

ABSTRACT

A major hurdle to clinical translation of brain-machine interfaces (BMIs) is that current decoders, which are trained from a small quantity of recent data, become ineffective when neural recording conditions subsequently change. We tested whether a decoder could be made more robust to future neural variability by training it to handle a variety of recording conditions sampled from months of previously collected data as well as synthetic training data perturbations. We developed a new multiplicative recurrent neural network BMI decoder that successfully learned a large variety of neural-to-kinematic mappings and became more robust with larger training data sets. Here we demonstrate that when tested with a non-human primate preclinical BMI model, this decoder is robust under conditions that disabled a state-of-the-art Kalman filter-based decoder. These results validate a new BMI strategy in which accumulated data history is effectively harnessed, and may facilitate reliable BMI use by reducing decoder retraining downtime.


Subjects
Brain-Computer Interfaces; Nerve Net; Animals; Brain Mapping; Macaca mulatta; Male
15.
eNeuro ; 3(4)2016.
Article in English | MEDLINE | ID: mdl-27761519

ABSTRACT

Neural activity in monkey motor cortex (M1) and dorsal premotor cortex (PMd) can reflect a chosen movement well before that movement begins. The pattern of neural activity then changes profoundly just before movement onset. We considered the prediction, derived from formal considerations, that the transition from preparation to movement might be accompanied by a large overall change in the neural state that reflects when movement is made rather than which movement is made. Specifically, we examined "components" of the population response: time-varying patterns of activity from which each neuron's response is approximately composed. Amid the response complexity of individual M1 and PMd neurons, we identified robust response components that were "condition-invariant": their magnitude and time course were nearly identical regardless of reach direction or path. These condition-invariant response components occupied dimensions orthogonal to those occupied by the "tuned" response components. The largest condition-invariant component was much larger than any of the tuned components; i.e., it explained more of the structure in individual-neuron responses. This condition-invariant response component underwent a rapid change before movement onset. The timing of that change predicted most of the trial-by-trial variance in reaction time. Thus, although individual M1 and PMd neurons essentially always reflected which movement was made, the largest component of the population response reflected movement timing rather than movement type.


Subjects
Motor Activity/physiology; Motor Cortex/physiology; Neurons/physiology; Action Potentials; Animals; Arm/physiology; Electromyography; Macaca mulatta; Male; Microelectrodes; Muscle, Skeletal/physiology; Neuropsychological Tests; Reaction Time; Time Factors
16.
Nat Neurosci ; 18(7): 1025-33, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26075643

ABSTRACT

It remains an open question how neural responses in motor cortex relate to movement. We explored the hypothesis that motor cortex reflects dynamics appropriate for generating temporally patterned outgoing commands. To formalize this hypothesis, we trained recurrent neural networks to reproduce the muscle activity of reaching monkeys. Models had to infer dynamics that could transform simple inputs into temporally and spatially complex patterns of muscle activity. Analysis of trained models revealed that the natural dynamical solution was a low-dimensional oscillator that generated the necessary multiphasic commands. This solution closely resembled, at both the single-neuron and population levels, what was observed in neural recordings from the same monkeys. Notably, data and simulations agreed only when models were optimized to find simple solutions. An appealing interpretation is that the empirically observed dynamics of motor cortex may reflect a simple solution to the problem of generating temporally patterned descending commands.
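The "low-dimensional oscillator" solution can be sketched directly: a 2-D rotational state, set in motion from an initial (preparatory) condition and read out through different weight vectors, yields multiphasic, phase-shifted output patterns. All numbers below are illustrative, not fitted to data:

```python
import numpy as np

dt, omega = 0.01, 2 * np.pi                  # one rotation per simulated second
A = np.array([[0.0, -omega],
              [omega, 0.0]])                 # pure rotational dynamics
x = np.array([1.0, 0.0])                     # preparatory state sets the phase
W_out = np.array([[1.0, 0.0],                # three illustrative "muscles",
                  [0.5, 0.87],               # each reading out the same 2-D
                  [-0.5, 0.87]])             # oscillator at a different angle
trace = []
for _ in range(100):                         # simulate 1 s
    x = x + dt * (A @ x)                     # Euler step of dx/dt = A x
    trace.append(W_out @ x)
trace = np.array(trace)                      # (time, muscle): phase-shifted waves
```

A single rotation in state space thus generates several temporally patterned commands at once; different reaches correspond to different initial conditions of the same oscillator, not to different dynamics.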


Subjects
Motor Activity/physiology; Motor Cortex/physiology; Muscle, Skeletal/physiology; Nerve Net/physiology; Neural Networks, Computer; Neurons/physiology; Animals; Electromyography; Electrophysiological Phenomena; Haplorhini
17.
Curr Opin Neurobiol ; 25: 156-63, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24509098

ABSTRACT

Many recent studies of neurons recorded from cortex reveal complex temporal dynamics. How such dynamics embody the computations that ultimately lead to behavior remains a mystery. Approaching this issue requires developing plausible hypotheses couched in terms of neural dynamics. A tool ideally suited to aid in this question is the recurrent neural network (RNN). RNNs straddle the fields of nonlinear dynamical systems and machine learning and have recently seen great advances in both theory and application. I summarize recent theoretical and technological advances and highlight an example of how RNNs helped to explain perplexing high-dimensional neurophysiological data in the prefrontal cortex.


Subjects
Models, Neurological; Nerve Net/physiology; Neural Networks, Computer; Prefrontal Cortex/physiology; Animals; Humans
18.
Nature ; 503(7474): 78-84, 2013 Nov 07.
Article in English | MEDLINE | ID: mdl-24201281

ABSTRACT

Prefrontal cortex is thought to have a fundamental role in flexible, context-dependent behaviour, but the exact nature of the computations underlying this role remains largely unknown. In particular, individual prefrontal neurons often generate remarkably complex responses that defy deep understanding of their contribution to behaviour. Here we study prefrontal cortex activity in macaque monkeys trained to flexibly select and integrate noisy sensory inputs towards a choice. We find that the observed complexity and functional roles of single neurons are readily understood in the framework of a dynamical process unfolding at the level of the population. The population dynamics can be reproduced by a trained recurrent neural network, which suggests a previously unknown mechanism for selection and integration of task-relevant inputs. This mechanism indicates that selection and integration are two aspects of a single dynamical process unfolding within the same prefrontal circuits, and potentially provides a novel, general framework for understanding context-dependent computations.
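The selection problem is easiest to appreciate against the naive alternative of gating at the inputs, sketched below (a hand-coded strawman; the abstract's point is that selection and integration instead unfold within a single recurrent dynamical process, without requiring the inputs themselves to be gated):

```python
import numpy as np

def decide(motion_evidence, color_evidence, context):
    """Strawman: integrate only the contextually relevant evidence stream."""
    # a context-dependent "selection vector" gates which noisy stream
    # reaches a 1-D decision variable (an integrator / line attractor)
    select = np.array([1.0, 0.0]) if context == "motion" else np.array([0.0, 1.0])
    x = 0.0
    for m, c in zip(motion_evidence, color_evidence):
        x += select @ np.array([m, c])   # input gating, made explicit
    return "right" if x > 0 else "left"
```

In the trained network and the recorded populations, both input streams remain present; the recurrent dynamics accomplish the equivalent of this selection internally, which is the previously unknown mechanism the abstract refers to.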


Subjects
Macaca mulatta/physiology; Models, Neurological; Prefrontal Cortex/physiology; Animals; Choice Behavior/physiology; Discrimination Learning; Male; Nerve Net/cytology; Nerve Net/physiology; Neurons/physiology; Prefrontal Cortex/cytology
19.
Prog Neurobiol ; 103: 214-22, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23438479

ABSTRACT

Working memory is a crucial component of most cognitive tasks. Its neuronal mechanisms are still unclear despite intensive experimental and theoretical explorations. Most theoretical models of working memory assume both time-invariant neural representations and precise connectivity schemes based on the tuning properties of network neurons. A different, more recent class of models assumes randomly connected neurons that have no tuning to any particular task, and bases task performance purely on adjustment of network readout. Intermediate between these schemes are networks that start out random but are trained by a learning scheme. Experimental studies of a delayed vibrotactile discrimination task indicate that some of the neurons in prefrontal cortex are persistently tuned to the frequency of a remembered stimulus, but the majority exhibit more complex relationships to the stimulus that vary considerably across time. We compare three models, ranging from a highly organized line attractor model to a randomly connected network with chaotic activity, with data recorded during this task. The random network does a surprisingly good job of both performing the task and matching certain aspects of the data. The intermediate model, in which an initially random network is partially trained to perform the working memory task by tuning its recurrent and readout connections, provides a better description, although none of the models matches all features of the data. Our results suggest that prefrontal networks may begin in a random state relative to the task and initially rely on modified readout for task performance. With further training, however, more tuned neurons with less time-varying responses should emerge as the networks become more structured.
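The "random network with adjusted readout" scheme is the reservoir-computing recipe: fix a random recurrent network and fit only a linear readout by ridge regression. The sketch below uses a stand-in memory task (recall the input from five steps earlier) rather than the paper's vibrotactile discrimination task, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, delay = 200, 2000, 5

# Fixed random recurrent network; its weights are never trained
W = rng.standard_normal((N, N)) / np.sqrt(N)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius to 0.9
w_in = 0.1 * rng.standard_normal(N)

u = rng.standard_normal(T)                        # white-noise input stream
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])              # untrained nonlinear dynamics
    states[t] = x

# Adjust ONLY the readout (ridge regression): recall the input 'delay' steps back
X_tr, y_tr = states[delay:1500], u[:1500 - delay]
w_out = np.linalg.solve(X_tr.T @ X_tr + 1e-4 * np.eye(N), X_tr.T @ y_tr)

# Held-out evaluation
pred = states[1500:] @ w_out
target = u[1500 - delay:T - delay]
corr = np.corrcoef(pred, target)[0, 1]
```

That an untrained random network can perform a short memory task through its readout alone is the surprise the abstract notes; the paper's intermediate model additionally tunes the recurrent connections, moving the network toward more structured, less time-varying responses.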


Subjects
Brain/physiology; Discrimination, Psychological/physiology; Memory, Short-Term/physiology; Models, Neurological; Animals; Humans
20.
Neural Comput ; 25(3): 626-49, 2013 Mar.
Article in English | MEDLINE | ID: mdl-23272922

ABSTRACT

Recurrent neural networks (RNNs) are useful tools for learning nonlinear relationships between time-varying inputs and outputs with complex temporal dependencies. Recently developed algorithms have been successful at training RNNs to perform a wide variety of tasks, but the resulting networks have been treated as black boxes: their mechanism of operation remains unknown. Here we explore the hypothesis that fixed points, both stable and unstable, and the linearized dynamics around them, can reveal crucial aspects of how RNNs implement their computations. Further, we explore the utility of linearization in areas of phase space that are not true fixed points but merely points of very slow movement. We present a simple optimization technique that is applied to trained RNNs to find the fixed and slow points of their dynamics. Linearization around these slow regions can be used to explore, or reverse-engineer, the behavior of the RNN. We describe the technique, illustrate it using simple examples, and finally showcase it on three high-dimensional RNN examples: a 3-bit flip-flop device, an input-dependent sine wave generator, and a two-point moving average. In all cases, the mechanisms of trained networks could be inferred from the sets of fixed and slow points and the linearized dynamics around them.
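The optimization technique can be sketched for a tiny network: gradient descent on q(x) = 0.5 * |F(x) - x|^2, followed by linearization at the solution. The weights here are hand-picked so the fixed points are known, purely for illustration:

```python
import numpy as np

W = np.array([[2.0, 0.0],
              [0.0, 2.0]])    # hand-picked weights with known fixed points

def F(x):
    return np.tanh(W @ x)     # one step of the "trained" RNN

def find_fixed_point(x0, lr=1.0, iters=500):
    """Gradient descent on q(x) = 0.5 * |F(x) - x|^2."""
    x = x0.copy()
    for _ in range(iters):
        r = F(x) - x                               # residual toward a fixed point
        Jf = (1.0 - F(x) ** 2)[:, None] * W        # Jacobian of F at x
        x = x - lr * (Jf - np.eye(len(x))).T @ r   # exact gradient of q
    return x

x_star = find_fixed_point(np.array([1.0, 1.0]))
J = (1.0 - F(x_star) ** 2)[:, None] * W            # linearized dynamics at x*
stable = bool(np.all(np.abs(np.linalg.eigvals(J)) < 1.0))
```

Points where q is small but nonzero are the "slow points" the paper describes; linearizing there works the same way, with the eigenvalues of J classifying the local dynamics (here all magnitudes are below 1, so x* is attracting).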


Subjects
Artificial Intelligence; Neural Networks, Computer