Results 1 - 8 of 8
1.
Sci Robot; 7(69): eabo0235, 2022 Aug 31.
Article in English | MEDLINE | ID: mdl-36044556

ABSTRACT

Learning to combine control at the level of joint torques with longer-term goal-directed behavior is a long-standing challenge for physically embodied artificial agents. Intelligent behavior in the physical world unfolds across multiple spatial and temporal scales: Although movements are ultimately executed at the level of instantaneous muscle tensions or joint torques, they must be selected to serve goals that are defined on much longer time scales and that often involve complex interactions with the environment and other agents. Recent research has demonstrated the potential of learning-based approaches applied to the respective problems of complex movement, long-term planning, and multiagent coordination. However, their integration traditionally required the design and optimization of independent subsystems and remains challenging. In this work, we tackled the integration of motor control and long-horizon decision-making in the context of simulated humanoid football, which requires agile motor control and multiagent coordination. We optimized teams of agents to play simulated football via reinforcement learning, constraining the solution space to that of plausible movements learned using human motion capture data. They were trained to maximize several environment rewards and to imitate pretrained football-specific skills if doing so led to improved performance. The result is a team of coordinated humanoid football players that exhibit complex behavior at different scales, quantified by a range of analyses and statistics, including those used in real-world sport analytics. Our work constitutes a complete demonstration of learned integrated decision-making at multiple scales in a multiagent setting.


Subject(s)
Football; Soccer; Humans; Learning; Movement; Reinforcement, Psychology; Soccer/physiology
2.
Nat Neurosci; 22(7): 1159-1167, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31182866

ABSTRACT

Recently it has been proposed that information in working memory (WM) may not always be stored in persistent neuronal activity but can be maintained in 'activity-silent' hidden states, such as synaptic efficacies endowed with short-term synaptic plasticity. To test this idea computationally, we investigated recurrent neural network models trained to perform several WM-dependent tasks, in which WM representation emerges from learning and is not a priori assumed to depend on self-sustained persistent activity. We found that short-term synaptic plasticity can support the short-term maintenance of information, provided that the memory delay period is sufficiently short. However, in tasks that require actively manipulating information, persistent activity naturally emerges from learning, and the amount of persistent activity scales with the degree of manipulation required. These results offer insight into the current debate on WM encoding and suggest that persistent activity can vary markedly between short-term memory tasks with different cognitive demands.
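The 'activity-silent' mechanism described above can be illustrated with standard Tsodyks-Markram short-term facilitation dynamics at a single synapse; the time constants, facilitation parameter, and stimulus protocol below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Tsodyks-Markram short-term plasticity: facilitation variable u and
# depression variable x at a single synapse. A brief presynaptic burst
# drives u up; after spiking stops, u decays slowly (tau_f), so the
# stimulus is retained in synaptic state rather than ongoing activity.
tau_f, tau_d, U, dt = 1.5, 0.2, 0.3, 0.001   # seconds; illustrative values
T = int(3.0 / dt)
u, x = U, 1.0
u_trace = np.zeros(T)
for t in range(T):
    # relaxation between spikes: u decays to U, x recovers to 1
    u += dt * (U - u) / tau_f
    x += dt * (1.0 - x) / tau_d
    in_burst = 0.5 <= t * dt < 0.7            # 200 ms presynaptic burst
    if in_burst and t % int(0.01 / dt) == 0:  # 100 Hz spiking
        u += U * (1.0 - u)                    # facilitation jump
        x -= u * x                            # vesicle depletion
    u_trace[t] = u

# a full second after the last spike, u is still well above baseline U:
# the stimulus is held in an "activity-silent" synaptic state
silent_readout = u_trace[int(1.7 / dt)]
```

The slow decay of u is what makes the state readable only for short delays, consistent with the abstract's finding that silent maintenance works when the delay period is brief.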


Subject(s)
Computer Simulation; Memory, Short-Term/physiology; Neural Networks, Computer; Learning/physiology; Neuronal Plasticity/physiology
3.
Nat Neurosci; 22(2): 297-306, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30643294

ABSTRACT

The brain has the ability to flexibly perform many tasks, but the underlying mechanism cannot be elucidated in traditional experimental and modeling studies designed for one task at a time. Here, we trained single network models to perform 20 cognitive tasks that depend on working memory, decision making, categorization, and inhibitory control. We found that after training, recurrent units can develop into clusters that are functionally specialized for different cognitive processes, and we introduce a simple yet effective measure to quantify relationships between single-unit neural representations of tasks. Learning often gives rise to compositionality of task representations, a critical feature for cognitive flexibility, whereby one task can be performed by recombining instructions for other tasks. Finally, networks developed mixed task selectivity similar to recorded prefrontal neurons after learning multiple tasks sequentially with a continual-learning technique. This work provides a computational platform to investigate neural representations of many cognitive tasks.
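The per-unit measure mentioned above is based on task variance; a minimal sketch on synthetic data follows, where the array shapes and random values are stand-ins for recorded unit responses, and the normalization step is one plausible reading of the measure.

```python
import numpy as np

# Task-variance measure: for each unit, compute its response variance
# within each task, then normalize across tasks so every unit has a
# task-preference profile summing to 1. Units with similar profiles can
# then be grouped into functionally specialized clusters.
rng = np.random.default_rng(0)
n_tasks, n_cond, n_units = 20, 30, 50              # synthetic stand-ins
activity = rng.random((n_tasks, n_cond, n_units))  # fake recorded rates

task_var = activity.var(axis=1)                    # (n_tasks, n_units)
profile = task_var / task_var.sum(axis=0)          # normalized per unit

# similarity between two units' task representations: correlation of
# their normalized task-variance profiles
sim_01 = np.corrcoef(profile[:, 0], profile[:, 1])[0, 1]
```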


Subject(s)
Brain/physiology; Cognition/physiology; Learning/physiology; Models, Neurological; Neural Networks, Computer; Computer Simulation; Decision Making/physiology; Humans; Memory, Short-Term/physiology; Neurons/physiology
4.
Elife; 6, 2017 Jan 13.
Article in English | MEDLINE | ID: mdl-28084991

ABSTRACT

Trained neural network models, which exhibit features of neural activity recorded from behaving animals, may provide insights into the circuit mechanisms of cognitive functions through systematic analysis of network activity and connectivity. However, in contrast to the graded error signals commonly used to train networks through supervised learning, animals learn from reward feedback on definite actions through reinforcement learning. Reward maximization is particularly relevant when optimal behavior depends on an animal's internal judgment of confidence or subjective preferences. Here, we implement reward-based training of recurrent neural networks in which a value network guides learning by using the activity of the decision network to predict future reward. We show that such models capture behavioral and electrophysiological findings from well-known experimental paradigms. Our work provides a unified framework for investigating diverse cognitive and value-based computations, and predicts a role for value representation that is essential for learning, but not executing, a task.
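How a value prediction can guide reward-based learning is easiest to see in a stripped-down actor-critic update. The two-armed bandit, learning rates, and tabular "networks" below are toy assumptions standing in for the paper's recurrent decision and value networks; only the structure of the update is the point.

```python
import numpy as np

# Minimal actor-critic on a two-armed bandit: a softmax policy ("decision
# network", here just logits) is updated with a policy gradient whose
# reward signal is centered by a learned value prediction, mirroring how
# a value network can guide learning without being needed at test time.
rng = np.random.default_rng(4)
p_reward = np.array([0.2, 0.8])   # hypothetical arm reward probabilities
logits = np.zeros(2)              # decision "network"
value = 0.0                       # value "network": running reward estimate
alpha_pi, alpha_v = 0.2, 0.1

for _ in range(2000):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    a = rng.choice(2, p=probs)
    r = float(rng.random() < p_reward[a])
    advantage = r - value         # value prediction removes reward baseline
    grad = -probs
    grad[a] += 1.0                # d log pi(a) / d logits
    logits += alpha_pi * advantage * grad
    value += alpha_v * (r - value)

# after training, the policy prefers the more rewarding arm; the value
# estimate was essential for learning but plays no role in acting
```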


Subject(s)
Behavior, Animal; Conditioning, Psychological; Learning; Reward; Animals; Choice Behavior; Cognition; Decision Making; Models, Neurological
5.
PLoS Comput Biol; 12(2): e1004792, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26928718

ABSTRACT

The ability to simultaneously record from large numbers of neurons in behaving animals has ushered in a new era for the study of the neural circuit mechanisms underlying cognitive functions. One promising approach to uncovering the dynamical and computational principles governing population responses is to analyze model recurrent neural networks (RNNs) that have been optimized to perform the same tasks as behaving animals. Because the optimization of network parameters specifies the desired output but not the manner in which to achieve this output, "trained" networks serve as a source of mechanistic hypotheses and a testing ground for data analyses that link neural computation to behavior. Complete access to the activity and connectivity of the circuit, and the ability to manipulate them arbitrarily, make trained networks a convenient proxy for biological circuits and a valuable platform for theoretical investigation. However, existing RNNs lack basic biological features such as the distinction between excitatory and inhibitory units (Dale's principle), which are essential if RNNs are to provide insights into the operation of biological circuits. Moreover, trained networks can achieve the same behavioral performance but differ substantially in their structure and dynamics, highlighting the need for a simple and flexible framework for the exploratory training of RNNs. Here, we describe a framework for gradient descent-based training of excitatory-inhibitory RNNs that can incorporate a variety of biological knowledge. We provide an implementation based on the machine learning library Theano, whose automatic differentiation capabilities facilitate modifications and extensions. We validate this framework by applying it to well-known experimental paradigms such as perceptual decision-making, context-dependent integration, multisensory integration, parametric working memory, and motor sequence generation. Our results demonstrate the wide range of neural activity patterns and behavior that can be modeled, and suggest a unified setting in which diverse cognitive computations and mechanisms can be studied.
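One common way to make Dale's principle compatible with gradient descent, in the spirit described above, is to optimize an unconstrained matrix and map it through a rectification and a fixed sign matrix. This is a sketch of that construction; the paper's exact parametrization and the 80% excitatory fraction are assumptions.

```python
import numpy as np

# Dale's principle as a trainable constraint: gradient descent acts on an
# unconstrained matrix W_free, while the effective recurrent weights are
# W = relu(W_free) @ D for a fixed diagonal sign matrix D. Every unit's
# outgoing weights then share a single sign throughout training.
rng = np.random.default_rng(1)
n, frac_exc = 100, 0.8
n_exc = int(frac_exc * n)
signs = np.where(np.arange(n) < n_exc, 1.0, -1.0)
D = np.diag(signs)

W_free = rng.normal(size=(n, n))
W = np.maximum(W_free, 0.0) @ D   # column j carries unit j's sign

exc_ok = bool(np.all(W[:, :n_exc] >= 0))   # excitatory columns >= 0
inh_ok = bool(np.all(W[:, n_exc:] <= 0))   # inhibitory columns <= 0
```

Because the rectification is differentiable almost everywhere, standard automatic differentiation (as in Theano) handles this parametrization without modification.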


Subject(s)
Cognition/physiology; Computational Biology/methods; Computer Simulation; Models, Neurological; Neurons/physiology; Algorithms; Animals; Haplorhini
6.
Proc Natl Acad Sci U S A; 111(46): 16580-5, 2014 Nov 18.
Article in English | MEDLINE | ID: mdl-25368200

ABSTRACT

Recent anatomical tracing studies have yielded substantial amounts of data on the areal connectivity underlying distributed processing in cortex, yet the fundamental principles that govern the large-scale organization of cortex remain unknown. Here we show that functional similarity between areas as defined by the pattern of shared inputs or outputs is a key to understanding the areal network of cortex. In particular, we report a systematic relation in the monkey, human, and mouse cortex between the occurrence of connections from one area to another and their similarity distance. This characteristic relation is rooted in the wiring distance dependence of connections in the brain. We introduce a weighted, spatially embedded random network model that robustly gives rise to this structure, as well as many other spatial and topological properties observed in cortex. These include features that were not accounted for in any previous model, such as the wide range of interareal connection weights. Connections in the model emerge from an underlying distribution of spatially embedded axons, thereby integrating the two scales of cortical connectivity--individual axons and interareal pathways--into a common geometric framework. These results provide insights into the origin of large-scale connectivity in cortex and have important implications for theories of cortical organization.
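A minimal version of a spatially embedded random network with distance-dependent connection probability can be sketched as follows; the exponential decay rate, the 2D embedding, and the box size are illustrative assumptions, not the values fitted to tracing data in the paper.

```python
import numpy as np

# Spatially embedded random network: place "areas" at random positions
# and connect each directed pair with probability exp(-lam * distance),
# so connection frequency falls off with wiring distance.
rng = np.random.default_rng(2)
n, lam = 40, 0.5
pos = rng.random((n, 2)) * 10.0
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
p = np.exp(-lam * dist)
np.fill_diagonal(p, 0.0)                      # no self-connections
A = (rng.random((n, n)) < p).astype(int)      # directed adjacency

# nearby pairs are connected far more often than distant ones
off = ~np.eye(n, dtype=bool)
med = np.median(dist[off])
near_rate = A[off & (dist < med)].mean()
far_rate = A[off & (dist >= med)].mean()
```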


Subject(s)
Cerebral Cortex/anatomy & histology; Connectome; Macaca fascicularis/anatomy & histology; Macaca mulatta/anatomy & histology; Mice/anatomy & histology; Models, Neurological; Nerve Net/anatomy & histology; Algorithms; Animals; Axons/ultrastructure; Brain Mapping/methods; Cerebral Cortex/physiology; Humans; Likelihood Functions; Organ Size; Species Specificity
7.
Article in English | MEDLINE | ID: mdl-25615142

ABSTRACT

Small-world networks -- complex networks characterized by a combination of high clustering and short path lengths -- are widely studied using the paradigmatic model of Watts and Strogatz (WS). Although the WS model is already quite minimal and intuitive, we describe an alternative formulation of the WS model in terms of a distance-dependent probability of connection that further simplifies, both practically and theoretically, the generation of directed and undirected WS-type small-world networks. In addition to highlighting an essential feature of the WS model that has previously been overlooked, namely the equivalence to a simple distance-dependent model, this alternative formulation makes it possible to derive exact expressions for quantities such as the degree and motif distributions and global clustering coefficient for both directed and undirected networks in terms of model parameters.
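The distance-dependent reading of the WS model can be sketched as follows: every pair of nodes is connected independently with a probability that depends only on their ring distance. The constants below are chosen so the expected degree matches a 2K-regular ring lattice with rewiring rate p; they are a plausible reading of the construction, not the paper's exact expressions.

```python
import numpy as np

# WS-type small-world network from a distance-dependent probability:
# pairs within ring distance K connect with probability 1 - p, and all
# longer-range pairs share a small uniform probability chosen so the
# expected degree stays 2K.
rng = np.random.default_rng(3)
n, K, p = 200, 4, 0.1

idx = np.arange(n)
d = np.abs(idx[:, None] - idx[None, :])
d = np.minimum(d, n - d)                       # ring (lattice) distance

prob = np.where((d > 0) & (d <= K), 1.0 - p, 0.0)
prob[d > K] = 2.0 * p * K / (n - 1 - 2 * K)    # spread "rewired" edges

upper = rng.random((n, n)) < prob              # one draw per pair
A = np.triu(upper, 1)
A = (A | A.T).astype(int)                      # undirected adjacency

mean_degree = A.sum() / n                      # concentrates near 2K = 8
```

Because every edge is an independent Bernoulli draw with a distance-determined probability, degree and motif statistics follow directly from the two probability values, which is what makes the exact expressions mentioned in the abstract tractable.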


Subject(s)
Models, Theoretical; Algorithms; Cluster Analysis
8.
Phys Rev Lett; 108(11): 116401, 2012 Mar 16.
Article in English | MEDLINE | ID: mdl-22540493

ABSTRACT

We show that the concept of bipartite fluctuations F provides a very efficient tool to detect quantum phase transitions in strongly correlated systems. Using state-of-the-art numerical techniques complemented with analytical arguments, we investigate paradigmatic examples for both quantum spins and bosons. As compared to the von Neumann entanglement entropy, we observe that F allows us to find quantum critical points with much better accuracy in one dimension. We further demonstrate that F can be successfully applied to the detection of quantum criticality in higher dimensions with no prior knowledge of the universality class of the transition. Promising approaches to experimentally access fluctuations are discussed for quantum antiferromagnets and cold gases.
