Results 1 - 20 of 103
1.
Nat Hum Behav ; 8(6): 1035-1043, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38907029

ABSTRACT

Board, card or video games have been played by virtually every individual in the world. Games are popular because they are intuitive and fun. These distinctive qualities of games also make them ideal for studying the mind. By being intuitive, games provide a unique vantage point for understanding the inductive biases that support behaviour in settings more complex and ecologically valid than those of traditional laboratory experiments. By being fun, games allow researchers to study new questions in cognition, such as the meaning of 'play' and intrinsic motivation, while also supporting more extensive and diverse data collection by attracting many more participants. We describe the advantages and drawbacks of using games relative to standard laboratory-based experiments and lay out a set of recommendations on how to gain the most from using games to study cognition. We hope this Perspective will lead to wider use of games as experimental paradigms, elevating the ecological validity, scale and robustness of research on the mind.


Subject(s)
Cognition , Video Games , Humans , Video Games/psychology , Games, Experimental , Motivation
2.
Nature ; 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38862024

ABSTRACT

Animals have exquisite control of their bodies, allowing them to perform a diverse range of behaviors. How such control is implemented by the brain, however, remains unclear. Advancing our understanding requires models that can relate principles of control to the structure of neural activity in behaving animals. To facilitate this, we built a 'virtual rodent', in which an artificial neural network actuates a biomechanically realistic model of the rat [1] in a physics simulator [2]. We used deep reinforcement learning [3-5] to train the virtual agent to imitate the behavior of freely moving rats, thus allowing us to compare neural activity recorded in real rats to the network activity of a virtual rodent mimicking their behavior. We found that neural activity in the sensorimotor striatum and motor cortex was better predicted by the virtual rodent's network activity than by any features of the real rat's movements, consistent with both regions implementing inverse dynamics [6]. Furthermore, the network's latent variability predicted the structure of neural variability across behaviors and afforded robustness in a way consistent with the minimal intervention principle of optimal feedback control [7]. These results demonstrate how physical simulation of biomechanically realistic virtual animals can help interpret the structure of neural activity across behavior and relate it to theoretical principles of motor control.
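The paper's central comparison, asking whether recorded firing is better predicted by the virtual rodent's network activity than by movement features, is in essence a cross-validated encoding-model analysis. A minimal sketch of that style of analysis is below; the data, dimensions and regression choice are placeholder assumptions, not the authors' pipeline.

```python
# Hedged sketch of an encoding-model comparison: regress a recorded unit's
# firing rate on (a) the virtual rodent's network activity and (b) kinematic
# features, then compare held-out predictivity. All inputs are placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
T = 5000                                   # timepoints (hypothetical)
net_activity = rng.normal(size=(T, 128))   # virtual rodent network units
movement = rng.normal(size=(T, 20))        # real rat's movement features
firing = rng.normal(size=T)                # one recorded neuron's firing rate

def predictivity(X, y):
    """Mean cross-validated R^2 of a ridge encoding model."""
    return cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2").mean()

print(f"network R^2:  {predictivity(net_activity, firing):.3f}")
print(f"movement R^2: {predictivity(movement, firing):.3f}")
```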

3.
Behav Brain Sci ; : 1-38, 2023 Nov 23.
Article in English | MEDLINE | ID: mdl-37994495

ABSTRACT

Psychologists and neuroscientists extensively rely on computational models for studying and analyzing the human mind. Traditionally, such computational models have been hand-designed by expert researchers. Two prominent examples are cognitive architectures and Bayesian models of cognition. While the former require the specification of a fixed set of computational structures and a definition of how these structures interact with each other, the latter necessitate commitment to a particular prior and likelihood function which, in combination with Bayes' rule, determine the model's behavior. In recent years, a new framework has established itself as a promising tool for building models of human cognition: the framework of meta-learning. In contrast to the previously mentioned model classes, meta-learned models acquire their inductive biases from experience, i.e., by repeatedly interacting with an environment. However, a coherent research program around meta-learned models of cognition is still missing. The purpose of this article is to synthesize previous work in this field and establish such a research program. We accomplish this by pointing out that meta-learning can be used to construct Bayes-optimal learning algorithms, allowing us to draw strong connections to the rational analysis of cognition. We then discuss several advantages of the meta-learning framework over traditional methods and reexamine prior work in the context of these new insights.
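The connection between meta-learning and Bayes-optimality asserted here can be checked in a toy setting: the prediction that minimizes expected squared error across tasks sampled from a prior is the Bayesian posterior mean, which is exactly the target a meta-learner trained on that task distribution converges toward. A Monte Carlo illustration (ours, not the article's):

```python
# Toy demonstration that risk minimization over a task distribution recovers
# the Bayes-optimal answer. Tasks are Bernoulli coins with theta ~ Uniform(0,1);
# we observe k successes in n flips and predict theta.
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 3
thetas = rng.uniform(size=2_000_000)   # tasks drawn from the prior
counts = rng.binomial(n, thetas)       # simulated experience on each task
consistent = thetas[counts == k]       # tasks that produced the observed data

meta_estimate = consistent.mean()      # squared-error minimizer over tasks
bayes_estimate = (k + 1) / (n + 2)     # posterior mean under a uniform prior
print(f"meta-learned: {meta_estimate:.4f}  Bayes: {bayes_estimate:.4f}")
```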

4.
iScience ; 26(11): 108047, 2023 Nov 17.
Article in English | MEDLINE | ID: mdl-37867949

ABSTRACT

The ability to perform motor actions depends, in part, on the brain's initial state. We hypothesized that initial-state dependence is a more general principle that also applies to cognitive control. To test this idea, we examined human single units recorded from the dorsolateral prefrontal cortex (dlPFC) and dorsal anterior cingulate cortex (dACC) during a task that interleaves motor and perceptual conflict trials, the multi-source interference task (MSIT). In both brain regions, variability in pre-trial firing rates predicted subsequent reaction time (RT) on conflict trials. In dlPFC, ensemble firing-rate patterns suggested the existence of domain-specific initial states, while in dACC, firing patterns were more consistent with a domain-general initial state. The deployment of shared and independent factors that we observe for conflict resolution may allow for flexible and fast responses mediated by cognitive initial states. These results also support hypotheses that place dACC hierarchically earlier than dlPFC in proactive control.

5.
Cell ; 186(22): 4885-4897.e14, 2023 Oct 26.
Article in English | MEDLINE | ID: mdl-37804832

ABSTRACT

Human reasoning depends on reusing pieces of information by putting them together in new ways. However, very little is known about how compositional computation is implemented in the brain. Here, we ask participants to solve a series of problems that each require constructing a whole from a set of elements. With fMRI, we find that representations of novel constructed objects in the frontal cortex and hippocampus are relational and compositional. With MEG, we find that replay assembles elements into compounds, with each replay sequence constituting a hypothesis about a possible configuration of elements. The content of sequences evolves as participants solve each puzzle, progressing from predictable to uncertain elements and gradually converging on the correct configuration. Together, these results suggest a computational bridge between apparently distinct functions of hippocampal-prefrontal circuitry and a role for generative replay in compositional inference and hypothesis testing.


Subject(s)
Hippocampus , Prefrontal Cortex , Humans , Brain , Frontal Lobe , Hippocampus/physiology , Magnetic Resonance Imaging/methods , Neural Pathways , Prefrontal Cortex/physiology
6.
Nat Hum Behav ; 7(10): 1787-1796, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37679439

ABSTRACT

Finding effective approaches to encouraging group cooperation remains an open challenge. Here we apply recent advances in deep learning to structure networks of human participants playing a group cooperation game. We leverage deep reinforcement learning and simulation methods to train a 'social planner' capable of making recommendations to create or break connections between group members. The strategy that it develops succeeds at encouraging pro-sociality in networks of human participants (N = 208 participants in 13 groups) playing for real monetary stakes. Under the social planner, groups finished the game with an average cooperation rate of 77.7%, compared with 42.8% in static networks (N = 176 in 11 groups). In contrast to prior strategies that separate defectors from cooperators (tested here with N = 384 in 24 groups), the social planner learns to take a conciliatory approach to defectors, encouraging them to act pro-socially by moving them to small, highly cooperative neighbourhoods.


Subject(s)
Cooperative Behavior , Game Theory , Humans , Social Behavior , Group Processes
7.
Nat Commun ; 14(1): 1597, 2023 Mar 22.
Article in English | MEDLINE | ID: mdl-36949048

ABSTRACT

Neuroscience has long been an essential driver of progress in artificial intelligence (AI). We propose that to accelerate progress in AI, we must invest in fundamental research in NeuroAI. A core component of this is the embodied Turing test, which challenges AI animal models to interact with the sensorimotor world at skill levels akin to their living counterparts. The embodied Turing test shifts the focus from those capabilities like game playing and language that are especially well-developed or uniquely human to those capabilities - inherited from over 500 million years of evolution - that are shared with all animals. Building models that can pass the embodied Turing test will provide a roadmap for the next generation of AI.


Subject(s)
Artificial Intelligence , Neurosciences , Animals , Humans
8.
Entropy (Basel) ; 24(12), 2022 Dec 8.
Article in English | MEDLINE | ID: mdl-36554196

ABSTRACT

Neurons in the medial entorhinal cortex exhibit multiple, periodically organized, firing fields which collectively appear to form an internal representation of space. Neuroimaging data suggest that this grid coding is also present in other cortical areas such as the prefrontal cortex, indicating that it may be a general principle of neural functionality in the brain. In a recent analysis through the lens of dynamical systems theory, we showed how grid coding can lead to the generation of a diversity of empirically observed sequential reactivations of hippocampal place cells corresponding to traversals of cognitive maps. Here, we extend this sequence generation model by describing how the synthesis of multiple dynamical systems can support compositional cognitive computations. To empirically validate the model, we simulate two experiments demonstrating compositionality in space or in time during sequence generation. Finally, we describe several neural network architectures supporting various types of compositionality based on grid coding and highlight connections to recent work in machine learning leveraging analogous techniques.

9.
Trends Cogn Sci ; 26(12): 1013-1014, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36150967

ABSTRACT

Rapid progress in artificial intelligence (AI) places a new spotlight on a long-standing question: how can we best develop AI to maximize its benefits to humanity? Answering this question in a satisfying and timely way represents an exciting challenge not only for AI research but also for all member disciplines of cognitive science.


Subject(s)
Artificial Intelligence , Cognitive Science , Humans
10.
Elife ; 11, 2022 Aug 17.
Article in English | MEDLINE | ID: mdl-35975792

ABSTRACT

Humans and animals make predictions about the rewards they expect to receive in different situations. In formal models of behavior, these predictions are known as value representations, and they play two very different roles. Firstly, they drive choice: the expected values of available options are compared to one another, and the best option is selected. Secondly, they support learning: expected values are compared to rewards actually received, and future expectations are updated accordingly. Whether these different functions are mediated by different neural representations remains an open question. Here, we employ a recently developed multi-step task for rats that computationally separates learning from choosing. We investigate the role of value representations in the rodent orbitofrontal cortex, a key structure for value-based cognition. Electrophysiological recordings and optogenetic perturbations indicate that these representations do not directly drive choice. Instead, they signal expected reward information to a learning process elsewhere in the brain that updates choice mechanisms.
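The division of labor described here, values that teach without choosing, maps onto the critic half of an actor-critic scheme: value estimates generate reward prediction errors that train some other process, rather than being compared to select actions. A generic temporal-difference sketch under assumed parameters (not the paper's model):

```python
# Minimal TD(0) critic on a 5-state chain: value estimates are updated from
# experienced rewards and broadcast a prediction-error teaching signal, but
# no action selection happens here. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_states, alpha, gamma = 5, 0.1, 0.9
V = np.zeros(n_states)                     # value representations (critic)

def td_update(s, r, s_next):
    delta = r + gamma * V[s_next] - V[s]   # reward prediction error
    V[s] += alpha * delta
    return delta                           # signal available to a separate
                                           # choice-learning mechanism

for _ in range(5000):
    s = rng.integers(n_states - 1)
    td_update(s, r=float(s == 3), s_next=s + 1)
print(V.round(2))
```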


Subject(s)
Prefrontal Cortex , Rodentia , Animals , Choice Behavior/physiology , Cognition/physiology , Decision Making/physiology , Humans , Prefrontal Cortex/physiology , Rats , Reward
12.
Nat Hum Behav ; 6(10): 1398-1407, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35789321

ABSTRACT

Building artificial intelligence (AI) that aligns with human values is an unsolved problem. Here we developed a human-in-the-loop research pipeline called Democratic AI, in which reinforcement learning is used to design a social mechanism that humans prefer by majority. A large group of humans played an online investment game that involved deciding whether to keep a monetary endowment or to share it with others for collective benefit. Shared revenue was returned to players under two different redistribution mechanisms, one designed by the AI and the other by humans. The AI discovered a mechanism that redressed initial wealth imbalance, sanctioned free riders and successfully won the majority vote. By optimizing for human preferences, Democratic AI offers a proof of concept for value-aligned policy innovation.


Subject(s)
Artificial Intelligence , Humans
14.
Nat Hum Behav ; 6(9): 1257-1267, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35817932

ABSTRACT

'Intuitive physics' enables our pragmatic engagement with the physical world and forms a key component of 'common sense' aspects of thought. Current artificial intelligence systems pale in comparison to even very young children in their understanding of intuitive physics. Here we address this gap between humans and machines by drawing on the field of developmental psychology. First, we introduce and open-source a machine-learning dataset designed to evaluate conceptual understanding of intuitive physics, adopting the violation-of-expectation (VoE) paradigm from developmental psychology. Second, we build a deep-learning system that learns intuitive physics directly from visual data, inspired by studies of visual cognition in children. We demonstrate that our model can learn a diverse set of physical concepts, and that this ability depends critically on object-level representations, consistent with findings from developmental psychology. We consider the implications of these results both for AI and for research on human cognition.
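The violation-of-expectation logic reduces to a simple readout: a model 'knows' a physical concept if its prediction error is reliably larger on impossible probe events than on matched possible ones. The sketch below shows that readout with made-up error traces; the paper's actual metric may differ.

```python
# Hedged VoE readout: compare a model's summed prediction error on a
# physically impossible probe against a matched possible probe.
# The per-frame errors below are placeholders, not model outputs.
import numpy as np

possible_err = np.array([0.10, 0.12, 0.11])     # matched control video
impossible_err = np.array([0.10, 0.45, 0.50])   # error spikes at the violation

voe_effect = impossible_err.sum() - possible_err.sum()
print(f"VoE effect = {voe_effect:.2f} (positive => expectation violated)")
```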


Subject(s)
Deep Learning , Psychology, Developmental , Artificial Intelligence , Child , Child, Preschool , Humans , Learning , Physics
15.
Neural Netw ; 145: 80-89, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34735893

ABSTRACT

The intersection between neuroscience and artificial intelligence (AI) research has created synergistic effects in both fields. While neuroscientific discoveries have inspired the development of AI architectures, new ideas and algorithms from AI research have produced new ways to study brain mechanisms. A well-known example is the case of reinforcement learning (RL), which has stimulated neuroscience research on how animals learn to adjust their behavior to maximize reward. In this review article, we cover recent collaborative work between the two fields in the context of meta-learning and its extension to social cognition and consciousness. Meta-learning refers to the ability to learn how to learn, such as learning to adjust the hyperparameters of existing learning algorithms and to use existing models and knowledge to efficiently solve new tasks. This capability is important for making AI systems more adaptive and flexible, and because it is one of the areas where there is a gap between human performance and current AI systems, successful collaboration should produce new ideas and progress. Starting from the role of RL algorithms in driving neuroscience, we discuss recent developments in deep RL applied to modeling prefrontal cortex functions. From a broader perspective, we discuss the similarities and differences between social cognition and meta-learning, and we conclude with speculations on the potential links between intelligence as endowed by model-based RL and consciousness. For future work, we highlight data efficiency, autonomy and intrinsic motivation as key research areas for advancing both fields.


Subject(s)
Artificial Intelligence , Social Learning , Animals , Brain , Cognition , Consciousness , Humans , Social Cognition
16.
Nat Commun ; 12(1): 6456, 2021 Nov 9.
Article in English | MEDLINE | ID: mdl-34753913

ABSTRACT

In order to better understand how the brain perceives faces, it is important to know what objective drives learning in the ventral visual stream. To answer this question, we model neural responses to faces in the macaque inferotemporal (IT) cortex with a deep self-supervised generative model, β-VAE, which disentangles sensory data into interpretable latent factors, such as gender or age. Our results demonstrate a strong correspondence between the generative factors discovered by β-VAE and those coded by single IT neurons, beyond that found for the baselines, including the handcrafted state-of-the-art model of face perception, the Active Appearance Model, and deep classifiers. Moreover, β-VAE is able to reconstruct novel face images using signals from just a handful of cells. Together, our results imply that optimising the disentangling objective leads to representations that closely resemble those in IT at the single-unit level. This points to disentangling as a plausible learning objective for the visual brain.
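β-VAE's objective is the usual variational bound with the KL term up-weighted by a factor β > 1, which is what pressures the latents to disentangle into factors such as age or gender. A minimal sketch of that loss (shapes and the β value are illustrative; the paper's architecture is not reproduced here):

```python
# Sketch of the beta-VAE objective: reconstruction error plus a
# beta-weighted KL divergence between the approximate posterior
# N(mu, exp(log_var)) and the standard normal prior.
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    recon = F.mse_loss(x_recon, x, reduction="sum")                 # data fit
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())  # prior fit
    return recon + beta * kl   # beta > 1 trades reconstruction for disentangling

# Illustrative call with random tensors standing in for images and latents.
x, x_recon = torch.randn(8, 64), torch.randn(8, 64)
mu, log_var = torch.randn(8, 10), torch.randn(8, 10)
print(beta_vae_loss(x, x_recon, mu, log_var).item())
```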


Subject(s)
Deep Learning , Neurons/physiology , Brain/physiology , Cerebral Cortex/physiology , Humans , Neural Networks, Computer , Pattern Recognition, Visual/physiology , Semantics , Temporal Lobe/physiology
17.
Nat Neurosci ; 24(6): 851-862, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33846626

ABSTRACT

Exploration, consolidation and planning depend on the generation of sequential state representations. However, these algorithms require disparate forms of sampling dynamics for optimal performance. We theorize how the brain should adapt internally generated sequences for particular cognitive functions and propose a neural mechanism by which this may be accomplished within the entorhinal-hippocampal circuit. Specifically, we demonstrate that the systematic modulation along the medial entorhinal cortex dorsoventral axis of grid population input into the hippocampus facilitates a flexible generative process that can interpolate between qualitatively distinct regimes of sequential hippocampal reactivations. By relating the emergent hippocampal activity patterns drawn from our model to empirical data, we explain and reconcile a diversity of recently observed, but apparently unrelated, phenomena such as generative cycling, diffusive hippocampal reactivations and jumping trajectory events.


Subject(s)
Entorhinal Cortex/physiology , Hippocampus/physiology , Nerve Net/physiology , Neural Networks, Computer , Animals , Humans
18.
Neuron ; 107(4): 603-616, 2020 Aug 19.
Article in English | MEDLINE | ID: mdl-32663439

ABSTRACT

The emergence of powerful artificial intelligence (AI) is defining new research directions in neuroscience. To date, this research has focused largely on deep neural networks trained using supervised learning in tasks such as image classification. However, there is another area of recent AI work that has so far received less attention from neuroscientists but that may have profound neuroscientific implications: deep reinforcement learning (RL). Deep RL offers a comprehensive framework for studying the interplay among learning, representation, and decision making, offering to the brain sciences a new set of research tools and a wide range of novel hypotheses. In the present review, we provide a high-level introduction to deep RL, discuss some of its initial applications to neuroscience, and survey its wider implications for research on brain and behavior, concluding with a list of opportunities for next-stage research.


Subject(s)
Deep Learning , Models, Neurological , Models, Psychological , Neural Networks, Computer , Reinforcement, Psychology , Algorithms , Decision Making , Neurosciences
19.
Nature ; 577(7792): 671-675, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31942076

ABSTRACT

Since its introduction, the reward prediction error theory of dopamine has explained a wealth of empirical phenomena, providing a unifying framework for understanding the representation of reward and value in the brain [1-3]. According to the now canonical theory, reward predictions are represented as a single scalar quantity, which supports learning about the expectation, or mean, of stochastic outcomes. Here we propose an account of dopamine-based reinforcement learning inspired by recent artificial intelligence research on distributional reinforcement learning [4-6]. We hypothesized that the brain represents possible future rewards not as a single mean, but instead as a probability distribution, effectively representing multiple future outcomes simultaneously and in parallel. This idea implies a set of empirical predictions, which we tested using single-unit recordings from mouse ventral tegmental area. Our findings provide strong evidence for a neural realization of distributional reinforcement learning.
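The model's key mechanism, a population of predictors whose learning rates scale asymmetrically for positive versus negative prediction errors, can be simulated in a few lines: each unit converges to a different expectile-like statistic, so the population jointly encodes the reward distribution. An illustrative simulation in the spirit of the theory, with made-up rewards and parameters:

```python
# Distributional TD sketch: units with asymmetry tau learn higher values
# the more 'optimistic' they are, tiling the reward distribution.
import numpy as np

rng = np.random.default_rng(3)
taus = np.linspace(0.1, 0.9, 9)    # per-unit optimism (assumed values)
V = np.zeros_like(taus)            # one learned prediction per unit
alpha = 0.02

for _ in range(20_000):
    r = rng.choice([0.0, 1.0, 10.0])              # stochastic reward
    delta = r - V                                  # per-unit prediction errors
    scale = np.where(delta > 0, taus, 1 - taus)    # asymmetric scaling
    V += alpha * scale * delta

print(V.round(2))   # spans pessimistic (low) to optimistic (high) predictions
```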


Subject(s)
Dopamine/metabolism , Learning/physiology , Models, Neurological , Reinforcement, Psychology , Reward , Animals , Artificial Intelligence , Dopaminergic Neurons/metabolism , GABAergic Neurons/metabolism , Mice , Optimism , Pessimism , Probability , Statistical Distributions , Ventral Tegmental Area/cytology , Ventral Tegmental Area/physiology
20.
Nat Commun ; 10(1): 5489, 2019 Dec 2.
Article in English | MEDLINE | ID: mdl-31792198

ABSTRACT

Advances in artificial intelligence are stimulating interest in neuroscience. However, most attention is given to discrete tasks with simple action spaces, such as board games and classic video games. Less discussed in neuroscience are parallel advances in "synthetic motor control". While motor neuroscience has recently focused on optimization of single, simple movements, AI has progressed to the generation of rich, diverse motor behaviors across multiple tasks, at humanoid scale. It is becoming clear that specific, well-motivated hierarchical design elements repeatedly arise when engineering these flexible control systems. We review these core principles of hierarchical control, relate them to hierarchy in the nervous system, and highlight research themes that we anticipate will be critical in solving challenges at this disciplinary intersection.


Subject(s)
Deep Learning , Mammals/physiology , Animals , Artificial Intelligence , Humans , Motor Activity , Neurosciences