Results 1 - 20 of 124
1.
Nat Commun ; 15(1): 4154, 2024 May 16.
Article in English | MEDLINE | ID: mdl-38755205

ABSTRACT

The precise neural mechanisms within the brain that contribute to the remarkable lifetime persistence of memory are not fully understood. Two-photon calcium imaging allows the activity of individual cells to be followed across long periods, but conventional approaches require head-fixation, which limits the type of behavior that can be studied. We present a magnetic voluntary head-fixation system that provides stable optical access to the brain during complex behavior. Compared to previous systems that used mechanical restraint, there are no moving parts and animals can engage and disengage entirely at will. The system is failsafe, easy for animals to use, and reliable enough for long-term experiments to be performed routinely. Animals completed hundreds of trials per session of an odor discrimination task that required 2-4 s fixations. Together with a reflectance fluorescence collection scheme that increases two-photon signal and a transgenic Thy1-GCaMP6f rat line, we are able to reliably image cellular activity in the hippocampus during behavior over long periods (median 6 months), allowing us to track the same neurons over a large fraction of the animals' lives (up to 19 months).


Subject(s)
Hippocampus , Neurons , Rats, Transgenic , Animals , Hippocampus/cytology , Neurons/metabolism , Rats , Male , Calcium/metabolism , Head/diagnostic imaging , Magnetics , Odorants/analysis , Female
2.
bioRxiv ; 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38464244

ABSTRACT

Different brain systems have been hypothesized to subserve multiple "experts" that compete to generate behavior. In reinforcement learning, two general processes, one model-free (MF) and one model-based (MB), are often modeled as a mixture of agents (MoA) and hypothesized to capture the difference between automaticity and deliberation. However, shifts in strategy cannot be captured by a static MoA. To investigate such dynamics, we present the mixture-of-agents hidden Markov model (MoA-HMM), which simultaneously learns inferred action values from a set of agents and the temporal dynamics of underlying "hidden" states that capture shifts in agent contributions over time. Applying this model to a multi-step, reward-guided task in rats reveals a progression of within-session strategies: a shift from initial MB exploration to MB exploitation, and finally to reduced engagement. The inferred states predict changes in both response time and OFC neural encoding during the task, suggesting that these states capture real shifts in dynamics.
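
A minimal sketch of the mixture-of-agents idea described in this abstract, assuming hypothetical agent Q-values and hidden-state-dependent mixing weights (array names and shapes are illustrative assumptions, not the authors' implementation):

    import numpy as np

    # Q[a, c]    : action values from agent a (e.g., MB, MF) for each choice c
    # beta[z, a] : mixing weight of agent a when the hidden HMM state is z
    # P_z[z]     : current posterior over hidden states (from the HMM forward pass)

    def mixture_action_values(Q, beta, P_z):
        """Combine agent values into net action values, marginalizing over hidden states."""
        per_state = beta @ Q          # (n_states, n_choices): state-specific mixtures
        return P_z @ per_state        # (n_choices,): expectation under the state posterior

    def choice_probabilities(Q, beta, P_z):
        """Softmax choice rule over the mixed action values."""
        v = mixture_action_values(Q, beta, P_z)
        e = np.exp(v - v.max())
        return e / e.sum()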

3.
bioRxiv ; 2024 Mar 04.
Article in English | MEDLINE | ID: mdl-38496674

ABSTRACT

Although hippocampal place cells replay nonlocal trajectories, the computational function of these events remains controversial. One hypothesis, formalized in a prominent reinforcement learning account, holds that replay plans routes to current goals. However, recent puzzling data appear to contradict this perspective by showing that replayed destinations lag current goals. These results may support an alternative hypothesis that replay updates route information to build a "cognitive map." Yet no similar theory exists to formalize this view, and it is unclear how such a map is represented or what role replay plays in computing it. We address these gaps by introducing a theory of replay that learns a map of routes to candidate goals, before reward is available or when its location may change. Our work extends the planning account to capture a general map-building function for replay, reconciling it with data, and revealing an unexpected relationship between the seemingly distinct hypotheses.

4.
bioRxiv ; 2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38352540

ABSTRACT

Cognition is remarkably flexible; we are able to rapidly learn and perform many different tasks [1]. Theoretical modeling has shown that artificial neural networks trained to perform multiple tasks will re-use representations [2] and computational components [3] across tasks. By composing tasks from these sub-components, an agent can flexibly switch between tasks and rapidly learn new tasks [4]. Yet whether such compositionality is found in the brain is unknown. Here, we show that the same subspaces of neural activity represent task-relevant information across multiple tasks, with each task compositionally combining these subspaces in a task-specific manner. We trained monkeys to switch between three compositionally related tasks. Neural recordings showed that task-relevant information about stimulus features and motor actions was represented in subspaces of neural activity that were shared across tasks. When monkeys performed a task, neural representations in the relevant shared sensory subspace were transformed into the relevant shared motor subspace. Subspaces were flexibly engaged as monkeys discovered the task in effect; their internal belief about the current task predicted the strength of representations in task-relevant subspaces. In sum, our findings suggest that the brain can flexibly perform multiple tasks by compositionally combining task-relevant neural representations across tasks.

5.
Cell ; 187(6): 1476-1489.e21, 2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38401541

ABSTRACT

Attention filters sensory inputs to enhance task-relevant information. It is guided by an "attentional template" that represents the stimulus features that are currently relevant. To understand how the brain learns and uses templates, we trained monkeys to perform a visual search task that required them to repeatedly learn new attentional templates. Neural recordings found that templates were represented across the prefrontal and parietal cortex in a structured manner, such that perceptually neighboring templates had similar neural representations. When the task changed, a new attentional template was learned by incrementally shifting the template toward rewarded features. Finally, we found that attentional templates transformed stimulus features into a common value representation that allowed the same decision-making mechanisms to deploy attention, regardless of the identity of the template. Altogether, our results provide insight into the neural mechanisms by which the brain learns to control attention and how attention can be flexibly deployed across tasks.
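
One way to read "incrementally shifting the template toward rewarded features" is a simple delta-rule update; the sketch below is purely illustrative (the learning rate, feature coding, and reward scaling are assumptions, not the paper's fitted model):

    import numpy as np

    def update_template(template, chosen_features, reward, alpha=0.1):
        """Illustrative delta-rule shift of an attentional template toward the
        features of the chosen stimulus, scaled by the reward received."""
        return template + alpha * reward * (chosen_features - template)

    # e.g., a 2-D feature space; rewarded choices pull the template toward them
    template = np.array([0.2, 0.8])
    template = update_template(template, chosen_features=np.array([0.6, 0.4]), reward=1.0)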


Subject(s)
Attention , Decision Making , Learning , Parietal Lobe , Reward , Animals , Haplorhini
6.
Behav Res Methods ; 56(3): 1104-1122, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37020082

ABSTRACT

Matrix reasoning tasks are among the most widely used measures of cognitive ability in the behavioral sciences, but the lack of matrix reasoning tests in the public domain complicates their use. Here, we present an extensive investigation and psychometric validation of the matrix reasoning item bank (MaRs-IB), an open-access set of matrix reasoning items. In a first study, we calibrate the psychometric functioning of the items in the MaRs-IB in a large sample of adult participants (N = 1501). Using additive multilevel item structure models, we establish that the MaRs-IB has many desirable psychometric properties: its items span a wide range of difficulty, possess medium-to-large levels of discrimination, and exhibit robust associations between item complexity and difficulty. However, we also find that item clones are not always psychometrically equivalent and cannot be assumed to be exchangeable. In a second study, we demonstrate how experimenters can use the estimated item parameters to design new matrix reasoning tests using optimal item assembly. Specifically, we design and validate two new sets of test forms in an independent sample of adults (N = 600). We find these new tests possess good reliability and convergent validity with an established measure of matrix reasoning. We hope that the materials and results made available here will encourage experimenters to use the MaRs-IB in their research.
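
The paper calibrates items with additive multilevel item structure models; as a simpler point of reference for the difficulty and discrimination parameters mentioned above, here is a two-parameter logistic (2PL) item response function (a standard textbook form, shown only for illustration):

    import numpy as np

    def p_correct_2pl(theta, difficulty, discrimination):
        """2PL item response function: probability that a person with ability
        theta answers an item with the given difficulty and discrimination."""
        return 1.0 / (1.0 + np.exp(-discrimination * (theta - difficulty)))

    # A harder item is less likely to be solved by an average respondent (theta = 0)
    p_correct_2pl(theta=0.0, difficulty=1.0, discrimination=1.5)   # ~0.18
    p_correct_2pl(theta=0.0, difficulty=-1.0, discrimination=1.5)  # ~0.82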


Subject(s)
Cognition , Problem Solving , Adult , Humans , Reproducibility of Results , Psychometrics , Surveys and Questionnaires
7.
Proc Natl Acad Sci U S A ; 120(50): e2221510120, 2023 Dec 12.
Article in English | MEDLINE | ID: mdl-38064507

ABSTRACT

Effort-based decisions, in which people weigh potential future rewards against the effort costs required to achieve them, involve both cognitive and physical effort, though the mechanistic relationship between the two is not yet understood. Here, we use an individual differences approach to isolate and measure the computational processes underlying effort-based decisions and to test the association between the cognitive and physical domains. Patch foraging is an ecologically valid reward-rate maximization problem with well-developed theoretical tools. We developed the Effort Foraging Task, which embedded cognitive or physical effort into patch foraging, to quantify the cost of both cognitive and physical effort indirectly, through their effects on foraging choices. Participants chose between harvesting a depleting patch and traveling to a new patch that was costly in time and effort. Participants' exit thresholds (reflecting the reward they expected to receive by harvesting at the moment they chose to travel to a new patch) were sensitive to cognitive and physical effort demands, allowing us to quantify the perceived effort cost in monetary terms. This indirect, sequential-choice design revealed effort-seeking behavior in a minority of participants (preferring high over low effort) that has apparently been missed by many previous approaches. Individual differences in cognitive and physical effort costs were positively correlated, suggesting that the two are perceived and processed in common. We used canonical correlation analysis to probe the relationship of task measures to self-reported affect and motivation, and found correlations of cognitive effort with anxiety, cognitive function, behavioral activation, and self-efficacy, but no similar correlations with physical effort.
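
A minimal sketch of how an effort cost can be read off exit thresholds under the marginal value theorem: a costlier travel option lowers the reward at which a forager is willing to leave, so the threshold gap between low- and high-effort conditions gives the effort cost in reward units (variable names and values are illustrative assumptions, not the paper's model code):

    def implied_effort_cost(exit_threshold_low_effort, exit_threshold_high_effort):
        """Under MVT, extra travel cost lowers the exit threshold; the drop
        (in reward units, e.g., cents) estimates the perceived effort cost."""
        return exit_threshold_low_effort - exit_threshold_high_effort

    # e.g., leaving at 8 cents/harvest under low effort but waiting until the
    # patch depletes to 5 cents/harvest under high effort implies a perceived
    # effort cost of about 3 cents per travel.
    implied_effort_cost(8.0, 5.0)  # 3.0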


Subject(s)
Decision Making , Physical Exertion , Humans , Decision Making/physiology , Physical Exertion/physiology , Individuality , Cognition/physiology , Reward , Motivation
8.
bioRxiv ; 2023 Dec 11.
Article in English | MEDLINE | ID: mdl-38106137

ABSTRACT

We are often faced with decisions we have never encountered before, requiring us to infer possible outcomes before making a choice. Computational theories suggest that one way to make these types of decisions is by accessing and linking related experiences stored in memory. Past work has shown that such memory-based preference construction can occur at a number of different timepoints relative to the moment a decision is made. Some studies have found that memories are integrated at the time a decision is faced (reactively) while others found that memory integration happens earlier, when memories are encoded (proactively). Here we offer a resolution to this inconsistency. We demonstrate behavioral and neural evidence for both strategies and for how they tradeoff rationally depending on the associative structure of memory. Using fMRI to decode patterns of brain responses unique to categories of images in memory, we found that proactive memory access is more common and allows more efficient inference. However, participants also use reactive access when choice options are linked to more numerous memory associations. Together, these results indicate that the brain judiciously conducts proactive inference by accessing memories ahead of time in conditions when this strategy is most favorable.

9.
PLoS Comput Biol ; 19(8): e1011316, 2023 08.
Article in English | MEDLINE | ID: mdl-37624841

ABSTRACT

The ability to acquire abstract knowledge is a hallmark of human intelligence and is believed by many to be one of the core differences between humans and neural network models. Agents can be endowed with an inductive bias towards abstraction through meta-learning, where they are trained on a distribution of tasks that share some abstract structure that can be learned and applied. However, because neural networks are hard to interpret, it can be difficult to tell whether agents have learned the underlying abstraction, or alternatively statistical patterns that are characteristic of that abstraction. In this work, we compare the performance of humans and agents in a meta-reinforcement learning paradigm in which tasks are generated from abstract rules. We define a novel methodology for building "task metamers" that closely match the statistics of the abstract tasks but use a different underlying generative process, and evaluate performance on both abstract and metamer tasks. We find that humans perform better at abstract tasks than metamer tasks whereas common neural network architectures typically perform worse on the abstract tasks than the matched metamers. This work provides a foundation for characterizing differences between humans and machine learning that can be used in future work towards developing machines with more human-like behavior.


Subject(s)
Concept Formation , Machine Learning , Humans , Intelligence , Knowledge , Neural Networks, Computer
10.
Neuron ; 111(21): 3465-3478.e7, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37611585

ABSTRACT

Animals frequently make decisions based on expectations of future reward ("values"). Values are updated by ongoing experience: places and choices that result in reward are assigned greater value. Yet, the specific algorithms used by the brain for such credit assignment remain unclear. We monitored accumbens dopamine as rats foraged for rewards in a complex, changing environment. We observed brief dopamine pulses both at reward receipt (scaling with prediction error) and at novel path opportunities. Dopamine also ramped up as rats ran toward reward ports, in proportion to the value at each location. By examining the evolution of these dopamine place-value signals, we found evidence for two distinct update processes: progressive propagation of value along taken paths, as in temporal difference learning, and inference of value throughout the maze, using internal models. Our results demonstrate that within rich, naturalistic environments dopamine conveys place values that are updated via multiple, complementary learning algorithms.
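
The "progressive propagation of value along taken paths" described above is the signature of temporal difference learning; a minimal tabular TD(0) sketch over maze locations follows (illustrative only, not the authors' analysis code):

    def td_update(V, state, next_state, reward, alpha=0.1, gamma=0.95):
        """Tabular TD(0): the prediction error nudges the value of the visited
        location toward reward + discounted value of the next location."""
        delta = reward + gamma * V[next_state] - V[state]   # prediction error
        V[state] += alpha * delta
        return delta

    # Values (and, on this account, dopamine ramps) propagate backward along a
    # path toward the reward port as the same route is repeated.
    V = {loc: 0.0 for loc in ["start", "junction", "arm", "port"]}
    path = [("start", "junction", 0.0), ("junction", "arm", 0.0), ("arm", "port", 1.0)]
    for _ in range(20):
        for s, s_next, r in path:
            td_update(V, s, s_next, r)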


Subject(s)
Decision Making , Dopamine , Rats , Animals , Reward , Brain
11.
Curr Biol ; 33(16): R832-R840, 2023 08 21.
Article in English | MEDLINE | ID: mdl-37607474

ABSTRACT

There is growing interest in the relationship between AI and consciousness. Joseph LeDoux and Jonathan Birch thought it would be a good moment to put some of the big questions in this area to some leading experts. The challenge of addressing the questions they raised was taken up by Kristin Andrews, Nicky Clayton, Nathaniel Daw, Chris Frith, Hakwan Lau, Megan Peters, Susan Schneider, Anil Seth, Thomas Suddendorf, and Marie Vanderkerckhoeve.


Subject(s)
Betula , Consciousness , Humans
12.
PLoS Comput Biol ; 19(6): e1011087, 2023 06.
Article in English | MEDLINE | ID: mdl-37262023

ABSTRACT

Human behavior emerges from planning over elaborate decompositions of tasks into goals, subgoals, and low-level actions. How are these decompositions created and used? Here, we propose and evaluate a normative framework for task decomposition based on the simple idea that people decompose tasks to reduce the overall cost of planning while maintaining task performance. Analyzing 11,117 distinct graph-structured planning tasks, we find that our framework justifies several existing heuristics for task decomposition and makes predictions that can be distinguished from two alternative normative accounts. We report a behavioral study of task decomposition (N = 806) that uses 30 randomly sampled graphs, a larger and more diverse set than that of any previous behavioral study on this topic. We find that human responses are more consistent with our framework for task decomposition than with alternative normative accounts, and are most consistent with a heuristic, betweenness centrality, that is justified by our approach. Taken together, our results suggest the computational cost of planning is a key principle guiding the intelligent structuring of goal-directed behavior.
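
Betweenness centrality, the heuristic the behavioral data most resemble, scores a state by the fraction of shortest paths that pass through it; a quick sketch using networkx (assumed here purely for illustration, not the authors' analysis pipeline):

    import networkx as nx

    # A small graph-structured task: nodes are states, edges are available moves.
    G = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 5), (2, 6), (6, 4)])

    # States lying on many shortest paths score highly and are natural
    # candidates for subgoals when decomposing the task.
    scores = nx.betweenness_centrality(G)
    subgoal = max(scores, key=scores.get)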


Subject(s)
Heuristics , Humans , Goals , Behavior
13.
bioRxiv ; 2023 Mar 19.
Article in English | MEDLINE | ID: mdl-36993482

ABSTRACT

Dopamine in the nucleus accumbens helps motivate behavior based on expectations of future reward ("values"). These values need to be updated by experience: after receiving reward, the choices that led to reward should be assigned greater value. There are multiple theoretical proposals for how this credit assignment could be achieved, but the specific algorithms that generate updated dopamine signals remain uncertain. We monitored accumbens dopamine as freely behaving rats foraged for rewards in a complex, changing environment. We observed brief pulses of dopamine both when rats received reward (scaling with prediction error), and when they encountered novel path opportunities. Furthermore, dopamine ramped up as rats ran towards reward ports, in proportion to the value at each location. By examining the evolution of these dopamine place-value signals, we found evidence for two distinct update processes: progressive propagation along taken paths, as in temporal-difference learning, and inference of value throughout the maze, using internal models. Our results demonstrate that within rich, naturalistic environments dopamine conveys place values that are updated via multiple, complementary learning algorithms.

14.
Elife ; 11, 2022 12 02.
Article in English | MEDLINE | ID: mdl-36458809

ABSTRACT

A key question in decision-making is how humans arbitrate between competing learning and memory systems to maximize reward. We address this question by probing the balance between the effects, on choice, of incremental trial-and-error learning versus episodic memories of individual events. Although a rich literature has studied incremental learning in isolation, the role of episodic memory in decision-making has only recently drawn focus, and little research disentangles their separate contributions. We hypothesized that the brain arbitrates rationally between these two systems, relying on each in circumstances to which it is most suited, as indicated by uncertainty. We tested this hypothesis by directly contrasting contributions of episodic and incremental influence to decisions, while manipulating the relative uncertainty of incremental learning using a well-established manipulation of reward volatility. Across two large, independent samples of young adults, participants traded these influences off rationally, depending more on episodic information when incremental summaries were more uncertain. These results support the proposal that the brain optimizes the balance between different forms of learning and memory according to their relative uncertainties and elucidate the circumstances under which episodic memory informs decisions.
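
A minimal sketch of the rational arbitration idea, assuming the weight on episodic versus incremental value scales with the uncertainty of the incremental estimate (an illustration of the hypothesis, not the paper's fitted model):

    def combined_value(v_incremental, v_episodic,
                       incremental_uncertainty, episodic_uncertainty):
        """Precision-weighted mixture: the noisier (more volatile) the incremental
        estimate, the more the decision leans on episodic memory."""
        w_inc = (1.0 / incremental_uncertainty) / (
            1.0 / incremental_uncertainty + 1.0 / episodic_uncertainty)
        return w_inc * v_incremental + (1.0 - w_inc) * v_episodic

    # Under high reward volatility the incremental estimate is uncertain,
    # so the episodic value dominates the combined estimate.
    combined_value(v_incremental=0.4, v_episodic=0.9,
                   incremental_uncertainty=4.0, episodic_uncertainty=1.0)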


Subject(s)
Memory, Episodic , Young Adult , Humans , Uncertainty , Reinforcement, Psychology , Decision Making , Learning , Reward
15.
Elife ; 11, 2022 11 14.
Article in English | MEDLINE | ID: mdl-36374181

ABSTRACT

To adapt to a changing world, we must be able to switch between rules already learned and, at other times, learn rules anew. Often we must do both at the same time, switching between known rules while also constantly re-estimating them. Here, we show these two processes, rule switching and rule learning, rely on distinct but intertwined computations, namely fast inference and slower incremental learning. To this end, we studied how monkeys switched between three rules. Each rule was compositional, requiring the animal to discriminate one of two features of a stimulus and then respond with an associated eye movement along one of two different response axes. By modeling behavior, we found the animals learned the axis of response using fast inference (rule switching) while continuously re-estimating the stimulus-response associations within an axis (rule learning). Our results shed light on the computational interactions between rule switching and rule learning, and make testable neural predictions for these interactions.


Subject(s)
Learning , Animals , Learning/physiology
16.
Nat Commun ; 13(1): 4238, 2022 07 22.
Article in English | MEDLINE | ID: mdl-35869044

ABSTRACT

Computing confidence in one's own and others' decisions is critical for social success. While there has been substantial progress in our understanding of confidence estimates about oneself, little is known about how people form confidence estimates about others. Here, we address this question by asking participants undergoing fMRI to place bets on perceptual decisions made by themselves or one of three other players of varying ability. We show that participants compute confidence in another player's decisions by combining distinct estimates of player ability and decision difficulty - allowing them to predict that a good player may get a difficult decision wrong and that a bad player may get an easy decision right. We find that this computation is associated with an interaction between brain systems implicated in decision-making (LIP) and theory of mind (TPJ and dmPFC). These results reveal an interplay between self- and other-related processes during a social confidence computation.


Subject(s)
Decision Making , Magnetic Resonance Imaging , Humans
17.
J Neurosci ; 42(29): 5730-5744, 2022 07 20.
Article in English | MEDLINE | ID: mdl-35688627

ABSTRACT

In patch foraging tasks, animals must decide whether to remain with a depleting resource or to leave it in search of a potentially better source of reward. In such tasks, animals consistently follow the general predictions of optimal foraging theory (the marginal value theorem; MVT): to leave a patch when the reward rate in the current patch depletes to the average reward rate across patches. Prior studies implicate an important role for the anterior cingulate cortex (ACC) in foraging decisions based on MVT: within single trials, ACC activity increases immediately preceding foraging decisions, and across trials, these dynamics are modulated as the value of staying in the patch depletes to the average reward rate. Here, we test whether these activity patterns reflect dynamic encoding of decision variables and whether these signals are directly involved in decision-making. We developed a leaky accumulator model based on the MVT that generates estimates of decision variables within and across trials, and tested model predictions against ACC activity recorded from male rats performing a patch foraging task. Model-predicted changes in MVT decision variables closely matched rat ACC activity. Next, we pharmacologically inactivated ACC in male rats to test the contribution of these signals to decision-making. ACC inactivation had a profound effect on rats' foraging decisions and response times (RTs), yet rats still followed the MVT decision rule. These findings indicate that the ACC encodes foraging-related variables for reasons unrelated to patch-leaving decisions. SIGNIFICANCE STATEMENT: The ability to make adaptive patch-foraging decisions, to remain with a depleting resource or search for better alternatives, is critical to animal well-being. Previous studies have found that anterior cingulate cortex (ACC) activity is modulated at different points in the foraging decision process, raising questions about whether the ACC guides ongoing decisions or serves a more general purpose of regulating cognitive control. To investigate the function of the ACC in foraging, the present study developed a dynamic model of behavior and neural activity, and tested model predictions using recordings and inactivation of ACC. Findings revealed that ACC continuously signals decision variables but that these signals are more likely used to monitor and regulate ongoing processes than to guide foraging decisions.
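
The MVT decision rule and its leaky-accumulator reading can be written compactly; the sketch below is illustrative (parameter names and values are assumptions, not taken from the paper's model):

    def should_leave_patch(current_reward_rate, average_reward_rate):
        """Marginal value theorem: leave once the depleting patch's reward rate
        falls to the long-run average reward rate of the environment."""
        return current_reward_rate <= average_reward_rate

    def leaky_accumulator_step(x, drive, leak=0.2, dt=0.1):
        """One Euler step of a leaky accumulator whose drive grows as the value
        of staying depletes toward the average rate; in such models a leave
        decision is triggered when x crosses a fixed threshold."""
        return x + dt * (drive - leak * x)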


Subject(s)
Decision Making , Gyrus Cinguli , Animals , Decision Making/physiology , Gyrus Cinguli/physiology , Male , Rats , Reward
18.
Psychol Rev ; 129(3): 564-585, 2022 04.
Article in English | MEDLINE | ID: mdl-34383523

ABSTRACT

Cognitive fatigue and boredom are two phenomenological states that reflect overt task disengagement. In this article, we present a rational analysis of the temporal structure of controlled behavior, which provides a formal account of these phenomena. We suggest that in controlling behavior, the brain faces competing behavioral and computational imperatives, and must balance them by tracking their opportunity costs over time. We use this analysis to flesh out previous suggestions that feelings associated with subjective effort, like cognitive fatigue and boredom, are the phenomenological counterparts of these opportunity cost measures, instead of reflecting the depletion of resources as has often been assumed. Specifically, we propose that both fatigue and boredom reflect the competing value of particular options that require foregoing immediate reward but can improve future performance: Fatigue reflects the value of offline computation (internal to the organism) to improve future decisions, while boredom signals the value of exploration (external in the world). We demonstrate that these accounts provide a mechanistically explicit and parsimonious account for a wide array of findings related to cognitive control, integrating and reimagining them under a single, formally rigorous framework. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Boredom , Reward , Brain , Cognition , Emotions , Humans
19.
Nat Hum Behav ; 6(1): 146-154, 2022 01.
Article in English | MEDLINE | ID: mdl-34400815

ABSTRACT

A goal of computational psychiatry is to ground symptoms in basic mechanisms. Theory suggests that avoidance in anxiety disorders may reflect dysregulated mental simulation, a process for evaluating candidate actions. If so, these covert processes should have observable consequences: choices reflecting increased and biased deliberation. In two online general population samples, we examined how self-report symptoms of social anxiety disorder predict choices in a socially framed reinforcement learning task, the patent race, in which the pattern of choices reflects the content of deliberation. Using a computational model to assess learning strategy, we found that self-report social anxiety was indeed associated with increased deliberative evaluation. This effect was stronger for a particular subset of feedback ('upward counterfactual') in one of the experiments, broadly matching the biased content of rumination in social anxiety disorder, and robust to controlling for other psychiatric symptoms. These results suggest a grounding of symptoms of social anxiety disorder in more basic neuro-computational mechanisms.


Subject(s)
Anxiety/psychology , Judgment/physiology , Adult , Female , Games, Experimental , Humans , Male , Middle Aged , Young Adult
20.
Learn Mem ; 28(12): 445-456, 2021 12.
Article in English | MEDLINE | ID: mdl-34782403

ABSTRACT

When people encounter items that they believe will help them gain reward, they later remember them better than others. A recent model of emotional memory, the emotional context maintenance and retrieval model (eCMR), predicts that these effects would be stronger when stimuli that predict high and low reward can compete with each other during both encoding and retrieval. We tested this prediction in two experiments. Participants were promised £1 for remembering some pictures, but only a few pence for remembering others. Their recall of the content of the pictures they saw was tested after 1 min and, in experiment 2, also after 24 h. Memory at the immediate test showed effects of list composition. Recall of stimuli that predicted high reward was greater than of stimuli that predicted lower reward, but only when high- and low-reward items were studied and recalled together, not when they were studied and recalled separately. More high-reward items in mixed lists were forgotten over a 24-h retention interval compared with items studied in other conditions, but reward did not modulate the forgetting rate, a null effect that should be replicated in a larger sample. These results confirm eCMR's predictions, although further research is required to compare that model against alternatives.


Subject(s)
Mental Recall , Reward , Emotions , Humans