1.
Hippocampus ; 2024 May 03.
Article in English | MEDLINE | ID: mdl-38700259

ABSTRACT

Recent work has identified a critical role for the hippocampus in reward-sensitive behaviors, including motivated memory, reinforcement learning, and decision-making. Animal histology and human functional neuroimaging have shown that brain regions involved in reward processing and motivation are more interconnected with the ventral/anterior hippocampus. However, direct evidence examining gradients of structural connectivity between reward regions and the hippocampus in humans is lacking. The present study used diffusion MRI (dMRI) and probabilistic tractography to quantify the structural connectivity of the hippocampus with key reward processing regions in vivo. Using a large sample of subjects (N = 628) from the Human Connectome Project dMRI data release, we found that connectivity profiles with the hippocampus varied widely between different regions of the reward circuit. While the dopaminergic midbrain (ventral tegmental area) showed stronger connectivity with the anterior versus posterior hippocampus, the ventromedial prefrontal cortex showed stronger connectivity with the posterior hippocampus. The limbic (ventral) striatum demonstrated a more homogeneous connectivity profile along the hippocampal long axis. This is the first study to generate a probabilistic atlas of hippocampal structural connectivity with reward-related networks, which is essential for investigating how these circuits contribute to normative adaptive behavior and to maladaptive behaviors in psychiatric illness. These findings describe nuanced structural connectivity that lays the foundation for a better understanding of how the hippocampus influences reward-guided behavior in humans.

2.
J Cogn Neurosci ; : 1-16, 2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38579249

ABSTRACT

Stimulus-response habits benefit behavior by automatizing the selection of rewarding actions. However, this automaticity can come at the cost of reduced flexibility to adapt behavior when circumstances change. The goal-directed system is thought to counteract the habit system by providing the flexibility to pursue context-appropriate behaviors. The dichotomy between habitual action selection and flexible goal-directed behavior has recently been challenged by findings showing that rewards bias both action and goal selection. Here, we test whether reward reinforcement can give rise to habitual goal selection much as it gives rise to habitual action selection. We designed a rewarded, context-based perceptual discrimination task in which performance on one rule was reinforced. Using drift-diffusion models and psychometric analyses, we found that reward facilitates the initiation and execution of rules. Strikingly, we found that these biases persisted in a test phase in which rewards were no longer available. Although this facilitation is consistent with the habitual goal selection hypothesis, we did not find evidence that reward reinforcement reduced cognitive flexibility to implement alternative rules. Together, the findings suggest that reward creates a lasting impact on the selection and execution of goals but may not lead to the inflexibility characteristic of habits. Our findings demonstrate the role of the reward learning system in influencing how the goal-directed system selects and implements goals.
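
As context for the drift-diffusion analysis mentioned above: such models describe choice as noisy evidence accumulation toward a decision boundary, so reward-related facilitation of rule execution and initiation would appear as a higher drift rate and a shorter non-decision time, respectively. The simulation below is a minimal, generic sketch under that assumption, not the model fitted in the paper, and its parameter values are arbitrary.

```python
# Minimal drift-diffusion simulation (illustrative only; not the authors' fitted model).
# Assumption: reward reinforcement of a rule is modeled as a higher drift rate (faster
# evidence accumulation, i.e. rule execution) and a shorter non-decision time (faster
# rule initiation). Parameter values are made up for illustration.
import numpy as np

def simulate_ddm(drift, boundary=1.0, non_decision=0.3, noise=1.0,
                 dt=0.001, max_t=3.0, rng=None):
    """Return (choice, reaction_time) for one trial of a two-boundary DDM."""
    rng = rng or np.random.default_rng()
    evidence, t = 0.0, 0.0
    while abs(evidence) < boundary and t < max_t:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    choice = 1 if evidence >= boundary else 0
    return choice, non_decision + t

rng = np.random.default_rng(0)
rewarded   = [simulate_ddm(drift=1.5, non_decision=0.25, rng=rng) for _ in range(2000)]
unrewarded = [simulate_ddm(drift=1.0, non_decision=0.30, rng=rng) for _ in range(2000)]
for label, trials in [("rewarded rule", rewarded), ("unrewarded rule", unrewarded)]:
    rts = [rt for _, rt in trials]
    acc = np.mean([c for c, _ in trials])
    print(f"{label}: mean RT = {np.mean(rts):.3f} s, upper-boundary proportion = {acc:.2f}")
```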

3.
Res Sq ; 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38585785

ABSTRACT

Anorexia nervosa is a severe eating disorder characterized by food restriction in service of a future goal: thinness and weight loss. Prior work suggests abnormal intertemporal decision-making in anorexia, with more farsighted decisions observed in patients with acute anorexia. Prospective future thinking in daily life, or temporal orientation, promotes more farsighted delay discounting. However, whether temporal orientation is altered in anorexia, and underlies reduced delay discounting in this population, remains unclear. Further, because changes in delay discounting could reflect cognitive effects of an acute clinical state, it is important to determine whether reduced delay discounting is observed in subclinical, at-risk samples. We measured delay discounting behavior and temporal orientation in a large sample of never-diagnosed individuals at risk of anorexia nervosa. We found that farsighted delay discounting was associated with elevated risk for anorexia nervosa. Anorexia nervosa risk was also associated with increased future-oriented cognition. Future-oriented cognition mediated the difference in delay discounting behavior between high- and low-risk groups. These results were unrelated to subjective time perception and were independent of mood and anxiety symptomatology. These findings establish future-oriented cognition as a cognitive mechanism underlying altered intertemporal decision-making in individuals at risk of developing anorexia nervosa.
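
Delay discounting of the kind measured above is commonly summarized with a hyperbolic model, V = A / (1 + kD), where a smaller discount rate k corresponds to more farsighted (less steep) discounting. The snippet below is a generic illustration of that standard model, not the authors' analysis; the amounts, delay, and k values are made up.

```python
# Hyperbolic delay discounting, V = A / (1 + k * D): a standard model of
# intertemporal choice (illustrative sketch; not the analysis used in the paper).

def discounted_value(amount, delay_days, k):
    """Subjective present value of `amount` received after `delay_days`."""
    return amount / (1.0 + k * delay_days)

def prefers_delayed(immediate, delayed, delay_days, k):
    """True if the delayed option's discounted value exceeds the immediate amount."""
    return discounted_value(delayed, delay_days, k) > immediate

# A smaller k (more farsighted discounting) makes a delayed $100 in 30 days
# worth more than $60 now; a larger k reverses the preference.
for k in (0.005, 0.05):
    choice = prefers_delayed(immediate=60, delayed=100, delay_days=30, k=k)
    print(f"k = {k}: chooses delayed reward -> {choice}")
```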

4.
Learn Mem ; 29(4): 93-99, 2022 04.
Article in English | MEDLINE | ID: mdl-35293323

ABSTRACT

Humans actively seek information to reduce uncertainty, providing insight into how our decisions causally affect the world. While we know that episodic memories can help support future goal-oriented behaviors, little is known about how hypothesis testing during exploration influences episodic memory. To investigate this question, we designed a hypothesis-testing paradigm in which participants figured out rules to unlock treasure chests. Using this paradigm, we characterized how hypothesis testing during exploration influenced memory for the contents of the treasure chests. We found an inverted U-shaped relationship between decision uncertainty and memory, such that memory was best when decision uncertainty was moderate. An exploratory analysis also showed that surprising outcomes led to lower memory confidence, independent of accuracy. These findings support a model in which moderate decision uncertainty during hypothesis testing enhances incidental information encoding.


Subject(s)
Memory, Episodic; Humans; Uncertainty
5.
J Neurosci ; 42(8): 1529-1541, 2022 02 23.
Article in English | MEDLINE | ID: mdl-34969868

ABSTRACT

Emotional states provide an ever-present source of contextual information that should inform behavioral goals. Despite the ubiquity of emotional signals in our environment, the neural mechanisms underlying their influence on goal-directed action remain unclear. Prior work suggests that the lateral frontal pole (FPl) is uniquely positioned to integrate affective information into cognitive control representations. We used pattern similarity analysis to examine the content of representations in FPl and interconnected mid-lateral prefrontal and amygdala circuitry. Healthy participants (n = 37; n = 21 females) were scanned while undergoing an event-related Affective Go/No-Go task, which requires goal-oriented action selection during emotional processing. We found that FPl contained conjunctive emotion-action goal representations that were related to successful cognitive control during emotional processing. These representations differed from conjunctive emotion-action goal representations found in the basolateral amygdala. While robust action goal representations were present in mid-lateral prefrontal cortex, they were not modulated by emotional valence. Finally, converging results from functional connectivity and multivoxel pattern analyses indicated that FPl emotional valence signals likely originated from interconnected subgenual anterior cingulate cortex (ACC; BA25), which was in turn functionally coupled with the amygdala. Thus, our results identify a key pathway by which internal emotional states influence goal-directed behavior.

SIGNIFICANCE STATEMENT: Optimal functioning in everyday life requires behavioral regulation that flexibly adapts to dynamically changing emotional states. However, precisely how emotional states influence goal-directed action remains unclear. Unveiling the neural architecture that supports emotion-goal integration is critical for our understanding of disorders such as psychopathy, which is characterized by deficits in incorporating emotional cues into goals, as well as mood and anxiety disorders, which are characterized by impaired goal-based emotion regulation. Our study identifies a key circuit through which emotional states influence goal-directed behavior. This circuitry comprised the lateral frontal pole (FPl), which represented integrated emotion-goal information, as well as the interconnected amygdala and subgenual ACC, which conveyed emotional signals to FPl.


Subject(s)
Emotions; Goals; Amygdala/diagnostic imaging; Amygdala/physiology; Brain Mapping/methods; Emotions/physiology; Female; Frontal Lobe/physiology; Humans; Magnetic Resonance Imaging; Male; Prefrontal Cortex/physiology
6.
Cereb Cortex ; 32(1): 231-247, 2021 11 23.
Article in English | MEDLINE | ID: mdl-34231854

ABSTRACT

People often learn from the outcomes of their actions, even when these outcomes do not involve material rewards or punishments. How does our brain provide this flexibility? We combined behavior, computational modeling, and functional neuroimaging to probe whether learning from abstract novel outcomes harnesses the same circuitry that supports learning from familiar secondary reinforcers. Behavior and neuroimaging revealed that novel images can act as a substitute for rewards during instrumental learning, producing reliable reward-like signals in dopaminergic circuits. Moreover, we found evidence that prefrontal correlates of executive control may play a role in shaping flexible responses in reward circuits. These results suggest that learning from novel outcomes is supported by an interplay between high-level representations in prefrontal cortex and low-level responses in subcortical reward circuits. This interaction may allow for human reinforcement learning over arbitrarily abstract reward functions.


Subject(s)
Executive Function; Goals; Humans; Motivation; Prefrontal Cortex/diagnostic imaging; Prefrontal Cortex/physiology; Reinforcement, Psychology; Reward
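
The abstract above (entry 6) reports that novel images can substitute for rewards during instrumental learning. As a generic illustration, the delta-rule sketch below shows that the standard prediction-error update is agnostic to what the outcome is, provided it can be mapped onto a subjective value; this is the sense in which reinforcement learning can operate over arbitrarily abstract reward functions. It is an assumption-laden toy example, not the computational model fitted in the study.

```python
# Generic delta-rule (Rescorla-Wagner / Q-learning) update, shown to emphasize that
# the same prediction-error computation works for any outcome that can be assigned
# a subjective value, whether money or an abstract novel image.
# Illustrative sketch only; not the model used in the paper.
import numpy as np

def run_bandit(outcome_values, n_trials=200, alpha=0.1, beta=5.0, seed=0):
    """Two-armed bandit where `outcome_values[a]` is the subjective value of arm a's outcome."""
    rng = rng_global = np.random.default_rng(seed)
    q = np.zeros(2)                       # learned values
    choices = np.zeros(n_trials, dtype=int)
    for t in range(n_trials):
        p_right = 1.0 / (1.0 + np.exp(-beta * (q[1] - q[0])))   # softmax over two arms
        a = int(rng.random() < p_right)
        delta = outcome_values[a] - q[a]                          # prediction error
        q[a] += alpha * delta
        choices[t] = a
    return q, choices

# "Reward" can be money (1 vs 0) or abstract outcomes scalarized to the same scale:
# the learner converges on the better arm in both cases.
for label, vals in [("monetary outcomes", [0.0, 1.0]), ("abstract novel outcomes", [0.2, 0.8])]:
    q, choices = run_bandit(vals)
    print(f"{label}: learned values = {np.round(q, 2)}, P(choose better arm) = {choices.mean():.2f}")
```
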
7.
Nat Commun ; 10(1): 1073, 2019 03 06.
Article in English | MEDLINE | ID: mdl-30842581

ABSTRACT

Animals rely on learned associations to make decisions. Associations can be based on relationships between object features (e.g., the three leaflets of poison ivy leaves) and outcomes (e.g., rash). More often, outcomes are linked to multidimensional states (e.g., poison ivy is green in summer but red in spring). Feature-based reinforcement learning fails when the values of individual features depend on the other features present. One solution is to assign value to multi-featural conjunctive representations. Here, we test whether the hippocampus forms separable conjunctive representations that enable the learning of response contingencies for stimuli of the form: AB+, B-, AC-, C+. Pattern analyses on functional MRI data show that the hippocampus forms conjunctive representations that are dissociable from feature components and that these representations, along with those of cortex, influence striatal prediction errors. Our results establish a novel role for hippocampal pattern separation and conjunctive representation in reinforcement learning.


Subject(s)
Association Learning/physiology; Conditioning, Classical/physiology; Corpus Striatum/physiology; Hippocampus/physiology; Reinforcement, Psychology; Adult; Brain Mapping/methods; Corpus Striatum/diagnostic imaging; Female; Hippocampus/diagnostic imaging; Humans; Linear Models; Magnetic Resonance Imaging; Male; Models, Neurological; Young Adult
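
The AB+, B-, AC-, C+ contingencies in the abstract above (entry 7) cannot be fit by assigning a value to each feature: an exact fit would require w_B = 0 and w_A + w_B = 1 (so w_A = 1), but also w_C = 1 and w_A + w_C = 0 (so w_A = -1), a contradiction. A conjunctive code with one representation per compound fits all four outcomes trivially. The toy simulation below illustrates this point only; it is not the fMRI pattern analysis reported in the paper.

```python
# The AB+, B-, AC-, C+ problem: reward depends on feature conjunctions, so no
# assignment of values to individual features A, B, C can fit all four stimuli,
# whereas a conjunctive code (one unit per compound) solves it exactly.
# Illustrative sketch only; not the analysis reported in the paper.
import numpy as np

stimuli = {"AB": {"A", "B"}, "B": {"B"}, "AC": {"A", "C"}, "C": {"C"}}
reward  = {"AB": 1.0, "B": 0.0, "AC": 0.0, "C": 1.0}
features = ["A", "B", "C"]

def feature_code(stim):
    return np.array([float(f in stimuli[stim]) for f in features])

def conjunctive_code(stim):
    return np.array([float(s == stim) for s in stimuli])          # one unit per compound

def train(code_fn, n_units, alpha=0.1, n_epochs=500):
    w = np.zeros(n_units)
    for _ in range(n_epochs):
        for s in stimuli:
            x = code_fn(s)
            w += alpha * (reward[s] - w @ x) * x                   # delta rule
    return {s: float(w @ code_fn(s)) for s in stimuli}

print("feature-based:", {s: round(v, 2) for s, v in train(feature_code, 3).items()})
print("conjunctive  :", {s: round(v, 2) for s, v in train(conjunctive_code, 4).items()})
print("target       :", reward)
```
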
8.
J Neurosci Methods ; 317: 37-44, 2019 04 01.
Article in English | MEDLINE | ID: mdl-30664916

ABSTRACT

BACKGROUND: Reinforcement learning models provide excellent descriptions of learning in multiple species across a variety of tasks. Many researchers are interested in relating parameters of reinforcement learning models to neural measures, psychological variables, or experimental manipulations. We demonstrate that parameter identification is difficult because a range of parameter values provide fits of approximately equal quality to the data. This identification problem has a large impact on power: we show that a researcher who wants to detect a medium-sized correlation (r = .3) with 80% power between a variable and learning rate must collect 60% more subjects than specified by a typical power analysis in order to account for the noise introduced by model fitting.

NEW METHOD: We derive a Bayesian optimal model fitting technique that takes advantage of information contained in choices and reaction times to constrain parameter estimates.

RESULTS: We show using simulation and empirical data that this method substantially improves the ability to recover learning rates.

COMPARISON WITH EXISTING METHODS: We compare this method against the use of Bayesian priors. We show in simulations that the combined use of Bayesian priors and reaction times confers the highest parameter identifiability. However, in real data, where the priors may have been misspecified, the use of Bayesian priors interferes with the ability of reaction time data to improve parameter identifiability.

CONCLUSIONS: We present a simple technique that takes advantage of readily available data to substantially improve the quality of inferences that can be drawn from the parameters of reinforcement learning models.


Subject(s)
Choice Behavior; Models, Neurological; Models, Psychological; Reaction Time; Reinforcement, Psychology; Animals; Bayes Theorem; Humans
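
The abstract above (entry 8) argues that priors and reaction times can jointly constrain reinforcement-learning parameter estimates. The sketch below illustrates the general idea with maximum a posteriori fitting of a two-armed-bandit Q-learning model: a choice likelihood, weakly informative priors, and a simple linear reaction-time term. The specific priors and the RT likelihood here are assumptions chosen for illustration; they are not the likelihood derived in the paper.

```python
# MAP fitting of a Q-learning model with priors and an optional reaction-time term.
# Illustrative sketch of the general idea only (regularizing noisy parameter estimates
# with priors and RT information); the RT model used here (RT linearly related to the
# absolute value difference) and the priors are assumptions, not the paper's method.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta as beta_dist, gamma as gamma_dist, norm

def neg_log_posterior(params, choices, rewards, rts=None):
    alpha, inv_temp, rt_slope = params
    if not (0 < alpha < 1) or inv_temp <= 0:
        return np.inf
    q = np.zeros(2)
    nll = 0.0
    for t, (c, r) in enumerate(zip(choices, rewards)):
        p = 1.0 / (1.0 + np.exp(-inv_temp * (q[1] - q[0])))    # P(choose arm 1)
        nll -= np.log((p if c == 1 else 1.0 - p) + 1e-12)
        if rts is not None:                                     # simple illustrative RT likelihood
            pred_rt = 1.0 - rt_slope * abs(q[1] - q[0])
            nll -= norm.logpdf(rts[t], loc=pred_rt, scale=0.2)
        q[c] += alpha * (r - q[c])                              # prediction-error update
    # weakly informative priors (an assumption, not the paper's exact priors)
    nll -= beta_dist.logpdf(alpha, 2, 2) + gamma_dist.logpdf(inv_temp, 2, scale=2)
    return nll

def fit(choices, rewards, rts=None):
    res = minimize(neg_log_posterior, np.array([0.5, 2.0, 0.2]),
                   args=(choices, rewards, rts), method="Nelder-Mead")
    return res.x  # [learning rate, inverse temperature, RT slope]

# Example: recover parameters from synthetic data (illustrative).
rng = np.random.default_rng(1)
true_alpha, true_beta = 0.3, 3.0
q = np.zeros(2)
choices, rewards, rts = [], [], []
for _ in range(150):
    p = 1.0 / (1.0 + np.exp(-true_beta * (q[1] - q[0])))
    c = int(rng.random() < p)
    r = float(rng.random() < (0.8 if c == 1 else 0.2))
    rts.append(1.0 - 0.2 * abs(q[1] - q[0]) + 0.2 * rng.standard_normal())
    choices.append(c)
    rewards.append(r)
    q[c] += true_alpha * (r - q[c])
print("MAP estimates (alpha, inv_temp, rt_slope):", np.round(fit(choices, rewards, rts), 2))
```
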
9.
Sci Rep ; 8(1): 16545, 2018 11 08.
Article in English | MEDLINE | ID: mdl-30410093

ABSTRACT

Impulsivity refers to the tendency to insufficiently consider alternatives or to overvalue rewards that are available immediately. Impulsivity is a hallmark of human decision making with well-documented health and financial ramifications. Numerous contextual changes and framing manipulations powerfully influence impulsivity. One of the most robust such phenomena is the finding that people are more patient as the values of choice options are increased. This magnitude effect has been related to cognitive control mechanisms in the dorsolateral prefrontal cortex (dlPFC). We used repetitive transcranial magnetic stimulation (rTMS) to transiently disrupt dlPFC neural activity. This manipulation dramatically reduced the magnitude effect, establishing causal evidence that the magnitude effect depends on the dlPFC.


Subject(s)
Impulsive Behavior/physiology; Prefrontal Cortex/physiology; Transcranial Magnetic Stimulation/adverse effects; Adult; Decision Making; Female; Functional Laterality/physiology; Humans; Male; Young Adult
10.
Psychol Sci ; 28(10): 1443-1454, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28858559

ABSTRACT

Impulsivity is a variable behavioral trait that depends on numerous factors. For example, increasing the absolute magnitude of available choice options promotes farsighted decisions. We argue that this magnitude effect arises in part from differential exertion of self-control as the perceived importance of the choice increases. First, we demonstrated that frontal executive-control areas were more engaged for more difficult decisions and that this effect was enhanced for high-magnitude rewards. Second, we showed that increased hunger, which is associated with lower self-control, reduced the magnitude effect. Third, we tested an intervention designed to increase self-control and showed that it reduced the magnitude effect. Taken together, our findings challenge existing theories about the magnitude effect and suggest that visceral and cognitive factors affecting choice may do so by influencing self-control.


Subject(s)
Brain Mapping/methods; Delay Discounting/physiology; Prefrontal Cortex/physiology; Reward; Self-Control/psychology; Adult; Female; Humans; Magnetic Resonance Imaging; Male; Prefrontal Cortex/diagnostic imaging
11.
J Cogn Neurosci ; 29(5): 793-804, 2017 May.
Article in English | MEDLINE | ID: mdl-28129051

ABSTRACT

Preferences for novel stimuli tend to develop slowly over many exposures. Psychological accounts of this effect suggest that it depends on changes in the brain's valuation system. Participants consumed a novel fluid daily for 10 days and underwent fMRI on the first and last days. We hypothesized that changes in activation in areas associated with the dopamine system would accompany changes in preference. The change in activation in the ventral tegmental area (VTA) between sessions scaled with preference change. Furthermore, a network comprising the sensory thalamus, posterior insula, and ventrolateral striatum showed differential connectivity with the VTA that correlated with individual changes in preference. Our results suggest that the VTA is centrally involved in both assigning value to sensory stimuli and influencing downstream regions to translate these value signals into subjective preference. These results have important implications for models of dopaminergic function and behavioral addiction.


Subject(s)
Brain Mapping/methods; Choice Behavior/physiology; Taste Perception/physiology; Ventral Tegmental Area/physiology; Adult; Female; Humans; Magnetic Resonance Imaging; Male; Ventral Tegmental Area/diagnostic imaging; Young Adult
12.
Cereb Cortex ; 27(2): 1660-1669, 2017 02 01.
Article in English | MEDLINE | ID: mdl-26826101

ABSTRACT

The mesolimbic dopamine system contributes to a remarkable variety of behaviors at multiple timescales. Midbrain neurons have fast and slow signaling components, and specific afferent systems, such as the hippocampus (HPC) and prefrontal cortex (PFC), have been demonstrated to drive these components in anesthetized animals. Whether these interactions exist during behavior, however, is unknown. To address this question, we developed a novel analysis of human functional magnetic resonance imaging data that fits models of network excitation and inhibition to ventral tegmental area (VTA) activation. We show that specific afferent systems predict distinct temporal components of the midbrain VTA signal. We found that PFC, but not HPC, positively predicted transient, event-evoked VTA activation. In contrast, HPC, but not PFC, positively predicted slow shifts in VTA baseline variability. Thus, unique functional contributions of afferent systems to VTA physiology are detectable at the network level in behaving humans. The findings support models of dopamine function in which dissociable neural circuits support different aspects of motivated behavior via active regulation of tonic and phasic signals.


Subject(s)
Hippocampus/physiology; Nucleus Accumbens/physiology; Prefrontal Cortex/physiology; Ventral Tegmental Area/physiology; Adult; Dopamine/metabolism; Dopaminergic Neurons/metabolism; Female; Humans; Male; Neural Pathways/physiology; Young Adult
13.
Learn Mem ; 20(4): 229-35, 2013 Mar 19.
Article in English | MEDLINE | ID: mdl-23512939

ABSTRACT

Novelty detection, a critical computation within the medial temporal lobe (MTL) memory system, necessarily depends on prior experience. The current study used functional magnetic resonance imaging (fMRI) in humans to investigate dynamic changes in MTL activation and functional connectivity as experience with novelty accumulates. fMRI data were collected during a target detection task: Participants monitored a series of trial-unique novel and familiar scene images to detect a repeating target scene. Even though novel images themselves did not repeat, we found that fMRI activations in the hippocampus and surrounding cortical MTL showed a specific, decrementing response with accumulating exposure to novelty. The significant linear decrement occurred for the novel but not the familiar images, and behavioral measures ruled out a corresponding decline in vigilance. Additionally, early in the series, the hippocampus was inversely coupled with the dorsal striatum, lateral and medial prefrontal cortex, and posterior visual processing regions; this inverse coupling also habituated as novelty accumulated. This novel demonstration of a dynamic adjustment in neural responses to novelty suggests a similarly dynamic allocation of neural resources based on recent experience.


Subject(s)
Brain Mapping; Exploratory Behavior/physiology; Habituation, Psychophysiologic/physiology; Hippocampus/physiology; Nerve Net/physiology; Recognition, Psychology/physiology; Adult; Analysis of Variance; Female; Hippocampus/blood supply; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Models, Statistical; Nerve Net/blood supply; Oxygen/blood; Reaction Time/physiology; Temporal Lobe/blood supply; Temporal Lobe/physiology; Young Adult
14.
Front Neurosci ; 5: 126, 2011.
Article in English | MEDLINE | ID: mdl-22110424

ABSTRACT

Intertemporal choices are a ubiquitous class of decisions that involve selecting between outcomes available at different times in the future. We investigated the neural systems supporting intertemporal decisions in healthy younger and older adults. Using functional neuroimaging, we find that aging is associated with a shift in the brain areas that respond to delayed rewards. Although we replicate findings that brain regions associated with the mesolimbic dopamine system respond preferentially to immediate rewards, we find a separate region in the ventral striatum with very modest time dependence in older adults. Activation in this striatal region was relatively insensitive to delay in older but not younger adults. Since the dopamine system is believed to support associative learning about future rewards over time, our observed transfer of function may be due to greater experience with delayed rewards as people age. Identifying differences in the neural systems underlying these decisions may contribute to a more comprehensive model of age-related change in intertemporal choice.

15.
J Neurosci ; 31(28): 10340-6, 2011 Jul 13.
Article in English | MEDLINE | ID: mdl-21753011

ABSTRACT

How does the brain translate information signaling potential rewards into motivation to get them? Motivation to obtain reward is thought to depend on the midbrain [particularly the ventral tegmental area (VTA)], the nucleus accumbens (NAcc), and the dorsolateral prefrontal cortex (dlPFC), but it is not clear how the interactions among these regions relate to reward-motivated behavior. To study the influence of motivation on these reward-responsive regions and on their interactions, we used dynamic causal modeling to analyze functional magnetic resonance imaging (fMRI) data from humans performing a simple task designed to isolate reward anticipation. The use of fMRI permitted the simultaneous measurement of multiple brain regions while human participants anticipated and prepared for opportunities to obtain reward, thus allowing characterization of how information about reward changes physiology underlying motivational drive. Furthermore, we modeled the impact of external reward cues on causal relationships within this network, thus elaborating a link between physiology, connectivity, and motivation. Specifically, our results indicated that dlPFC was the exclusive entry point of information about reward in this network, and that anticipated reward availability caused VTA activation only via its effect on the dlPFC. Anticipated reward thus increased dlPFC activation directly, whereas it influenced VTA and NAcc only indirectly, by enhancing intrinsically weak or inactive pathways from the dlPFC. Our findings of a directional prefrontal influence on dopaminergic regions during reward anticipation suggest a model in which the dlPFC integrates and transmits representations of reward to the mesolimbic and mesocortical dopamine systems, thereby initiating motivated behavior.


Subject(s)
Dopamine/metabolism; Motivation; Neurons/metabolism; Nucleus Accumbens/physiology; Prefrontal Cortex/physiology; Reward; Ventral Tegmental Area/physiology; Adult; Brain Mapping; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Neural Pathways/physiology; Nucleus Accumbens/metabolism; Prefrontal Cortex/metabolism; Ventral Tegmental Area/metabolism