Results 1 - 10 of 10
1.
Nat Hum Behav ; 3(11): 1215-1224, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31501543

ABSTRACT

A fundamental but rarely contested assumption in economics and neuroeconomics is that decision-makers compute subjective values of risky options by multiplying functions of reward probability and magnitude. By contrast, an additive strategy for valuation allows flexible combination of reward information required in uncertain or changing environments. We hypothesized that the level of uncertainty in the reward environment should determine the strategy used for valuation and choice. To test this hypothesis, we examined choice between risky options in humans and rhesus macaques across three tasks with different levels of uncertainty. We found that whereas humans and monkeys adopted a multiplicative strategy under risk when probabilities are known, both species spontaneously adopted an additive strategy under uncertainty when probabilities must be learned. Additionally, the level of volatility influenced relative weighting of certain and uncertain reward information, and this was reflected in the encoding of reward magnitude by neurons in the dorsolateral prefrontal cortex.
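The two valuation strategies contrasted in this abstract can be sketched directly; the weighting parameters below (`w_p`, `w_m`) are hypothetical illustrations, not values estimated in the paper:

```python
def multiplicative_value(p, m):
    """Subjective value as probability times magnitude (expected value),
    the classical assumption in economics and neuroeconomics."""
    return p * m

def additive_value(p, m, w_p=0.5, w_m=0.5):
    """Subjective value as a weighted sum of probability and magnitude,
    which lets each attribute be weighted independently."""
    return w_p * p + w_m * m

# Two risky options described as (reward probability, reward magnitude).
safe_option = (0.8, 0.5)    # likely but small reward
risky_option = (0.4, 1.0)   # unlikely but large reward

# Under the multiplicative rule the options tie (0.4 each); under the
# additive rule with equal weights the risky option is preferred.
mult = (multiplicative_value(*safe_option), multiplicative_value(*risky_option))
add = (additive_value(*safe_option), additive_value(*risky_option))
print(mult, add)
```

Weighting probability and magnitude separately lets a decision-maker down-weight whichever attribute it is less sure about, which is the flexibility the abstract attributes to the additive strategy under uncertainty.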


Subject(s)
Comprehension; Reward; Uncertainty; Adolescent; Animals; Choice Behavior; Decision Making; Female; Humans; Macaca mulatta/psychology; Male; Probability; Risk; Young Adult
2.
Elife ; 7, 2018 Oct 08.
Article in English | MEDLINE | ID: mdl-30295606

ABSTRACT

Reinforcement has long been thought to require striatal synaptic plasticity. Indeed, direct striatal manipulations such as self-stimulation of direct-pathway projection neurons (dMSNs) are sufficient to induce reinforcement within minutes. However, it is unclear what role, if any, downstream circuitry plays. Here, we used dMSN self-stimulation in mice as a model for striatum-driven reinforcement and mapped the underlying circuitry across multiple basal ganglia nuclei and output targets. We found that mimicking the effects of dMSN activation on downstream circuitry, through optogenetic suppression of the basal ganglia output nucleus substantia nigra reticulata (SNr) or activation of SNr targets in the brainstem or thalamus, was also sufficient to drive rapid reinforcement. Remarkably, silencing the motor thalamus, but not other selected targets of SNr, was the only manipulation that reduced dMSN-driven reinforcement. Together, these results point to an unexpected role for basal ganglia output to motor thalamus in striatum-driven reinforcement.


Subject(s)
Motor Activity/physiology; Neostriatum/physiology; Reinforcement, Psychology; Thalamus/physiology; Animals; Basal Ganglia/physiology; Electric Stimulation; Female; Glutamates/metabolism; Male; Mice; Optogenetics; Receptors, N-Methyl-D-Aspartate/metabolism; Serotonergic Neurons/metabolism; Synaptic Transmission/physiology
3.
Neuron ; 99(3): 598-608.e4, 2018 Aug 08.
Article in English | MEDLINE | ID: mdl-30033151

ABSTRACT

Adaptation of learning and decision-making might depend on the regulation of activity in the prefrontal cortex. Here we examined how volatility of reward probabilities influences learning and neural activity in the primate prefrontal cortex. We found that animals selected recently rewarded targets more often when reward probabilities of different options fluctuated across trials than when they were fixed. Additionally, neurons in the orbitofrontal cortex displayed more sustained activity related to the outcomes of their previous choices when reward probabilities changed over time. Such volatility also enhanced signals in the dorsolateral prefrontal cortex related to the current but not the previous location of the previously rewarded target. These results suggest that prefrontal activity related to choice and reward is dynamically regulated by the volatility of the environment and underscore the role of the prefrontal cortex in identifying aspects of the environment that are responsible for previous outcomes and should be learned.


Subject(s)
Decision Making/physiology; Learning/physiology; Prefrontal Cortex/physiology; Reward; Animals; Macaca mulatta; Male
4.
Neuron ; 94(2): 401-414.e6, 2017 Apr 19.
Article in English | MEDLINE | ID: mdl-28426971

ABSTRACT

Value-based decision making often involves integration of reward outcomes over time, but this becomes considerably more challenging if reward assignments on alternative options are probabilistic and non-stationary. Despite the existence of various models for optimally integrating reward under uncertainty, the underlying neural mechanisms are still unknown. Here we propose that reward-dependent metaplasticity (RDMP) can provide a plausible mechanism for both integration of reward under uncertainty and estimation of uncertainty itself. We show that a model based on RDMP can robustly perform the probabilistic reversal learning task via dynamic adjustment of learning based on reward feedback, while changes in its activity signal unexpected uncertainty. The model predicts time-dependent and choice-specific learning rates that strongly depend on reward history. Key predictions from this model were confirmed with behavioral data from non-human primates. Overall, our results suggest that metaplasticity can provide a neural substrate for adaptive learning and choice under uncertainty.
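The abstract's central idea, learning rates that adjust dynamically with reward history, can be illustrated with a simple surprise-modulated learner on a probabilistic reversal task. This is a Pearce-Hall-style stand-in, not the paper's RDMP model, and every parameter below is invented for illustration:

```python
import random

def reversal_learning(n_trials=400, seed=1):
    """Two-armed probabilistic reversal task: the better arm pays off with
    probability 0.8 and the arms swap halfway through. The learning rate
    alpha grows after surprising outcomes and decays otherwise."""
    random.seed(seed)
    p_reward = [0.8, 0.2]            # reward probabilities; reversed mid-task
    value = [0.5, 0.5]               # estimated value of each option
    alpha = 0.3                      # learning rate, adapted by surprise
    correct = 0
    for t in range(n_trials):
        if t == n_trials // 2:       # unsignalled reversal of contingencies
            p_reward.reverse()
        choice = 0 if value[0] >= value[1] else 1   # greedy choice
        reward = 1.0 if random.random() < p_reward[choice] else 0.0
        delta = reward - value[choice]              # prediction error
        value[choice] += alpha * delta
        alpha = 0.9 * alpha + 0.1 * abs(delta)      # big surprises -> faster learning
        if p_reward[choice] > 0.5:
            correct += 1
    return correct / n_trials

print(reversal_learning())
```

Because large prediction errors transiently raise the learning rate, the learner adapts quickly after the reversal and settles down once the environment is stable again, the same qualitative behavior the abstract describes for reward-dependent adjustment of learning.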


Subject(s)
Adaptation, Psychological/physiology; Brain/physiology; Choice Behavior/physiology; Reversal Learning/physiology; Uncertainty; Animals; Behavior, Animal; Macaca mulatta; Male; Neuronal Plasticity
5.
Neuron ; 88(2): 240-1, 2015 Oct 21.
Article in English | MEDLINE | ID: mdl-26494272

ABSTRACT

In this issue of Neuron, Sippy et al. (2015) provide the clearest evidence to date that information is differentially encoded in the direct and indirect pathways of the striatum. The results support the classical notion that the direct pathway plays a critical role in initiating actions.


Subject(s)
Corpus Striatum/cytology; Corpus Striatum/physiology; Goals; Neurons/physiology; Psychomotor Performance/physiology; Reward; Animals
6.
Nat Neurosci ; 18(2): 295-301, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25581364

ABSTRACT

Neurons in the dorsolateral prefrontal cortex (DLPFC) encode a diverse array of sensory and mnemonic signals, but little is known about how this information is dynamically routed during decision making. We analyzed the neuronal activity in the DLPFC of monkeys performing a probabilistic reversal task where information about the probability and magnitude of reward was provided by the target color and numerical cues, respectively. The location of the target of a given color was randomized across trials and therefore was not relevant for subsequent choices. DLPFC neurons encoded signals related to both task-relevant and irrelevant features, but only task-relevant mnemonic signals were encoded congruently with choice signals. Furthermore, only the task-relevant signals related to previous events were more robustly encoded following rewarded outcomes. Thus, multiple types of neural signals are flexibly routed in the DLPFC so as to favor actions that maximize reward.


Subject(s)
Choice Behavior/physiology; Memory, Short-Term/physiology; Prefrontal Cortex/physiology; Reward; Animals; Behavior, Animal/physiology; Macaca mulatta; Male; Prefrontal Cortex/cytology; Probability Learning; Psychomotor Performance/physiology; Visual Perception/physiology
7.
Science ; 346(6207): 340-3, 2014 Oct 17.
Article in English | MEDLINE | ID: mdl-25236468

ABSTRACT

Although human and animal behaviors are largely shaped by reinforcement and punishment, choices in social settings are also influenced by information about the knowledge and experience of other decision-makers. During competitive games, monkeys increased their payoffs by systematically deviating from a simple heuristic learning algorithm and thereby countering the predictable exploitation by their computer opponent. Neurons in the dorsomedial prefrontal cortex (dmPFC) signaled the animal's recent choice and reward history that reflected the computer's exploitative strategy. The strength of switching signals in the dmPFC also correlated with the animal's tendency to deviate from the heuristic learning algorithm. Therefore, the dmPFC might provide control signals for overriding simple heuristic learning algorithms based on the inferred strategies of the opponent.
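Why deviating from a predictable heuristic pays off in a competitive game can be shown with a toy matching-pennies simulation. This is an illustrative sketch under invented assumptions (a win-stay/lose-shift player against an opponent that perfectly predicts and counters the heuristic), not the task code or the learning model from the paper:

```python
import random

def play_matching_pennies(n_trials=1000, p_follow_heuristic=1.0, seed=0):
    """The player wins when the two choices match. The opponent predicts
    the player's win-stay/lose-shift move and plays the opposite, so a
    fully predictable player never wins; random deviations recover payoff."""
    random.seed(seed)
    wins = 0
    last_choice, last_won = random.randint(0, 1), False
    for _ in range(n_trials):
        # Win-stay/lose-shift: repeat after a win, switch after a loss.
        heuristic_move = last_choice if last_won else 1 - last_choice
        if random.random() < p_follow_heuristic:
            choice = heuristic_move
        else:
            choice = random.randint(0, 1)     # unpredictable deviation
        opponent = 1 - heuristic_move         # opponent counters the prediction
        won = choice == opponent
        if won:
            wins += 1
        last_choice, last_won = choice, won
    return wins / n_trials

print(play_matching_pennies(p_follow_heuristic=1.0))  # fully exploited: 0.0
print(play_matching_pennies(p_follow_heuristic=0.5))  # deviating recovers payoff
```

Against such an exploitative opponent, every deviated trial is won with probability one half, so the payoff rises with the deviation rate, mirroring the abstract's observation that monkeys increased their payoffs by systematically deviating from the heuristic.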


Subject(s)
Competitive Behavior; Games, Experimental; Learning/physiology; Neurons/physiology; Prefrontal Cortex/physiology; Algorithms; Animals; Decision Making; Macaca mulatta; Prefrontal Cortex/cytology; Reward; Video Games
8.
Neuron ; 80(1): 223-34, 2013 Oct 02.
Article in English | MEDLINE | ID: mdl-24012280

ABSTRACT

In stable environments, decision makers can exploit their previously learned strategies for optimal outcomes, while exploration might lead to better options in unstable environments. Here, to investigate the cortical contributions to exploratory behavior, we analyzed single-neuron activity recorded from four different cortical areas of monkeys performing a matching-pennies task and a visual search task, which encouraged and discouraged exploration, respectively. We found that neurons in multiple regions in the frontal and parietal cortex tended to encode signals related to previously rewarded actions more reliably than unrewarded actions. In addition, signals for rewarded choices in the supplementary eye field were attenuated during the visual search task and were correlated with the tendency to switch choices during the matching-pennies task. These results suggest that the supplementary eye field might play a unique role in encouraging animals to explore alternative decision-making strategies.


Subject(s)
Motor Cortex/physiology; Neurons/physiology; Reward; Action Potentials/physiology; Animals; Behavior, Animal; Choice Behavior/physiology; Female; Macaca mulatta; Male; Psychomotor Performance/physiology; Saccades/physiology; Visual Fields/physiology
9.
J Neural Transm (Vienna) ; 117(2): 217-25, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20013008

ABSTRACT

As a part of a larger study of normal aging and Alzheimer's disease (AD), which included patients with mild cognitive impairment (MCI), we investigated the response to median nerve stimulation in primary and secondary somatosensory areas. We hypothesized that the somatosensory response would be relatively spared given the reported late involvement of sensory areas in the progression of AD. We applied brief pulses of electric current to the left and right median nerves to test the somatosensory response in normal elderly (NE), MCI, and AD groups. MEG responses were measured and analyzed with a semi-automated source localization algorithm to characterize source locations and timecourses. We found an overall difference in the amplitude of the response of the primary somatosensory source (SI) based on diagnosis. Across the first three peaks of the SI response, the MCI patients exhibited a larger amplitude response than the NE and AD groups (P < 0.03). Relationships between neuropsychological measures and SI amplitude were also determined. There was no significant difference in amplitude for the contralateral secondary somatosensory source across diagnostic categories. These results suggest that somatosensory cortex is affected early in the progression of AD, which may have consequences for behavioral and functional measures.


Subject(s)
Aging/physiology; Alzheimer Disease/physiopathology; Cognition Disorders/physiopathology; Somatosensory Cortex/physiopathology; Touch Perception/physiology; Aged; Aged, 80 and over; Algorithms; Automation; Electric Stimulation; Evoked Potentials, Somatosensory; Female; Humans; Magnetoencephalography; Male; Median Nerve/physiopathology; Middle Aged; Neuropsychological Tests; Signal Processing, Computer-Assisted; Time Factors