1.
Front Behav Neurosci ; 17: 1297293, 2023.
Article in English | MEDLINE | ID: mdl-38053922

ABSTRACT

Adolescence is a time of heightened risk-taking across species. Salient audiovisual cues associated with rewards are a common feature of gambling environments and have been connected to increased risky decision-making. We have previously shown that, in adult male rats, sign tracking - a behavioral measure of cue reactivity - predicts an individual's propensity for suboptimal risky choices in a rodent gambling task (rGT) with win-paired cues. However, adolescents perform less sign tracking than adult animals, suggesting that they are less cue-reactive than adults in some circumstances. Therefore, we investigated the performance of adolescent male rats on the rGT with win cues and examined its relationship with their sign-tracking behavior. We found that adolescents make more risky choices and fewer optimal choices on the rGT compared with adults, evidence of the validity of the rGT as a model of adolescent gambling behavior. We also confirmed that adolescents perform less sign tracking than adults, and we found that, unlike in adults, adolescents' sign tracking was unrelated to their risk-taking in the rGT. This implies that adolescent risk-taking is less likely than that of adults to be driven by reward-related cues. Finally, we found that adults trained on the rGT as adolescents retained an adolescent-like propensity toward risky choices, suggesting that early exposure to a gambling environment may have a long-lasting impact on risk-taking behavior.

2.
eNeuro ; 10(9)2023 09.
Article in English | MEDLINE | ID: mdl-37643864

ABSTRACT

When a Pavlovian cue is presented separately from its associated reward, some animals will acquire a sign tracking (ST) response - approach and/or interaction with the cue - while others will acquire a goal tracking response - approach to the site of reward. We have previously shown that cue-evoked excitations in the nucleus accumbens (NAc) encode the vigor of both behaviors; in contrast, reward-related responses diverge over the course of training, possibly reflecting neurochemical differences between sign tracker and goal tracker individuals. However, a substantial subset of neurons in the NAc exhibit inhibitory, rather than excitatory, cue-evoked responses, and the evolution of their signaling during Pavlovian conditioning remains unknown. Using single-neuron recordings in behaving rats, we show that NAc neurons with cue-evoked inhibitions have distinct coding properties from neurons with cue-evoked excitations. Cue-evoked inhibitions become more numerous over the course of training and, like excitations, may encode the vigor of sign tracking and goal tracking behavior. However, the responses of cue-inhibited neurons do not evolve differently between sign tracker and goal tracker individuals. Moreover, cue-evoked inhibitions, unlike excitations, are insensitive to extinction of the cue-reward relationship. Finally, we show that cue-evoked excitations are greatly diminished by reward devaluation, while inhibitory cue responses are virtually unaffected. Overall, these findings converge with existing evidence that cue-excited neurons in NAc, but not cue-inhibited neurons, are profoundly sensitive to the same behavior variations that are often associated with changes in dopamine release.


Subject(s)
Goals , Nucleus Accumbens , Animals , Rats , Conditioning, Classical , Dopamine , Inhibition, Psychological
3.
Psychopharmacology (Berl) ; 238(9): 2645-2660, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34191111

ABSTRACT

RATIONALE: Reward-associated cues can promote maladaptive behavior, including risky decision-making in a gambling setting. A propensity for sign tracking over goal tracking - i.e., interaction with a reward-predictive cue rather than the site of reward - demonstrates an individual's tendency to transfer motivational value to a cue. However, the relationship of sign tracking to risky decision-making remains unclear.

OBJECTIVES: To determine whether sign tracking predicts risky choice, we used a Pavlovian conditioned approach task to evaluate the tendency of male rats to sign track to a lever cue and then trained rats on a rodent gambling task (rGT) with win-associated cues. We also tested the effects of D-amphetamine, quinpirole (a D2/D3 receptor agonist), and PD128907 (a D3 receptor agonist) on gambling behavior in sign tracker and goal tracker individuals.

RESULTS: Increased sign tracking relative to goal tracking was associated with suboptimal performance on the rGT, including decreased selection of the optimal choice, increased selection of a high-risk/high-reward option, and increased impulsive premature choices. Amphetamine increased choices of a low-risk/low-reward option at the expense of optimal and high-risk choices, whereas quinpirole and PD128907 had little effect on choice allocation but reduced impulsivity. Drug effects were similar across sign tracker and goal tracker individuals.

CONCLUSIONS: Cue reactivity, as measured by sign tracking, is predictive of, and may be an important driver of, risky and impulsive choices in a gambling setting laden with salient audiovisual cues. Evaluating an individual's sign tracking behavior may be an avenue to predict vulnerability to pathological gambling and the efficacy of treatments.


Subject(s)
Gambling , Animals , Choice Behavior , Cues , Impulsive Behavior , Male , Rats , Reward , Rodentia
4.
eNeuro ; 6(2)2019.
Article in English | MEDLINE | ID: mdl-30886890

ABSTRACT

During Pavlovian conditioning, if a cue (e.g., lever extension) predicts reward delivery in a different location (e.g., a food magazine), some individuals will come to approach and interact with the cue, a behavior known as sign tracking (ST), and others will approach the site of reward, a behavior known as goal tracking (GT). In rats, the acquisition of ST versus GT behavior is associated with distinct profiles of dopamine release in the nucleus accumbens (NAc), but it is unknown whether it is associated with different patterns of accumbens neural activity. Therefore, we recorded from individual neurons in the NAc core during the acquisition, maintenance, and extinction of ST and GT behavior. Even though NAc dopamine is specifically important for the acquisition and expression of ST, we found that cue-evoked excitatory responses encode the vigor of both ST and GT behavior. In contrast, among sign trackers only, there was a prominent decrease in reward-related activity over the course of training, which may reflect the decreasing reward prediction error encoded by phasic dopamine. Finally, both behavior and cue-evoked activity were relatively resistant to extinction in sign trackers, as compared with goal trackers, although a subset of neurons in both groups retained their cue-evoked responses. Overall, the results point to the convergence of multiple forms of reward learning in the NAc.


Subject(s)
Behavior, Animal/physiology , Cues , Goals , Learning/physiology , Nucleus Accumbens/physiology , Animals , Conditioning, Classical , Male , Rats , Rats, Long-Evans , Reward
5.
Front Behav Neurosci ; 13: 291, 2019.
Article in English | MEDLINE | ID: mdl-31992975

ABSTRACT

When a cue is paired with reward in a different location, some animals will approach the site of reward during the cue, a behavior called goal tracking, while other animals will approach and interact with the cue itself: a behavior called sign tracking. Sign tracking is thought to reflect a tendency to transfer incentive salience from the reward to the cue. Adolescence is a time of heightened sensitivity to rewards, including environmental cues that have been associated with rewards, which may account for increased impulsivity and vulnerability to drug abuse. Surprisingly, however, studies have shown that adolescents are actually less likely to interact with the cue (i.e., sign track) than adult animals. We reasoned that adolescents might show decreased sign tracking, accompanied by increased apparent goal tracking, because they tend to attribute incentive salience to a more reward-proximal "cue": the food magazine. On the other hand, adolescence is also a time of enhanced exploratory behavior, novelty-seeking, and behavioral flexibility. Therefore, adolescents might truly express more goal-directed reward-seeking and less inflexible habit-like approach to a reward-associated cue. Using a reward devaluation procedure to distinguish between these two hypotheses, we found that adolescents indeed exhibit more goal tracking, and less sign tracking, than a comparable group of adults. Moreover, adolescents' goal tracking behavior is highly sensitive to reward devaluation and therefore goal-directed and not habit-like.

6.
J Neurophysiol ; 118(5): 2549-2567, 2017 11 01.
Article in English | MEDLINE | ID: mdl-28794196

ABSTRACT

The nucleus accumbens (NAc) has often been described as a "limbic-motor interface," implying that the NAc integrates the value of expected rewards with the motor planning required to obtain them. However, there is little direct evidence that the signaling of individual NAc neurons combines information about predicted reward and behavioral response. We report that cue-evoked neural responses in the NAc form a likely physiological substrate for its limbic-motor integration function. Across task contexts, individual NAc neurons in behaving rats robustly encode the reward-predictive qualities of a cue, as well as the probability of behavioral response to the cue, as coexisting components of the neural signal. In addition, cue-evoked activity encodes spatial and locomotor aspects of the behavioral response, including proximity to a reward-associated target and the latency and speed of approach to the target. Notably, there are important limits to the ability of NAc neurons to integrate motivational information into behavior: in particular, updating of predicted reward value appears to occur on a relatively long timescale, since NAc neurons fail to discriminate between cues with reward associations that change frequently. Overall, these findings suggest that NAc cue-evoked signals, including inhibition of firing (as noted here for the first time), provide a mechanism for linking reward prediction and other motivationally relevant factors, such as spatial proximity, to the probability and vigor of a reward-seeking behavioral response.

NEW & NOTEWORTHY: The nucleus accumbens (NAc) is thought to link expected rewards and action planning, but evidence for this idea remains sparse. We show that, across contexts, both excitatory and inhibitory cue-evoked activity in the NAc jointly encode reward prediction and probability of behavioral responding to the cue, as well as spatial and locomotor properties of the response. Interestingly, although spatial information in the NAc is updated quickly, fine-grained updating of reward value occurs over a longer timescale.


Subject(s)
Evoked Potentials, Motor , Limbic System/physiology , Neural Inhibition , Nucleus Accumbens/physiology , Animals , Cues , Limbic System/cytology , Male , Neurons/physiology , Nucleus Accumbens/cytology , Rats , Rats, Long-Evans , Reaction Time , Reward
7.
Front Neurosci ; 9: 468, 2015.
Article in English | MEDLINE | ID: mdl-26733783

ABSTRACT

During Pavlovian conditioning, a conditioned stimulus (CS) may act as a predictor of a reward to be delivered in another location. Individuals vary widely in their propensity to engage with the CS (sign tracking) or with the site of eventual reward (goal tracking). It is often assumed that sign tracking involves the association of the CS with the motivational value of the reward, resulting in the CS acquiring incentive value independent of the outcome. However, experimental evidence for this assumption is lacking. In order to test the hypothesis that sign tracking behavior does not rely on a neural representation of the outcome, we employed a reward devaluation procedure. We trained rats on a classic Pavlovian paradigm in which a lever CS was paired with a sucrose reward, then devalued the reward by pairing sucrose with illness in the absence of the CS. We found that sign tracking behavior was enhanced, rather than diminished, following reward devaluation; thus, sign tracking is clearly independent of a representation of the outcome. In contrast, goal tracking behavior was decreased by reward devaluation. Furthermore, when we divided rats into those with high propensity to engage with the lever (sign trackers) and low propensity to engage with the lever (goal trackers), we found that nearly all of the effects of devaluation could be attributed to the goal trackers. These results show that sign tracking and goal tracking behavior may be the output of different associative structures in the brain, providing insight into the mechanisms by which reward-associated stimuli-such as drug cues-come to exert control over behavior in some individuals.

8.
J Neurosci ; 34(42): 14147-62, 2014 Oct 15.
Article in English | MEDLINE | ID: mdl-25319709

ABSTRACT

Both animals and humans often prefer rewarding options that are nearby over those that are distant, but the neural mechanisms underlying this bias are unclear. Here we present evidence that a proximity signal encoded by neurons in the nucleus accumbens drives proximate reward bias by promoting impulsive approach to nearby reward-associated objects. On a novel decision-making task, rats chose the nearer option even when it resulted in greater effort expenditure and delay to reward; therefore, proximate reward bias was unlikely to be caused by effort or delay discounting. The activity of individual neurons in the nucleus accumbens did not consistently encode the reward or effort associated with specific alternatives, suggesting that it does not participate in weighing the values of options. In contrast, proximity encoding was consistent and did not depend on the subsequent choice, implying that accumbens activity drives approach to the nearest rewarding option regardless of its specific associated reward size or effort level.


Subject(s)
Choice Behavior/physiology , Decision Making/physiology , Neurons/physiology , Nucleus Accumbens/physiology , Reward , Spatial Behavior/physiology , Animals , Male , Rats , Rats, Long-Evans
9.
J Neurosci ; 33(2): 722-33, 2013 Jan 09.
Article in English | MEDLINE | ID: mdl-23303950

ABSTRACT

Recent electrophysiological studies on the primate amygdala have advanced our understanding of how individual neurons encode information relevant to emotional processes, but it remains unclear how these neurons are functionally and anatomically organized. To address this, we analyzed cross-correlograms of amygdala spike trains recorded during a task in which monkeys learned to associate novel images with rewarding and aversive outcomes. Using this task, we have recently described two populations of amygdala neurons: one that responds more strongly to images predicting reward (positive value-coding), and another that responds more strongly to images predicting an aversive stimulus (negative value-coding). Here, we report that these neural populations are organized into distinct, but anatomically intermingled, appetitive and aversive functional circuits, which are dynamically modulated as animals used the images to predict outcomes. Furthermore, we report that responses to sensory stimuli are prevalent in the lateral amygdala, and are also prevalent in the medial amygdala for sensory stimuli that are emotionally significant. The circuits identified here could potentially mediate valence-specific emotional behaviors thought to involve the amygdala.


Subject(s)
Amygdala/anatomy & histology , Amygdala/physiology , Nerve Net/anatomy & histology , Nerve Net/physiology , Animals , Behavior, Animal/physiology , Conditioning, Operant/drug effects , Conditioning, Operant/physiology , Emotions/physiology , Fixation, Ocular , Macaca mulatta , Male , Neurons/physiology , Photic Stimulation , Reinforcement, Psychology , Reward , Sensation/physiology
10.
Front Neurosci ; 6: 170, 2012.
Article in English | MEDLINE | ID: mdl-23189037

ABSTRACT

Decision-making often involves using sensory cues to predict possible rewarding or punishing reinforcement outcomes before selecting a course of action. Recent work has revealed complexity in how the brain learns to predict rewards and punishments. Analysis of neural signaling during and after learning in the amygdala and orbitofrontal cortex, two brain areas that process appetitive and aversive stimuli, reveals a dynamic relationship between appetitive and aversive circuits. Specifically, the relationship between signaling in appetitive and aversive circuits in these areas shifts as a function of learning. Furthermore, although appetitive and aversive circuits may often drive opposite behaviors - approaching or avoiding reinforcement depending upon its valence - these circuits can also drive similar behaviors, such as enhanced arousal or attention; these processes also may influence choice behavior. These data highlight the formidable challenges ahead in dissecting how appetitive and aversive neural circuits interact to produce a complex and nuanced range of behaviors.

11.
Ann N Y Acad Sci ; 1239: 59-70, 2011 Dec.
Article in English | MEDLINE | ID: mdl-22145876

ABSTRACT

Individuals weigh information about both rewarding and aversive stimuli to make adaptive decisions. Most studies of the orbitofrontal cortex (OFC), an area where appetitive and aversive neural subsystems might interact, have focused only on reward. Using a classical conditioning task where novel stimuli are paired with a reward or an aversive air puff, we discovered that two groups of orbitofrontal neurons respond preferentially to conditioned stimuli associated with rewarding and aversive outcomes; however, information about appetitive and aversive stimuli converges on individual neurons from both populations. Therefore, neurons in the OFC might participate in appetitive and aversive networks that track the motivational significance of stimuli even when they vary in valence and sensory modality. Further, we show that these networks, which also extend to the amygdala, exhibit different rates of change during reversal learning. Thus, although both networks represent appetitive and aversive associations, their distinct temporal dynamics might indicate different roles in learning processes.


Subject(s)
Appetitive Behavior , Avoidance Learning , Brain Mapping , Frontal Lobe/physiology , Neurons/physiology , Animals , Behavior, Animal , Decision Making , Humans , Models, Psychological , Physiology, Comparative/methods , Primates , Reinforcement, Psychology , Reversal Learning , Social Values
12.
Neuron ; 71(6): 1127-40, 2011 Sep 22.
Article in English | MEDLINE | ID: mdl-21943608

ABSTRACT

The orbitofrontal cortex (OFC) and amygdala are thought to participate in reversal learning, a process in which cue-outcome associations are switched. However, current theories disagree on whether OFC directs reversal learning in the amygdala. Here, we show that during reversal of cues' associations with rewarding and aversive outcomes, neurons that respond preferentially to stimuli predicting aversive events update more quickly in amygdala than OFC; meanwhile, OFC neurons that respond preferentially to reward-predicting stimuli update more quickly than those in the amygdala. After learning, however, OFC consistently differentiates between impending reinforcements with a shorter latency than the amygdala. Finally, analysis of local field potentials (LFPs) reveals a disproportionate influence of OFC on amygdala that emerges after learning. We propose that reversal learning is supported by complex interactions between neural circuits spanning the amygdala and OFC, rather than directed by any single structure.


Subject(s)
Amygdala/physiology , Learning/physiology , Prefrontal Cortex/physiology , Reversal Learning/physiology , Amygdala/anatomy & histology , Animals , Behavior, Animal/physiology , Cues , Electrophysiology , Haplorhini , Neurons/physiology , Neuropsychological Tests , Prefrontal Cortex/anatomy & histology , Reinforcement, Psychology , Time Factors
13.
Curr Opin Neurobiol ; 20(2): 221-30, 2010 Apr.
Article in English | MEDLINE | ID: mdl-20299204

ABSTRACT

Recent advances indicate that the amygdala represents valence: a general appetitive/aversive affective characteristic that bears similarity to the neuroeconomic concept of value. Neurophysiological studies show that individual amygdala neurons respond differentially to a range of stimuli with positive or negative affective significance. Meanwhile, increasingly specific lesion/inactivation studies reveal that the amygdala is necessary for processes - for example, fear extinction and reinforcer devaluation - that involve updating representations of value. Furthermore, recent neuroimaging studies suggest that the human amygdala mediates performance on many reward-based decision-making tasks. The encoding of affective significance by the amygdala might be best described as a representation of state value - a representation that is useful for coordinating physiological, behavioral, and cognitive responses in an affective/emotional context.


Subject(s)
Affect/physiology , Amygdala/physiology , Cognition/physiology , Emotions/physiology , Judgment/physiology , Amygdala/anatomy & histology , Animals , Decision Making/physiology , Fear/physiology , Humans , Models, Animal , Motivation , Reward
14.
Neuroimage ; 52(3): 833-47, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20100580

ABSTRACT

Complex tasks often require the memory of recent events, the knowledge about the context in which they occur, and the goals we intend to reach. All this information is stored in our mental states. Given a set of mental states, reinforcement learning (RL) algorithms predict the optimal policy that maximizes future reward. RL algorithms assign a value to each already-known state so that discovering the optimal policy reduces to selecting the action leading to the state with the highest value. But how does the brain create representations of these mental states in the first place? We propose a mechanism for the creation of mental states that contain information about the temporal statistics of the events in a particular context. We suggest that the mental states are represented by stable patterns of reverberating activity, which are attractors of the neural dynamics. These representations are built from neurons that are selective to specific combinations of external events (e.g. sensory stimuli) and pre-existent mental states. Consistent with this notion, we find that neurons in the amygdala and in orbitofrontal cortex (OFC) often exhibit this form of mixed selectivity. We propose that activating different mixed selectivity neurons in a fixed temporal order modifies synaptic connections so that conjunctions of events and mental states merge into a single pattern of reverberating activity. This process corresponds to the birth of a new, different mental state that encodes a different temporal context. The concretion process depends on temporal contiguity, i.e. on the probability that a combination of an event and mental states follows or precedes the events and states that define a certain context. The information contained in the context thereby allows an animal to assign unambiguously a value to the events that initially appeared in different situations with different meanings.
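The abstract above describes the standard reinforcement learning setup it builds on: once every known state has a value, the optimal policy reduces to choosing the action that leads to the highest-valued state. A minimal sketch of that idea, using a hypothetical toy deterministic MDP (not the paper's model), is value iteration followed by a greedy one-step lookahead:

```python
# Hypothetical toy MDP: states 0..3, deterministic transitions; reward is
# delivered on entering state 3. Not from the cited paper - an illustration
# of the generic RL mechanism the abstract summarizes.
transitions = {          # state -> {action: next_state}
    0: {"left": 0, "right": 1},
    1: {"left": 0, "right": 2},
    2: {"left": 1, "right": 3},
}
reward = {3: 1.0}        # reward received on entering a state
gamma = 0.9              # discount factor

# Value iteration: repeatedly back up the best one-step return per state.
V = {s: 0.0 for s in range(4)}
for _ in range(100):
    for s, acts in transitions.items():
        V[s] = max(reward.get(ns, 0.0) + gamma * V[ns] for ns in acts.values())

def greedy_action(s):
    """Optimal policy: pick the action leading to the highest-valued state."""
    def score(a):
        ns = transitions[s][a]
        return reward.get(ns, 0.0) + gamma * V[ns]
    return max(transitions[s], key=score)
```

Here every state's representation is given in advance; the abstract's question is precisely how the brain might construct such state representations (via attractor dynamics and mixed-selectivity neurons) before any value assignment can occur.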


Subject(s)
Brain/physiology , Cognition/physiology , Models, Neurological , Neural Networks, Computer , Neurons/physiology , Animals , Humans , Learning/physiology , Reinforcement, Psychology
15.
J Neurosci ; 29(37): 11471-83, 2009 Sep 16.
Article in English | MEDLINE | ID: mdl-19759296

ABSTRACT

Neuroscientists, psychologists, clinicians, and economists have long been interested in how individuals weigh information about potential rewarding and aversive stimuli to make decisions and to regulate their emotions. However, we know relatively little about how appetitive and aversive systems interact in the brain, as most prior studies have investigated only one valence of reinforcement. Previous work has suggested that primate orbitofrontal cortex (OFC) represents information about the reward value of stimuli. We therefore investigated whether OFC also represents information about aversive stimuli, and, if so, whether individual neurons process information about both rewarding and aversive stimuli. Monkeys performed a trace conditioning task in which different novel abstract visual stimuli (conditioned stimuli, CSs) predicted the occurrence of one of three unconditioned stimuli (USs): a large liquid reward, a small liquid reward, or an aversive air-puff. Three lines of evidence suggest that information about rewarding and aversive stimuli converges in individual neurons in OFC. First, OFC neurons often responded to both rewarding and aversive USs, despite their different sensory features. Second, OFC neural responses to CSs often encoded information about both potential rewarding and aversive stimuli, even though these stimuli differed in both valence and sensory modality. Finally, OFC neural responses were correlated with monkeys' behavioral use of information about both rewarding and aversive CS-US associations. These data indicate that processing of appetitive and aversive stimuli converges at the single cell level in OFC, providing a possible substrate for executive and emotional processes that require using information from both appetitive and aversive systems.


Subject(s)
Avoidance Learning/physiology , Discrimination Learning/physiology , Neurons/physiology , Reward , Action Potentials/physiology , Analysis of Variance , Animals , Appetitive Behavior , Behavior, Animal/physiology , Blinking/physiology , Conditioning, Classical/physiology , Conditioning, Operant/physiology , Haplorhini , Magnetic Resonance Imaging/methods , Neural Pathways/physiology , Photic Stimulation/methods , Physical Stimulation , Prefrontal Cortex/cytology , ROC Curve , Reaction Time/physiology , Reinforcement Schedule , Time Factors
16.
Ann N Y Acad Sci ; 1121: 336-54, 2007 Dec.
Article in English | MEDLINE | ID: mdl-17872400

ABSTRACT

The amygdala and orbitofrontal cortex (OFC) are often thought of as components of a neural circuit that assigns affective significance--or value--to sensory stimuli so as to anticipate future events and adjust behavioral and physiological responses. Much recent work has been aimed at understanding the distinct contributions of the amygdala and OFC to these processes, but a detailed understanding of the physiological mechanisms underlying learning about value remains lacking. To gain insight into these processes, we have focused initially on characterizing the neural signals of the primate amygdala, and more recently of the primate OFC, during appetitive and aversive reinforcement learning procedures. We have employed a classical conditioning procedure whereby monkeys form associations between visual stimuli and rewards or aversive stimuli. After learning these initial associations, we reverse the stimulus-reinforcement contingencies, and monkeys learn these new associations. We have discovered that separate populations of neurons in the amygdala represent the positive and negative value of conditioned visual stimuli. This representation of value updates rapidly upon image value reversal, as fast as monkeys learn, often within a single trial. We suggest that representations of value in the amygdala may change through multiple interrelated mechanisms: some that arise from fairly simple Hebbian processes, and others that may involve gated inputs from other brain areas, such as the OFC.


Subject(s)
Brain/physiology , Neurons/physiology , Primates/physiology , Animals , Emotions , Humans , Learning
17.
Neuron ; 55(6): 970-84, 2007 Sep 20.
Article in English | MEDLINE | ID: mdl-17880899

ABSTRACT

Animals and humans learn to approach and acquire pleasant stimuli and to avoid or defend against aversive ones. However, both pleasant and aversive stimuli can elicit arousal and attention, and their salience or intensity increases when they occur by surprise. Thus, adaptive behavior may require that neural circuits compute both stimulus valence--or value--and intensity. To explore how these computations may be implemented, we examined neural responses in the primate amygdala to unexpected reinforcement during learning. Many amygdala neurons responded differently to reinforcement depending upon whether or not it was expected. In some neurons, this modulation occurred only for rewards or aversive stimuli, but not both. In other neurons, expectation similarly modulated responses to both rewards and punishments. These different neuronal populations may subserve two sorts of processes mediated by the amygdala: those activated by surprising reinforcements of both valences-such as enhanced arousal and attention-and those that are valence-specific, such as fear or reward-seeking behavior.


Subject(s)
Amygdala/physiology , Learning/physiology , Neurons/physiology , Amygdala/cytology , Animals , Arousal , Data Interpretation, Statistical , Excitatory Postsynaptic Potentials/physiology , Macaca mulatta , Psychomotor Performance/physiology , ROC Curve , Reinforcement, Psychology
18.
Nature ; 439(7078): 865-70, 2006 Feb 16.
Article in English | MEDLINE | ID: mdl-16482160

ABSTRACT

Visual stimuli can acquire positive or negative value through their association with rewards and punishments, a process called reinforcement learning. Although we now know a great deal about how the brain analyses visual information, we know little about how visual representations become linked with values. To study this process, we turned to the amygdala, a brain structure implicated in reinforcement learning. We recorded the activity of individual amygdala neurons in monkeys while abstract images acquired either positive or negative value through conditioning. After monkeys had learned the initial associations, we reversed image value assignments. We examined neural responses in relation to these reversals in order to estimate the relative contribution to neural activity of the sensory properties of images and their conditioned values. Here we show that changes in the values of images modulate neural activity, and that this modulation occurs rapidly enough to account for, and correlates with, monkeys' learning. Furthermore, distinct populations of neurons encode the positive and negative values of visual stimuli. Behavioural and physiological responses to visual stimuli may therefore be based in part on the plastic representation of value provided by the amygdala.


Subject(s)
Amygdala/physiology , Learning/physiology , Macaca mulatta/physiology , Reinforcement, Psychology , Amygdala/cytology , Animals , Magnetic Resonance Imaging , Models, Neurological , Neurons/physiology , Photic Stimulation , Reward
19.
Biophys J ; 83(2): 880-98, 2002 Aug.
Article in English | MEDLINE | ID: mdl-12124271

ABSTRACT

We present an extensive set of measurements of proton conduction through gramicidin A (gA), B (gB), and M (gM) homodimer channels which have 4, 3, or 0 Trp residues at each end of the channel, respectively. In gA we find a shoulder separating two domains of conductance increasing with concentration, confirming the results of Eisenman, G., B. Enos, J. Hagglund, and J. Sandblom. 1980. Ann. NY. Acad. Sci. 339:8-20. In gB, the shoulder is shifted by approximately 1/2 pH unit to higher H(+) concentrations and is very sharply defined. No shoulder appears in the gM data, but an associated transition from sublinear to superlinear I-V values occurs at a 100-fold higher [H(+)] in gM than in gA. The data in the low concentration domain are analyzed using a configuration space model of single-proton conduction, assuming that the difference in the proton potential of mean force (PMF) between gA and its analogs is constant, similar to the results of Anderson, D., R. B. Shirts, T. A. Cross, and D. D. Busath. 2001. Biophys. J. 81:1255-1264. Our results suggest that the average amplitudes of the calculated proton PMFs are nearly correct, but that the water reorientation barrier calculated for gA by molecular dynamics using the PM6 water model (Pomès, R., and B. Roux. 1997. Biophys. J. 72:246a) must be reduced in amplitude by 1.5 kcal/mol or more, and is not rate-limiting for gA.


Subject(s)
Gramicidin/pharmacology , Tryptophan/chemistry , Biophysical Phenomena , Biophysics , Electrophysiology , Gramicidin/chemistry , Kinetics , Lipid Bilayers , Protons , Sensitivity and Specificity , Time Factors