Results 1 - 20 of 5,980
1.
Curr Neuropharmacol ; 22(9): 1551-1565, 2024.
Article in English | MEDLINE | ID: mdl-38847144

ABSTRACT

BACKGROUND: The thalamus is a phylogenetically well-preserved structure. Known to contact cortical regions densely, its role in transmitting sensory information to the striatal complex has been widely reconsidered in recent years. METHODS: The parafascicular nucleus of the thalamus (Pf) has been implicated in orienting attention toward salient sensory stimuli. In a stimulus-driven reward-seeking task, we characterized the electrophysiological activity of Pf neurons in rats. RESULTS: We observed a predominance of excitatory over inhibitory responses for all events in the task. Neurons responded more strongly to the stimulus than to lever-pressing or reward collection, confirming the strong involvement of the Pf in sensory information processing. Long sessions allowed us to compare neuronal responses to stimuli between trials in which animals were engaged in action and trials in which they were not. We distinguished two populations of neurons with opposite responses: MOTIV+ neurons responded more intensely to stimuli that were followed by a behavioral response than to those that were not. Conversely, MOTIV- neurons responded more strongly when the animal did not respond to the stimulus. In addition, the excitation latency of MOTIV- neurons was shorter than that of MOTIV+ neurons. CONCLUSION: Through this encoding, the Pf could perform an early selection of environmental stimuli transmitted to the striatum according to motivational level.


Subject(s)
Intralaminar Thalamic Nuclei , Neurons , Reward , Animals , Neurons/physiology , Male , Intralaminar Thalamic Nuclei/physiology , Rats , Rats, Wistar , Conditioning, Operant/physiology , Action Potentials/physiology
2.
PLoS One ; 19(5): e0301173, 2024.
Article in English | MEDLINE | ID: mdl-38771859

ABSTRACT

The following paper describes a steady-state model of concurrent choice, termed the active time model (ATM). ATM is derived from maximization principles and is characterized by a semi-Markov process. The model proposes that the controlling stimulus in concurrent variable-interval (VI) VI schedules of reinforcement is the time interval since the most recent response, termed here the "active interresponse time" or simply "active time." In the model, after a response is generated, it is categorized by a function that relates active times to switch/stay probabilities. The output of ATM is compared with the predictions of three other models of operant conditioning: melioration, a version of scalar expectancy theory (SET), and momentary maximization. The data sets considered include preferences in multiple-concurrent VI VI schedules, molecular choice patterns, correlations between switching and perseveration, and molar choice proportions. ATM can account for all of these data sets, while the other models produce more limited fits. However, rather than arguing that ATM is the singular model of concurrent VI VI choice, a consideration of its concept space leads to the conclusion that operant choice is multiply determined, and that an adaptive viewpoint, one that considers experimental procedures both as selecting mechanisms for animal choice and as tests of the controlling variables of that choice, is warranted.
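The stay/switch mechanism the abstract describes can be illustrated with a minimal simulation of concurrent VI VI responding in which the probability of switching alternatives depends only on the most recent interresponse time. All parameter values, and the logistic form of the categorization function, are invented for illustration; the fitted ATM function and its maximization-derived parameters are not reproduced here.

```python
import math
import random

random.seed(0)

def switch_probability(active_time, midpoint=2.0, slope=1.5):
    """Map the active interresponse time to a switch probability.
    A logistic stand-in for ATM's categorization function (hypothetical)."""
    return 1.0 / (1.0 + math.exp(-slope * (active_time - midpoint)))

def simulate_concurrent_vi(vi_a=15.0, vi_b=60.0, n_responses=20000):
    """Concurrent VI VI schedules: each alternative independently arms a
    reinforcer at exponentially distributed intervals, and the first
    response after arming collects it (arming holds until collected)."""
    t, current = 0.0, "A"
    mean_vi = {"A": vi_a, "B": vi_b}
    armed_at = {k: random.expovariate(1.0 / v) for k, v in mean_vi.items()}
    responses = {"A": 0, "B": 0}
    reinforcers = {"A": 0, "B": 0}
    for _ in range(n_responses):
        irt = random.expovariate(1.0)   # the "active time" since the last response
        t += irt
        if random.random() < switch_probability(irt):
            current = "B" if current == "A" else "A"
        responses[current] += 1
        if t >= armed_at[current]:      # armed reinforcer is collected
            reinforcers[current] += 1
            armed_at[current] = t + random.expovariate(1.0 / mean_vi[current])
    return responses, reinforcers

responses, reinforcers = simulate_concurrent_vi()
```

Because the switch rule here ignores schedule richness, the sketch captures only the semi-Markov stay/switch structure of the model, not its account of matching, which requires the fitted categorization function.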


Subject(s)
Choice Behavior , Conditioning, Operant , Choice Behavior/physiology , Animals , Conditioning, Operant/physiology , Reinforcement Schedule , Time Factors , Models, Psychological , Reinforcement, Psychology , Markov Chains
3.
F1000Res ; 13: 116, 2024.
Article in English | MEDLINE | ID: mdl-38779314

ABSTRACT

Background: Motor learning is central to human existence, from learning to speak or walk to sports moves and rehabilitation after injury. Evidence suggests that all forms of motor learning share an evolutionarily conserved molecular plasticity pathway. Here, we present novel insights into the neural processes underlying operant self-learning, a form of motor learning in the fruit fly Drosophila. Methods: We operantly trained wild-type and transgenic Drosophila fruit flies, tethered at the torque meter, in a motor learning task that required them to initiate and maintain turning maneuvers around their vertical body axis (yaw torque). We combined this behavioral experiment with transgenic peptide expression, CRISPR/Cas9-mediated, spatio-temporally controlled gene knockout, and confocal microscopy. Results: We found that expression of atypical protein kinase C (aPKC) in direct wing steering motoneurons co-expressing the transcription factor FoxP is necessary for this type of motor learning, and that aPKC likely acts via non-canonical pathways. We also found that it takes more than a week for CRISPR/Cas9-mediated knockout of FoxP in adult animals to impair motor learning, suggesting that adult FoxP expression is required for operant self-learning. Conclusions: Our experiments suggest that, for operant self-learning, a type of motor learning in Drosophila, co-expression of aPKC and the transcription factor FoxP is necessary in direct wing steering motoneurons. Some of these neurons control wing-beat amplitude when generating optomotor responses, and we discovered modulation of optomotor behavior after operant self-learning. We also found that aPKC likely acts via non-canonical pathways and that FoxP expression is required in adult flies.


Subject(s)
Drosophila Proteins , Drosophila melanogaster , Motor Neurons , Protein Kinase C , Animals , Protein Kinase C/metabolism , Motor Neurons/physiology , Motor Neurons/metabolism , Drosophila Proteins/metabolism , Drosophila Proteins/genetics , Drosophila melanogaster/physiology , Learning/physiology , Forkhead Transcription Factors/metabolism , Wings, Animal/physiology , Animals, Genetically Modified , Neuronal Plasticity/physiology , Conditioning, Operant/physiology , CRISPR-Cas Systems , Drosophila/physiology
4.
Neurobiol Learn Mem ; 211: 107926, 2024 May.
Article in English | MEDLINE | ID: mdl-38579897

ABSTRACT

Learning to stop responding is a fundamental process in instrumental learning. Animals may learn to stop responding under a variety of conditions, including punishment, where the response earns an aversive stimulus in addition to a reinforcer, and extinction, where a previously reinforced response now earns nothing at all. Recent research suggests that punishment and extinction may be related manifestations of a common retroactive interference process. In both paradigms, animals learn to stop performing a specific response in a specific context, suggesting direct inhibition of the response by the context. This process may depend on the infralimbic cortex (IL), which has been implicated in a variety of interference-based learning paradigms, including extinction and habit learning. Despite the behavioral parallels between extinction and punishment, a corresponding role for the IL in punishment has not been identified. Here we report that, in a simple arrangement in which either punishment or extinction was conducted in a context that differed from the context in which the behavior was first acquired, IL inactivation reduced response suppression in the inhibitory context but did not affect responding when it "renewed" in the original context. In a more complex arrangement, in which two responses were first trained in different contexts and then extinguished or punished in the opposite one, IL inactivation had no effect. The results advance our understanding of the role of the IL in retroactive interference and of the behavioral mechanisms that can produce suppression of a response.


Subject(s)
Conditioning, Operant , Extinction, Psychological , Punishment , Extinction, Psychological/physiology , Animals , Conditioning, Operant/physiology , Male , Rats , Rats, Long-Evans , Prefrontal Cortex/physiology , Muscimol/pharmacology
5.
J Physiol ; 602(9): 2107-2126, 2024 May.
Article in English | MEDLINE | ID: mdl-38568869

ABSTRACT

We are studying the mechanisms of H-reflex operant conditioning, a simple form of learning. Modelling studies in the literature and our previous data suggested that changes in the axon initial segment (AIS) might contribute. To explore this possibility, we used blinded quantitative histological and immunohistochemical methods to study, in adult rats, the impact of H-reflex conditioning on the AIS of the spinal motoneuron that produces the reflex. Successful, but not unsuccessful, H-reflex up-conditioning was associated with greater AIS length and distance from the soma; greater length correlated with greater H-reflex increase. Modelling studies in the literature suggest that these increases may raise motoneuron excitability, supporting the hypothesis that they contribute to the H-reflex increase. Up-conditioning did not affect AIS ankyrin G (AnkG) immunoreactivity (IR), p-p38 protein kinase IR, or GABAergic terminals. Successful, but not unsuccessful, H-reflex down-conditioning was associated with more GABAergic terminals on the AIS, weaker AnkG-IR, and stronger p-p38-IR. More GABAergic terminals and weaker AnkG-IR correlated with greater H-reflex decrease. These changes might contribute to the positive shift in motoneuron firing threshold underlying H-reflex decrease; they are consistent with modelling suggesting that sodium channel change may be responsible. H-reflex down-conditioning did not affect AIS dimensions. This evidence that AIS plasticity is associated with, and might contribute to, H-reflex conditioning adds to evidence that motor learning involves both spinal and brain plasticity, and both neuronal and synaptic plasticity. AIS properties of spinal motoneurons are likely to reflect the combined influence of all the motor skills that share these motoneurons. KEY POINTS: Neuronal action potentials normally begin in the axon initial segment (AIS). AIS plasticity affects neuronal excitability in development and disease. Whether it does so in learning is unknown. Operant conditioning of a spinal reflex, a simple learning model, changes the rat spinal motoneuron AIS. Successful, but not unsuccessful, H-reflex up-conditioning is associated with greater AIS length and distance from the soma. Successful, but not unsuccessful, down-conditioning is associated with more AIS GABAergic terminals, less ankyrin G, and more p-p38 protein kinase. The associations between AIS plasticity and successful H-reflex conditioning are consistent with those between AIS plasticity and functional changes in development and disease, and with those predicted by modelling studies in the literature. Motor learning changes neurons and synapses in the spinal cord and brain. Because spinal motoneurons are the final common pathway for behaviour, their AIS properties probably reflect the combined impact of all the behaviours that use these motoneurons.


Subject(s)
Axon Initial Segment , H-Reflex , Motor Neurons , Rats, Sprague-Dawley , Animals , Motor Neurons/physiology , Rats , Male , H-Reflex/physiology , Axon Initial Segment/physiology , Learning/physiology , Spinal Cord/physiology , Spinal Cord/cytology , Axons/physiology , Neuronal Plasticity/physiology , Conditioning, Operant/physiology , Ankyrins/metabolism
6.
Behav Brain Res ; 468: 115015, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38670533

ABSTRACT

This study examined the effect of knockout of the KCNMA1 gene, which encodes the BK channel, on cognitive and attentional functions in mice, with the aim of better understanding its implications for human neurodevelopmental disorders. The study used the 3-choice serial reaction time task (3-CSRTT) to assess learning performance, attentional abilities, and repetitive behaviors in mice lacking the KCNMA1 gene (KCNMA1-/-) compared with wild-type (WT) controls. Results showed no significant differences in learning accuracy between the two groups. However, KCNMA1-/- mice were more prone to omitting responses to stimuli. In addition, when the timing of cue presentation was randomized, KCNMA1-/- mice made premature responses. Notably, these mice also showed a marked reduction in perseverative responses, such as repeated nose-poke behaviors following decisions. These findings highlight the involvement of the KCNMA1 gene in managing attention and impulsivity, and potentially in moderating repetitive actions.


Subject(s)
Attention , Conditioning, Operant , Large-Conductance Calcium-Activated Potassium Channel alpha Subunits , Mice, Knockout , Animals , Attention/physiology , Male , Large-Conductance Calcium-Activated Potassium Channel alpha Subunits/genetics , Conditioning, Operant/physiology , Mice, Inbred C57BL , Mice , Reaction Time/physiology , Impulsive Behavior/physiology
7.
Nat Commun ; 15(1): 3419, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38658545

ABSTRACT

Songs constitute a complex system of vocal signals for inter-individual communication in songbirds. Here, we elucidate the flexibility that songbirds exhibit in organizing and sequencing the syllables of their songs. Using a newly devised song decoder for quasi-real-time annotation, we implemented an operant conditioning paradigm with rewards contingent upon specific syllable syntax. Our analysis reveals that birds can modify the contents of their songs, adjusting the repetition length of particular syllables and employing specific motifs. Notably, birds altered their syllable sequences in a goal-directed manner to obtain rewards. We demonstrate that such modulation occurs within a distinct song segment, with adjustments made within 10 minutes after cue presentation. Additionally, we identify the involvement of the parietal-basal ganglia pathway in orchestrating these flexible modulations of syllable sequences. Our findings unveil an unappreciated aspect of songbird communication, drawing parallels with human speech.


Subject(s)
Vocalization, Animal , Animals , Vocalization, Animal/physiology , Male , Conditioning, Operant/physiology , Finches/physiology , Goals , Basal Ganglia/physiology , Songbirds/physiology
8.
PLoS One ; 19(3): e0300338, 2024.
Article in English | MEDLINE | ID: mdl-38512998

ABSTRACT

Operant conditioning of neural activation has been researched for decades in humans and animals. Many theories suggest two parallel learning processes, implicit and explicit. The degree to which feedback affects these processes individually remains to be fully understood and may contribute to a large percentage of non-learners. Our goal is to determine the explicit decision-making processes in response to feedback representing an operant conditioning environment. We developed a simulated operant conditioning environment based on a feedback model of spinal reflex excitability, one of the simplest forms of neural operant conditioning. We isolated the perception of the feedback signal from self-regulation of an explicit unskilled visuomotor task, enabling us to quantitatively examine feedback strategy. Our hypothesis was that feedback type, biological variability, and reward threshold affect operant conditioning performance and operant strategy. Healthy individuals (N = 41) were instructed to play a web application game using keyboard inputs to rotate a virtual knob representative of an operant strategy. The goal was to align the knob with a hidden target. Participants were asked to "down-condition" the amplitude of the virtual feedback signal, which was achieved by placing the knob as close as possible to the hidden target. We varied feedback type (knowledge of performance, knowledge of results), biological variability (low, high), and reward threshold (easy, moderate, difficult) in a factorial design. Parameters were extracted from real operant conditioning data. Our main outcomes were the feedback signal amplitude (performance) and the mean change in dial position (operant strategy). We observed that performance was modulated by variability, while operant strategy was modulated by feedback type. These results show complex relations between fundamental feedback parameters and provide the principles for optimizing neural operant conditioning for non-responders.


Subject(s)
Conditioning, Operant , Learning , Animals , Humans , Feedback , Conditioning, Operant/physiology , H-Reflex/physiology , Motivation
9.
Physiol Behav ; 278: 114511, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38479582

ABSTRACT

Successive negative contrast (SNC) has been used to study reward relativity, reward loss, and frustration for decades. In instrumental SNC (iSNC), the anticipatory performance of animals downshifted from a large reward to a small reward is compared to that of animals always reinforced with the small reward. iSNC involves a transient deterioration of anticipatory behavior in downshifted animals compared to unshifted controls. There is scattered information on the optimal parameters for producing this effect and even less information about its neural basis. Five experiments with rats trained in a runway to collect food pellets explored the effects of trial distribution (massed or spaced), amount of preshift training, reward disparity, and reward magnitude on the development of an iSNC effect. Start, run, and goal latencies were measured. Using spaced trials (one trial per day), evidence of the iSNC effect was observed with 24 preshift trials and a 32-to-4 pellet disparity. With massed trials (4 trials per session separated by 30-s intertrial intervals), evidence of iSNC was found with 12 preshift sessions (a total of 48 trials) and a 16-to-2 pellet disparity. The massed-training procedure was then used to assess neural activity in three prefrontal cortex areas using c-Fos expression in animals perfused after the first downshift session. There was evidence of increased activation in the anterior cingulate cortex and a trend toward increased activation in the infralimbic and prelimbic cortices. These procedures open an avenue for studying the neural basis of the instrumental behavior of animals that experience reward loss.


Subject(s)
Conditioning, Operant , Reward , Rats , Animals , Conditioning, Operant/physiology , Motivation , Prefrontal Cortex
10.
Neuroscience ; 546: 20-32, 2024 May 14.
Article in English | MEDLINE | ID: mdl-38521480

ABSTRACT

Evidence suggests that dopamine activity provides a US-related prediction error for Pavlovian conditioning and the reinforcement signal supporting the acquisition of habits. However, its role in goal-directed action is less clear. There are currently few studies that have assessed dopamine release as animals acquire and perform self-paced instrumental actions. Here we briefly review the literature documenting the psychological, behavioral, and neural bases of goal-directed action in rats and mice, before turning to recent studies investigating the role of dopamine in instrumental learning and performance. Plasticity in the dorsomedial striatum, a central node in the network supporting goal-directed action, clearly requires dopamine release, the timing of which, relative to cortical and thalamic inputs, determines the degree and form of that plasticity. Beyond this, bilateral release appears to reflect reward prediction errors as animals experience the consequences of an action. Such signals feed forward to update the value of the specific action associated with that outcome during subsequent performance, with dopamine release at the time of action reflecting the updated predicted action value. More recently, evidence has also emerged for a hemispherically lateralised signal associated with the action: dopamine release is greater in the hemisphere contralateral to the spatial target of the action. This effect emerges over the course of acquisition and appears to reflect the strength of the action-outcome association. Thus, during goal-directed action, dopamine release signals the action, the outcome, and their association, shaping the learning and performance processes necessary to support this form of behavioral control.
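The feed-forward updating of an action value by a reward prediction error corresponds, in its simplest form, to a textbook delta rule. The learning rate below is an arbitrary illustrative value, not one estimated from dopamine recordings.

```python
def update_action_value(q, reward, alpha=0.1):
    """Move the predicted action value toward the experienced outcome
    by a fraction (alpha) of the reward prediction error."""
    rpe = reward - q            # dopamine-like prediction error at the outcome
    return q + alpha * rpe, rpe

# Across repeated rewarded trials, the action value climbs toward the
# outcome value and the prediction error shrinks.
q, rpe = 0.0, None
for _ in range(50):
    q, rpe = update_action_value(q, reward=1.0)
```

On this account, release at the time of the action would track the current `q`, while release at the outcome would track `rpe`.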


Subject(s)
Corpus Striatum , Dopamine , Goals , Animals , Dopamine/metabolism , Corpus Striatum/metabolism , Humans , Conditioning, Operant/physiology , Reward
11.
Neurobiol Learn Mem ; 211: 107915, 2024 May.
Article in English | MEDLINE | ID: mdl-38527649

ABSTRACT

Rat autoshaping procedures generate two readily measurable conditioned responses: during lever presentations that have previously signaled food, rats approach the food well (goal-tracking) and interact with the lever itself (sign-tracking). We investigated how reinforced and nonreinforced trials affect the overall and temporal distributions of these two responses across 10-second lever presentations. In two experiments, reinforced trials generated more goal-tracking than sign-tracking, and nonreinforced trials resulted in a larger reduction in goal-tracking than in sign-tracking. The effect of reinforced trials was evident as an increase in goal-tracking and a reduction in sign-tracking across the duration of the lever presentations; with nonreinforced trials, this pattern transiently reversed and then became less evident with further training. These dissociations are consistent with a recent elaboration of the Rescorla-Wagner model, HeiDI (Honey, R.C., Dwyer, D.M., & Iliescu, A.F. (2020a). HeiDI: A model for Pavlovian learning and performance with reciprocal associations. Psychological Review, 127, 829-852), a model in which responses related to the nature of the unconditioned stimulus (e.g., goal-tracking) have a different origin from those related to the nature of the conditioned stimulus (e.g., sign-tracking).
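HeiDI elaborates the Rescorla-Wagner rule, whose basic trial-by-trial update can be sketched as follows. The parameter values are illustrative, and HeiDI's distinctive features, reciprocal CS-US and US-CS associations and the partitioning of associative strength into CS- and US-oriented responding, are not reproduced here.

```python
def rescorla_wagner(v, us_present, alpha=0.3, beta=1.0, lam=1.0):
    """One Rescorla-Wagner trial for a single CS: associative strength V
    moves toward lambda on reinforced trials and toward zero on
    nonreinforced trials, at a rate set by alpha * beta."""
    target = lam if us_present else 0.0
    return v + alpha * beta * (target - v)

v = 0.0
for _ in range(20):          # reinforced trials: V approaches lambda
    v = rescorla_wagner(v, us_present=True)
v_acq = v
for _ in range(5):           # nonreinforced trials: V declines again
    v = rescorla_wagner(v, us_present=False)
```

The asymmetric effects of reinforced and nonreinforced trials on goal- versus sign-tracking reported above arise from how HeiDI maps this associative strength onto the two response classes, not from the base update itself.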


Subject(s)
Conditioning, Classical , Reinforcement, Psychology , Animals , Male , Rats , Conditioning, Classical/physiology , Conditioning, Operant/physiology , Goals , Behavior, Animal/physiology
12.
J Neurosci ; 44(17)2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38514180

ABSTRACT

Deciding on a course of action requires both an accurate estimation of option values and the right amount of effort invested in deliberation to reach sufficient confidence in the final choice. In a previous study, we provided evidence, across a series of judgment and choice tasks, for a dissociation between the ventromedial prefrontal cortex (vmPFC), which would represent option values, and the dorsomedial prefrontal cortex (dmPFC), which would represent the duration of deliberation. Here, we first replicate this dissociation and extend it to an instrumental learning task in which 24 human volunteers (13 women) chose between options associated with probabilistic gains and losses. In fMRI data recorded during decision-making, vmPFC activity reflects the sum of the option values generated by a reinforcement learning model, and dmPFC activity reflects the deliberation time. To further generalize the role of the dmPFC in mobilizing effort, we then analyze fMRI data recorded in the same participants while they prepared to perform motor and cognitive tasks (squeezing a handgrip or making numerical comparisons) to maximize gains or minimize losses. In both cases, dmPFC activity is associated with the output of an effort regulation model, and not with response time. Taken together, these results strengthen a general theory of behavioral control that implicates the vmPFC in the estimation of option values and the dmPFC in the energization of relevant motor and cognitive processes.
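A minimal sketch of the kind of reinforcement learning model described, learning from probabilistic gains and losses and exposing the summed option value that vmPFC activity is reported to track, might look like this. The payoff probabilities, learning rate, and softmax temperature are invented for illustration and are not the fitted parameters of the study.

```python
import math
import random

random.seed(1)

def softmax_choice(q_left, q_right, beta=3.0):
    """Two-option softmax; also return the summed option value,
    the quantity the vmPFC signal is reported to reflect."""
    p_left = 1.0 / (1.0 + math.exp(-beta * (q_left - q_right)))
    choice = "left" if random.random() < p_left else "right"
    return choice, q_left + q_right

def q_update(q, outcome, alpha=0.2):
    """Delta-rule update of the chosen option's value."""
    return q + alpha * (outcome - q)

q = {"left": 0.0, "right": 0.0}
summed_values = []
for _ in range(200):
    choice, total = softmax_choice(q["left"], q["right"])
    summed_values.append(total)
    # Probabilistic outcomes: 'left' yields +1 with p = .8,
    # 'right' yields -1 with p = .8 (hypothetical contingencies).
    if choice == "left":
        outcome = 1.0 if random.random() < 0.8 else -1.0
    else:
        outcome = -1.0 if random.random() < 0.8 else 1.0
    q[choice] = q_update(q[choice], outcome)
```

In a model-based fMRI analysis, a trial-by-trial regressor like `summed_values` would be correlated against the vmPFC signal, with deliberation time regressed separately against dmPFC activity.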


Subject(s)
Magnetic Resonance Imaging , Prefrontal Cortex , Humans , Prefrontal Cortex/physiology , Prefrontal Cortex/diagnostic imaging , Female , Male , Adult , Young Adult , Decision Making/physiology , Choice Behavior/physiology , Brain Mapping/methods , Reaction Time/physiology , Psychomotor Performance/physiology , Conditioning, Operant/physiology , Judgment/physiology
13.
Learn Mem ; 31(3)2024 Mar.
Article in English | MEDLINE | ID: mdl-38527752

ABSTRACT

From early in life, we encounter both controllable environments, in which our actions can causally influence the reward outcomes we experience, and uncontrollable environments, in which they cannot. Environmental controllability is theoretically proposed to organize our behavior. In controllable contexts, we can learn to proactively select instrumental actions that bring about desired outcomes. In uncontrollable environments, Pavlovian learning enables hard-wired, reflexive reactions to anticipated, motivationally salient events, providing "default" behavioral responses. Previous studies characterizing the balance between Pavlovian and instrumental learning systems across development have yielded divergent findings, with some studies observing heightened expression of Pavlovian learning during adolescence and others observing a reduced influence of Pavlovian learning during this developmental stage. In this study, we aimed to investigate whether a theoretical model of controllability-dependent arbitration between learning systems might explain these seemingly divergent findings in the developmental literature, with the specific hypothesis that adolescents' action selection might be particularly sensitive to environmental controllability. To test this hypothesis, 90 participants, aged 8-27, performed a probabilistic-learning task that enables estimation of Pavlovian influence on instrumental learning, across both controllable and uncontrollable conditions. We fit participants' data with a reinforcement-learning model in which controllability inferences adaptively modulate the dominance of Pavlovian versus instrumental control. Relative to children and adults, adolescents exhibited greater flexibility in calibrating the expression of Pavlovian bias to the degree of environmental controllability. These findings suggest that sensitivity to environmental reward statistics that organize motivated behavior may be heightened during adolescence.
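The arbitration the model formalizes can be caricatured as a controllability-weighted mixture of a Pavlovian bias and instrumental action values. All values and weights below are invented for illustration; the paper's fitted reinforcement-learning model infers controllability from experience rather than taking it as a given input.

```python
def action_propensity(q_go, q_nogo, pavlovian_value, controllability, bias=0.5):
    """Blend instrumental action values with a Pavlovian bias (approach for
    appetitive cues, withholding for aversive ones); inferred controllability
    down-weights the Pavlovian contribution."""
    w = bias * (1.0 - controllability)   # Pavlovian weight shrinks as control rises
    go = (1.0 - w) * q_go + w * max(pavlovian_value, 0.0)
    nogo = (1.0 - w) * q_nogo + w * max(-pavlovian_value, 0.0)
    return go, nogo

# Appetitive cue (positive Pavlovian value), but withholding is the better
# instrumental action (hypothetical values).
go_u, nogo_u = action_propensity(0.2, 0.6, pavlovian_value=1.0, controllability=0.0)
go_c, nogo_c = action_propensity(0.2, 0.6, pavlovian_value=1.0, controllability=1.0)
```

In the uncontrollable case the Pavlovian bias overturns the instrumental preference ("go" wins), whereas with full inferred controllability the instrumental values alone determine the choice; the developmental finding above concerns how flexibly this weight tracks controllability across age groups.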


Subject(s)
Conditioning, Classical , Learning , Adult , Child , Humans , Adolescent , Conditioning, Classical/physiology , Learning/physiology , Reinforcement, Psychology , Conditioning, Operant/physiology , Reward
14.
Behav Processes ; 217: 105012, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38493970

ABSTRACT

It is generally believed that termites cannot learn and are not "intelligent". This study aimed to test whether termites have any form of memory. A Y-shaped test device with one release chamber and two identical test chambers was designed and fabricated by 3D printing. A colony of dampwood termites was harvested from the wild, and worker termites were randomly selected for the experiments. Repellent odors that could mimic the termite alarm pheromone were first identified. Among the substances tested, tea tree oil and lemon juice were found to be repellent to the termites, as they significantly reduced the time the termites spent in the chamber treated with these substances. As a control, a trail pheromone was found to be attractive. Subsequently, a second cohort of termites was operantly conditioned by punishment with both tea tree oil and lemon juice and then tested for the ability to remember the path that led to the repellent odors. The test device was thoroughly cleaned between trials. Conditioned termites displayed a reduced tendency to choose the path leading to expected punishment compared with naïve termites. Thus, it is concluded that dampwood termites are capable of learning and of forming "fear memory", indicative of "intelligence" in termites. This result challenges the established presumption about termite intelligence.


Subject(s)
Isoptera , Odorants , Isoptera/physiology , Animals , Conditioning, Operant/physiology , Pheromones/pharmacology , Memory/physiology , Learning/physiology , Tea Tree Oil/pharmacology , Citrus , Insect Repellents/pharmacology , Behavior, Animal/physiology , Punishment
15.
Brain Nerve ; 76(3): 273-281, 2024 Mar.
Article in Japanese | MEDLINE | ID: mdl-38514108

ABSTRACT

Learning is classified into two types: "classical conditioning," which modifies simple reflexes, and "operant conditioning," which modifies complex voluntary behaviors. The neural circuits underlying these two types differ significantly. During the learning of operant conditioning tasks, various changes in the firing rate and firing synchrony of neurons can be observed across multiple brain regions. Additionally, neuronal firing rate and synchrony in several brain regions can be voluntarily controlled through operant conditioning. Consequently, neurons in widespread brain regions evidently have the potential for plastic changes that facilitate learning. This suggests that the learning of complex voluntary behaviors is underpinned by widespread dynamic changes in neural activity and is not restricted to only a few brain regions.


Subject(s)
Learning , Neurons , Humans , Neurons/physiology , Conditioning, Operant/physiology , Conditioning, Classical/physiology , Brain
16.
J Appl Behav Anal ; 57(2): 455-462, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38438320

ABSTRACT

Functional communication training (FCT) is an evidence-based treatment for behavior targeted for reduction; it typically combines extinction of target responses with functionally equivalent reinforcement of alternative behavior. The long-term effectiveness of FCT can become compromised when transitioning from clinic to nonclinic contexts or when thinning reinforcement schedules for appropriate behavior. The resulting increases in targeted behavior have been conceptualized as renewal and resurgence, respectively. The relation between resurgence and renewal has yet to be reported. Therefore, the present report retrospectively analyzed the relation between renewal and resurgence in data collected when implementing FCT with children diagnosed with developmental disabilities. We found no relation, whether evaluating all 34 individuals assessed for resurgence and renewal or a subset of individuals exhibiting both resurgence and renewal. These findings suggest that one form of relapse may not be predictive of another.


Subject(s)
Behavior Therapy , Extinction, Psychological , Child , Humans , Retrospective Studies , Extinction, Psychological/physiology , Reinforcement, Psychology , Recurrence , Reinforcement Schedule , Conditioning, Operant/physiology
17.
Anim Cogn ; 27(1): 11, 2024 Mar 02.
Article in English | MEDLINE | ID: mdl-38429608

ABSTRACT

Optimal foraging theory suggests that animals make foraging decisions that maximize their food intake per unit time, but the mechanisms animals use to track the value of behavioral alternatives and choose between them remain unclear. Several models of how animals integrate past experience have been suggested. However, these models make differential predictions about the occurrence of spontaneous recovery of choice: a behavioral phenomenon in which a hiatus from the experimental environment results in animals reverting to a behavioral allocation consistent with a reward distribution from the more distant past, rather than one consistent with the most recently experienced distribution. To explore this phenomenon and compare these models, three free-operant experiments with rats were conducted using a serial reversal design. In Phase 1, two responses (A and B) were baited with pellets on concurrent variable-interval schedules favoring option A. In Phase 2, lever baiting was reversed to favor option B. Rats then entered a delay period, during which they were maintained at weight in their home cages and no experimental sessions took place. Following this delay, preference was assessed from initial responding in test sessions in which the levers were presented but not baited. Model performance was compared for an exponentially weighted moving average, the Temporal Weighting Rule, and variants of these models. While the data provided strong evidence of spontaneous recovery of choice, the form and extent of recovery were inconsistent with the models under investigation. Potential interpretations are discussed in relation to both the decision rule and the valuation functions employed.
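Two of the candidate valuation functions can be sketched directly, and the sketch shows why they diverge on spontaneous recovery. The reward history below (option A rich in Phase 1, lean in Phase 2) and the decay rate are invented for illustration.

```python
def ewma_value(rewards, decay=0.1):
    """Exponentially weighted moving average: recent outcomes dominate,
    regardless of how much time has passed since training ended."""
    v = 0.0
    for r in rewards:
        v += decay * (r - v)
    return v

def twr_value(rewards, elapsed):
    """Temporal Weighting Rule: each past outcome is weighted by the
    reciprocal of the time since it occurred, so a long hiatus flattens
    the weights and older experience regains influence."""
    n = len(rewards)
    weights = [1.0 / (elapsed + (n - i)) for i in range(n)]
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, rewards)) / total

# Option A: rich in Phase 1 (50 rewarded trials), lean in Phase 2 (10 unrewarded)
history_a = [1.0] * 50 + [0.0] * 10
short_delay = twr_value(history_a, elapsed=1.0)      # tested right after Phase 2
long_delay = twr_value(history_a, elapsed=1000.0)    # tested after a long hiatus
recent_only = ewma_value(history_a)                  # EWMA ignores the hiatus
```

Only the TWR predicts spontaneous recovery here: after the long delay its estimate reverts toward the Phase 1 reward rate, whereas the EWMA estimate is fixed by the most recent trials however long the hiatus lasts.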


Subject(s)
Choice Behavior , Conditioning, Operant , Rats , Animals , Choice Behavior/physiology , Conditioning, Operant/physiology , Reward , Behavior, Animal
18.
Neuropsychopharmacology ; 49(6): 915-923, 2024 May.
Article in English | MEDLINE | ID: mdl-38374364

ABSTRACT

Opioid use disorder is a chronic relapsing disorder encompassing misuse, dependence, and addiction to opioid drugs. Long-term maintenance of associations between the reinforcing effects of the drug and the cues associated with its intake is a leading cause of relapse. Indeed, exposure to salient drug-associated cues can lead to drug cravings and drug-seeking behavior. The dorsal hippocampus (dHPC) and locus coeruleus (LC) have emerged as important structures for linking the subjective rewarding effects of opioids with environmental cues. However, their role in cue-induced reinstatement of opioid use remains to be further elucidated. In this study, we showed that chemogenetic inhibition of excitatory dHPC neurons during re-exposure to drug-associated cues significantly attenuates cue-induced reinstatement of morphine-seeking behavior. In addition, the same manipulation reduced reinstatement of sucrose-seeking behavior but failed to alter memory recall in the object location task. Finally, intact activity of tyrosine hydroxylase-expressing (TH) LC-dHPC afferents is necessary to drive cue-induced reinstatement of morphine-seeking, as inhibition of this pathway blunts cue-induced drug-seeking behavior. Altogether, these studies show an important role of the dHPC and the TH LC-dHPC pathway in mediating cue-induced reinstatement of opioid seeking.


Subject(s)
Cues , Drug-Seeking Behavior , Hippocampus , Locus Coeruleus , Self Administration , Animals , Locus Coeruleus/drug effects , Locus Coeruleus/metabolism , Male , Hippocampus/drug effects , Hippocampus/metabolism , Rats , Female , Drug-Seeking Behavior/drug effects , Drug-Seeking Behavior/physiology , Morphine/pharmacology , Morphine/administration & dosage , Rats, Sprague-Dawley , Neural Pathways/drug effects , Neural Pathways/physiology , Analgesics, Opioid/pharmacology , Analgesics, Opioid/administration & dosage , Opioid-Related Disorders/physiopathology , Extinction, Psychological/drug effects , Extinction, Psychological/physiology , Conditioning, Operant/drug effects , Conditioning, Operant/physiology
19.
Cogn Affect Behav Neurosci ; 24(2): 249-265, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38316708

ABSTRACT

Obsessive-compulsive disorder (OCD), a highly prevalent and debilitating disorder, is incompletely understood in terms of underpinning behavioural, psychological, and neural mechanisms. This is attributable to high symptomatic heterogeneity; cardinal features comprise obsessions and compulsions, including clinical subcategories. While obsessive and intrusive thoughts are arguably unique to humans, dysfunctional behaviours analogous to those seen in clinical OCD have been examined in nonhuman animals. Genetic, ethological, pharmacological, and neurobehavioural approaches all contribute to understanding the emergence and persistence of compulsive behaviour. One behaviour of particular interest is maladaptive checking, whereby human patients excessively perform checking rituals despite these serving no purpose. Dysfunctional and excessive checking is the most common symptom associated with OCD and can be readily operationalised in rodents. This review considers animal models of OCD, the neural circuitries associated with impairments in habit-based and goal-directed behaviour, and how these may link to the compulsions observed in OCD. We further review the Observing Response Task (ORT), an appetitive instrumental learning procedure that distinguishes between functional and dysfunctional checking, with translational application in humans and rodents. By shedding light on the psychological and neural bases of compulsive-like checking, the ORT has potential to offer translational insights into the underlying mechanisms of OCD, in addition to being a platform for testing psychological and neurochemical treatment approaches.


Subject(s)
Neuropsychology , Obsessive-Compulsive Disorder , Animals , Humans , Compulsive Behavior/physiopathology , Conditioning, Operant/physiology , Disease Models, Animal , Obsessive-Compulsive Disorder/physiopathology , Neuropsychology/methods
20.
Eur J Neurosci ; 59(7): 1500-1518, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38185906

ABSTRACT

Discrete alcohol cues and contexts are relapse triggers for people with alcohol use disorder, exerting particularly powerful control over behaviour when they co-occur. Here, we investigated the neural substrates subserving the capacity for alcohol-associated contexts to elevate responding to an alcohol-predictive conditioned stimulus (CS). Specifically, rats were trained in a distinct 'alcohol context' to respond by entering a fluid port during a discrete auditory CS that predicted the delivery of alcohol, and were familiarized with a 'neutral context' wherein alcohol was never available. When conditioned CS responding was tested by presenting the CS without alcohol, we found that augmenting glutamatergic activity in the nucleus accumbens (NAc) shell by microinfusing α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) reduced responding to an alcohol CS in an alcohol, but not neutral, context. Further, AMPA microinfusion robustly affected behaviour, attenuating the number, duration, and latency of CS responses selectively in the alcohol context. Although dopaminergic inputs to the NAc shell were previously shown to be necessary for CS responding in an alcohol context, here, chemogenetic excitation of ventral tegmental area (VTA) dopamine neurons and their inputs to the NAc shell did not affect CS responding. Critically, chemogenetic excitation of VTA dopamine neurons affected feeding behaviour and elevated c-fos immunoreactivity in the VTA and NAc shell, validating the chemogenetic approach. These findings enrich our understanding of the substrates underlying Pavlovian responding for alcohol and reveal that the capacity for contexts to modulate responding to discrete alcohol cues is underpinned by the NAc shell.


Subject(s)
Cues , Nucleus Accumbens , Humans , Rats , Animals , Nucleus Accumbens/physiology , Rats, Long-Evans , alpha-Amino-3-hydroxy-5-methyl-4-isoxazolepropionic Acid , Ethanol/pharmacology , Conditioning, Operant/physiology