Results 1 - 20 of 28
1.
J Med Internet Res ; 25: e39995, 2023 10 19.
Article in English | MEDLINE | ID: mdl-37856180

ABSTRACT

BACKGROUND: Increasing efforts toward the prevention of stress-related mental disorders have created a need for unobtrusive real-life monitoring of stress-related symptoms. Wearable devices have emerged as a possible solution to aid in this process, but their use in real-life stress detection has not been systematically investigated. OBJECTIVE: We aimed to determine the utility of ecological momentary assessments (EMA) and physiological arousal measured through wearable devices in detecting ecologically relevant stress states. METHODS: Using EMA combined with wearable biosensors for ecological physiological assessments (EPA), we investigated the impact of an ecological stressor (ie, a high-stakes examination week) on physiological arousal and affect compared to a control week without examinations in first-year medical and biomedical science students (51/83, 61.4% female). We first used generalized linear mixed-effects models with maximal fitting approaches to investigate the impact of examination periods on subjective stress exposure, mood, and physiological arousal. We then used machine learning models to investigate whether we could use EMA, wearable biosensors, or the combination of both to classify momentary data (ie, beeps) as belonging to examination or control weeks. We tested both individualized models using a leave-one-beep-out approach and group-based models using a leave-one-subject-out approach. RESULTS: During stressful high-stakes examination (versus control) weeks, participants reported increased negative affect and decreased positive affect. Intriguingly, physiological arousal decreased on average during the examination week. Time-resolved analyses revealed peaks in physiological arousal associated with both momentary self-reported stress exposure and self-reported positive affect. Mediation models revealed that the decreased physiological arousal in the examination week was mediated by lower positive affect during the same period. We then used machine learning to show that while individualized EMA outperformed EPA in its ability to classify beeps as originating from examinations or from control weeks (1603/4793, 33.45% and 1648/4565, 36.11% error rates, respectively), a combination of EMA and EPA yields optimal classification (1363/4565, 29.87% error rate). Finally, when comparing individualized models to group-based models, we found that the individualized models significantly outperformed the group-based models across all 3 inputs (EMA, EPA, and the combination). CONCLUSIONS: This study underscores the potential of wearable biosensors for stress-related mental health monitoring. However, it emphasizes the necessity of psychological context in interpreting physiological arousal captured by these devices, as arousal can be related to both positive and negative contexts. Moreover, our findings support a personalized approach in which momentary stress is optimally detected when referenced against an individual's own data.
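To make the modelling comparison above concrete, here is a minimal sketch (not the authors' pipeline) of how a group-based, leave-one-subject-out classifier can be contrasted with individualized, leave-one-beep-out classifiers using scikit-learn. The feature matrix, labels, and subject identifiers are simulated placeholders standing in for EMA/EPA features, examination-versus-control labels, and participant IDs.

```python
# Minimal sketch (not the authors' pipeline): contrast a group-based
# leave-one-subject-out classifier with individualized leave-one-beep-out
# classifiers.  X, y, and subjects are random placeholders, so accuracy
# will hover around chance; only the cross-validation mechanics matter here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_beeps, n_features = 600, 8                    # hypothetical EMA/EPA features per beep
X = rng.normal(size=(n_beeps, n_features))
y = rng.integers(0, 2, size=n_beeps)            # 1 = examination week, 0 = control week
subjects = rng.integers(0, 30, size=n_beeps)    # participant ID for each beep

# Group-based model: train on all subjects but one, test on the held-out subject.
group_scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                               groups=subjects, cv=LeaveOneGroupOut())

# Individualized models: one classifier per subject, leave-one-beep-out within subject.
indiv_scores = []
for s in np.unique(subjects):
    mask = subjects == s
    if len(np.unique(y[mask])) < 2:             # need both classes to fit a classifier
        continue
    scores = cross_val_score(LogisticRegression(max_iter=1000),
                             X[mask], y[mask], cv=LeaveOneOut())
    indiv_scores.append(scores.mean())

print("group-based error rate:   ", 1 - group_scores.mean())
print("individualized error rate:", 1 - np.mean(indiv_scores))
```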


Subject(s)
Biosensing Techniques , Wearable Electronic Devices , Humans , Female , Male , Affect , Self Report , Stress, Psychological/diagnosis , Ecological Momentary Assessment
2.
Cognition ; 240: 105603, 2023 11.
Article in English | MEDLINE | ID: mdl-37647742

ABSTRACT

The willingness to exert effort for reward is essential but comes at the cost of fatigue. Theories suggest fatigue increases after both physical and cognitive exertion, subsequently reducing the motivation to exert effort. Yet a mechanistic understanding of how this happens on a moment-to-moment basis, and whether mechanisms are common to both mental and physical effort, is lacking. In two studies, participants reported momentary (trial-by-trial) ratings of fatigue during an effort-based decision-making task requiring either physical (grip-force) or cognitive (mental arithmetic) effort. Using a novel computational model, we show that fatigue fluctuates from trial to trial as a function of exerted effort and predicts subsequent choices. This mechanism was shared across the domains. Selective to the cognitive domain, committing errors also induced momentary increases in feelings of fatigue. These findings provide insight into the computations underlying the influence of effortful exertion on fatigue and motivation, in both physical and cognitive domains.
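The trial-by-trial mechanism can be illustrated with a toy update rule; the sketch below uses assumed parameters and functional forms and is not the paper's fitted model. Momentary fatigue accumulates with exerted effort, recovers slightly otherwise, and discounts the value of the next effortful offer, reducing the probability of accepting it.

```python
# Toy sketch (not the paper's fitted model): fatigue accumulates with exerted
# effort, recovers when no effort is exerted, and reduces willingness to choose effort.
import numpy as np

alpha, recovery, k_fatigue = 0.15, 0.05, 0.8   # hypothetical parameters

def simulate(offers, rng=np.random.default_rng(1)):
    fatigue, choices = 0.0, []
    for reward, effort in offers:
        # subjective value of the effortful option, discounted by current fatigue
        value = reward - effort * (1.0 + k_fatigue * fatigue)
        p_accept = 1.0 / (1.0 + np.exp(-value))          # logistic choice rule
        accept = rng.random() < p_accept
        choices.append(accept)
        # fatigue rises after exertion, otherwise recovers toward zero
        fatigue = fatigue + alpha * effort if accept else max(0.0, fatigue - recovery)
    return choices

offers = [(2.0, 1.0), (3.0, 2.0), (1.5, 0.5)] * 20       # (reward, effort) pairs
print(sum(simulate(offers)), "of", len(offers), "effortful offers accepted")
```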


Subject(s)
Emotions , Motivation , Humans , Reward , Cognition
3.
Cogn Affect Behav Neurosci ; 23(3): 691-704, 2023 06.
Article in English | MEDLINE | ID: mdl-37058212

ABSTRACT

Signals related to uncertainty are frequently observed in regions of the cognitive control network, including anterior cingulate/medial prefrontal cortex (ACC/mPFC), dorsolateral prefrontal cortex (dlPFC), and anterior insular cortex. Uncertainty generally refers to conditions in which decision variables may assume multiple possible values and can arise at multiple points in the perception-action cycle, including sensory input, inferred states of the environment, and the consequences of actions. These sources of uncertainty are frequently correlated: noisy input can lead to unreliable estimates of the state of the environment, with consequential influences on action selection. Given this correlation amongst various sources of uncertainty, dissociating the neural structures underlying their estimation presents an ongoing issue: a region associated with uncertainty related to outcomes may estimate outcome uncertainty itself, or it may reflect a cascade effect of state uncertainty on outcome estimates. In this study, we derive signals of state and outcome uncertainty from mathematical models of risk and observe regions in the cognitive control network whose activity is best explained by signals related to state uncertainty (anterior insula), outcome uncertainty (dlPFC), as well as regions that appear to integrate the two (ACC/mPFC).
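As an illustration of the distinction (a generic construction, not the authors' derivation from their risk models), state uncertainty can be quantified as the entropy of a belief distribution over possible states, and outcome uncertainty as the variance of the action-conditional payoff distribution; because payoffs depend on the inferred state, noisier beliefs naturally inflate outcome uncertainty as well.

```python
# Illustrative only: one way to separate "state" from "outcome" uncertainty.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Belief over two possible world states given noisy sensory input.
belief = np.array([0.7, 0.3])
state_uncertainty = entropy(belief)                 # high when the belief is flat

# Outcome distribution for a chosen action: payoff depends on the true state.
payoffs = np.array([10.0, 0.0])                     # payoff in state 1 vs. state 2
expected = belief @ payoffs
outcome_uncertainty = belief @ (payoffs - expected) ** 2   # variance of the payoff

print(f"state uncertainty (nats): {state_uncertainty:.3f}")
print(f"outcome uncertainty (variance): {outcome_uncertainty:.3f}")
```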


Subject(s)
Magnetic Resonance Imaging , Prefrontal Cortex , Humans , Uncertainty , Gyrus Cinguli , Cognition
4.
Cogn Affect Behav Neurosci ; 23(4): 1129-1140, 2023 08.
Article in English | MEDLINE | ID: mdl-37059875

ABSTRACT

The notion that humans avoid effortful action is one of the oldest and most persistent in psychology. Influential theories of effort propose that effort valuations are made according to a cost-benefit trade-off: we tend to invest mental effort only when the benefits outweigh the costs. While these models provide a useful conceptual framework, the affective components of effort valuation remain poorly understood. Here, we examined whether primitive components of affective response (positive and negative valence, captured via facial electromyography [fEMG]) can be used to better understand valuations of cognitive effort. Using an effortful arithmetic task, we find that fEMG activity in the corrugator supercilii, thought to index negative valence, (1) tracks the anticipation and exertion of cognitive effort and (2) is attenuated in the presence of high rewards. Together, these results suggest that activity in the corrugator reflects the integration of effort costs and rewards during effortful decision-making.


Subject(s)
Decision Making , Emotions , Humans , Decision Making/physiology , Reward
5.
Psychol Rev ; 130(4): 1081-1103, 2023 07.
Article in English | MEDLINE | ID: mdl-35679204

ABSTRACT

An increasing number of cognitive, neurobiological, and computational models have been proposed in the last decade, seeking to explain how humans allocate physical or cognitive effort. Most models share conceptual similarities with motivational intensity theory (MIT), an influential classic psychological theory of motivation. Yet, little effort has been made to integrate such models, which remain confined within the explanatory level for which they were developed, that is, psychological, computational, neurobiological, and neuronal. In this critical review, we derive novel analyses of three recent computational and neuronal models of effort allocation-the expected value of control theory, the reinforcement meta-learner (RML) model, and the neuronal model of attentional effort-and establish a formal relationship between these models and MIT. Our analyses reveal striking similarities between predictions made by these models, with a shared key tenet: a nonmonotonic relationship between perceived task difficulty and effort, following a sawtooth or inverted U shape. In addition, the models converge on the proposition that the dorsal anterior cingulate cortex may be responsible for determining the allocation of effort and cognitive control. We conclude by discussing the distinct contributions and strengths of each theory toward understanding neurocomputational processes of effort allocation. Finally, we highlight the necessity for a unified understanding of effort allocation, by drawing novel connections between different theorizing of adaptive effort allocation as described by the presented models. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
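The shared nonmonotonic prediction can be reproduced with a bare-bones cost-benefit calculation in the spirit of these models; the functional forms and parameters below are assumptions for illustration, not the equations of any of the three models. Effort is chosen to maximize expected reward minus effort cost, and once difficulty exceeds what the reward can justify, optimal effort collapses to zero, yielding the inverted-U or sawtooth shape.

```python
# Sketch with assumed functional forms: optimal effort first rises with task
# difficulty, then drops to zero when the task is no longer worth the cost,
# producing the inverted-U / sawtooth pattern the reviewed models share.
import numpy as np

efforts = np.linspace(0, 1, 101)
reward, cost_weight = 1.0, 0.6

def p_success(effort, difficulty):
    # success requires effort to overcome difficulty (logistic link, assumed form)
    return 1.0 / (1.0 + np.exp(-10.0 * (effort - difficulty)))

for difficulty in [0.2, 0.4, 0.6, 0.8, 1.2]:
    expected_value = reward * p_success(efforts, difficulty) - cost_weight * efforts
    best = efforts[np.argmax(expected_value)]
    print(f"difficulty {difficulty:.1f} -> optimal effort {best:.2f}")
# With these parameters, optimal effort climbs with difficulty up to a point and
# then falls to ~0 once even maximal effort cannot earn enough reward to pay its cost.
```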


Subject(s)
Motivation , Reinforcement, Psychology , Humans , Attention , Psychological Theory
6.
J Exp Psychol Gen ; 151(10): 2324-2341, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35389742

ABSTRACT

In keeping with the view that individuals invest cognitive effort in accordance with its relative costs and benefits, reward incentives typically improve performance in tasks that require cognitive effort. At the same time, increasing effort investment may confer larger or smaller performance benefits (that is, the marginal value of effort) depending on the situation or context. On this view, we hypothesized that the magnitude of reward-induced effort modulations should depend critically on the marginal value of effort for the given context, and, furthermore, that the marginal value of effort of a context should be learned over time as a function of direct experience in the context. Using two well-characterized cognitive control tasks and simple computational models, we demonstrated that individuals appear to learn the marginal value of effort for different contexts. In a task-switching paradigm (Experiment 1), we found that participants initially exhibited reward-induced switch cost reductions across contexts (here, defined by task switch rates) but over time learned to only increase effort in contexts with a comparatively larger marginal utility of effort. In a flanker task (Experiment 2), we observed a similar learning effect across contexts defined by the proportion of incongruent trials. Together, these results enrich theories of cost-benefit effort decision-making by highlighting the importance of the (learned) marginal utility of cognitive effort. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
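A minimal sketch of the core idea follows (an assumed update rule, not the paper's fitted model): an agent tracks, separately for each context, the accuracy gain that investing effort has historically produced, and invests effort only in contexts where that learned marginal value appears worthwhile. The context names and all parameters are hypothetical.

```python
# Sketch (assumed update rule): learn the marginal value of effort per context
# as a running estimate of the accuracy gain from investing high vs. low effort.
import numpy as np

rng = np.random.default_rng(2)
lr = 0.1
# True accuracy gain from investing effort differs between contexts.
true_gain = {"high-switch block": 0.30, "low-switch block": 0.05}
learned_gain = {ctx: 0.0 for ctx in true_gain}

for trial in range(600):
    ctx = rng.choice(list(true_gain))
    # mostly invest when the learned gain looks worthwhile, but keep exploring
    invest = rng.random() < 0.2 or learned_gain[ctx] > 0.10
    p_correct = 0.6 + (true_gain[ctx] if invest else 0.0)   # 0.6 = baseline accuracy
    correct = float(rng.random() < p_correct)
    if invest:
        # delta-rule update toward the observed gain over baseline
        learned_gain[ctx] += lr * ((correct - 0.6) - learned_gain[ctx])

print({ctx: round(g, 2) for ctx, g in learned_gain.items()})
```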


Subject(s)
Decision Making , Reward , Food , Humans , Learning , Motivation
7.
Psychol Med ; 52(2): 303-313, 2022 01.
Article in English | MEDLINE | ID: mdl-32538342

ABSTRACT

BACKGROUND: Classic theories posit that depression is driven by a negative learning bias. Most studies supporting this proposition used small and selected samples, excluding patients with comorbidities. However, comorbidity between psychiatric disorders occurs in up to 70% of the population. Therefore, the generalizability of the negative bias hypothesis to a naturalistic psychiatric sample, as well as the specificity of the bias to depression, remain unclear. In the present study, we tested the negative learning bias hypothesis in a large naturalistic sample of psychiatric patients, including depression, anxiety, addiction, attention-deficit/hyperactivity disorder, and/or autism. First, we assessed whether the negative bias hypothesis of depression generalized to a heterogeneous (and hence more naturalistic) depression sample compared with controls. Second, we assessed whether negative bias extends to other psychiatric disorders. Third, we adopted a dimensional approach, using symptom severity to assess associations across the sample. METHODS: We administered a probabilistic reversal learning task to 217 patients and 81 healthy controls. According to the negative bias hypothesis, participants with depression should exhibit enhanced learning and flexibility based on punishment vs. reward. We combined analyses of traditional measures with more sensitive computational modeling. RESULTS: In contrast to previous findings, this sample of depressed patients with psychiatric comorbidities did not show a negative learning bias. CONCLUSIONS: These results speak against the generalizability of the negative learning bias hypothesis to depressed patients with comorbidities. This study highlights the importance of investigating unselected samples of psychiatric patients, which represent the vast majority of the psychiatric population.
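The negative learning bias hypothesis is commonly operationalized as asymmetric learning rates in a reinforcement learning model; the sketch below is that generic formalization, not the specific model fitted in this study. A bias toward punishment corresponds to a punishment learning rate that exceeds the reward learning rate.

```python
# Illustration of the "negative learning bias" as asymmetric learning rates in
# a Rescorla-Wagner update (not the specific model fitted in the study).
import numpy as np

def update(value, outcome, lr_reward, lr_punish):
    pe = outcome - value                        # prediction error
    lr = lr_reward if pe > 0 else lr_punish     # asymmetric learning rates
    return value + lr * pe

rng = np.random.default_rng(3)
for label, lr_r, lr_p in [("unbiased", 0.3, 0.3), ("negative bias", 0.2, 0.5)]:
    value, p_win = 0.5, 0.8                     # probabilistic reversal-learning stimulus
    for t in range(120):
        if t == 60:
            p_win = 1.0 - p_win                 # contingency reversal
        outcome = float(rng.random() < p_win)
        value = update(value, outcome, lr_r, lr_p)
    print(f"{label:>13}: value estimate after reversal = {value:.2f}")
```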


Subject(s)
Depression , Reversal Learning , Anxiety Disorders/epidemiology , Depression/epidemiology , Humans , Punishment , Reward
8.
Mind Brain Educ ; 15(4): 354-370, 2021 Nov.
Article in English | MEDLINE | ID: mdl-35875415

ABSTRACT

As the field of educational neuroscience continues to grow, questions have emerged regarding the ecological validity and applicability of this research to educational practice. Recent advances in mobile neuroimaging technologies have made it possible to conduct neuroscientific studies directly in naturalistic learning environments. We propose that embedding mobile neuroimaging research in a cycle (Matusz, Dikker, Huth, & Perrodin, 2019) involving lab-based, seminaturalistic, and fully naturalistic experiments is well suited for addressing educational questions. With this review, we take a cautious approach, discussing the valuable insights that can be gained from mobile neuroimaging technology, including electroencephalography and functional near-infrared spectroscopy, as well as the challenges posed by bringing neuroscientific methods into the classroom. Research paradigms used alongside mobile neuroimaging technology vary considerably. To illustrate this point, studies are discussed with increasingly naturalistic designs. We conclude with several ethical considerations that should be taken into account in this unique area of research.

9.
Article in English | MEDLINE | ID: mdl-33082119

ABSTRACT

BACKGROUND: Prior work has proposed that major depressive disorder (MDD) is associated with a specific cognitive bias: patients with depression seem to learn more from punishment than from reward. This learning bias has been associated with blunting of reward-related neural responses in the striatum. A key question is whether negative learning bias is also present in patients with MDD and comorbid disorders and whether this bias is specific to depression or shared across disorders. METHODS: We employed a transdiagnostic approach assessing a heterogeneous group of (nonpsychotic) psychiatric patients from the MIND-Set (Measuring Integrated Novel Dimensions in Neurodevelopmental and Stress-Related Mental Disorders) cohort with and without MDD but also with anxiety, attention-deficit/hyperactivity disorder, and/or autism (n = 66) and healthy control subjects (n = 24). To investigate reward and punishment learning, we employed a deterministic reversal learning task with functional magnetic resonance imaging. RESULTS: In contrast to previous studies, patients with MDD did not exhibit impaired reward learning or reduced reward-related neural activity anywhere in the brain. Interestingly, we observed consistently increased neural responses in the bilateral lateral prefrontal cortex of patients when they received a surprising reward. This increase was not specific to MDD, but generalized to anxiety, attention-deficit/hyperactivity disorder, and autism. Critically, increased prefrontal activity to surprising reward scaled with transdiagnostic symptom severity, particularly that associated with concentration and attention, as well as the number of diagnoses; patients with more comorbidities showed a stronger prefrontal response to surprising reward. CONCLUSIONS: Prefrontal enhancement may reflect compensatory working memory recruitment, possibly to counteract the inability to swiftly update reward expectations. This neural mechanism may provide a candidate transdiagnostic index of psychiatric severity.


Subject(s)
Depressive Disorder, Major , Depression , Humans , Learning , Punishment , Reward
10.
J Exp Psychol Gen ; 150(2): 306-313, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32790463

ABSTRACT

Although people seek to avoid expenditure of cognitive effort, reward incentives can increase investment of processing resources in challenging situations that require cognitive control, resulting in improved performance. At the same time, subjective value is relative, rather than absolute: The value of a reward is increased if the local context is reward-poor versus reward-rich. Although this notion is supported by work in economics and psychology, we propose that reward relativity should also play a critical role in the cost-benefit computations that inform cognitive effort allocation. Here we demonstrate that reward-induced cognitive effort allocation in a task-switching paradigm is sensitive to reward context, consistent with the notion of relative value. Informed by predictions of a computational model of divisive reward normalization, we demonstrate that reward-induced switch cost reductions depend critically upon reward context, such that the same reward amount engenders greater control allocation in an impoverished versus a rich reward context. Succinctly, these results confirm that reward relativity factors into the value computation driving effort allocation, revealing that motivated cognitive control, like choice, is all relative. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
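Divisive reward normalization can be stated compactly; the sketch below uses a generic form with illustrative parameters rather than the model actually fitted in the paper. Subjective value is the reward amount divided by a saturation constant plus the weighted sum of rewards in the local context, so the same absolute reward is worth more, and should recruit more control, in a reward-poor context.

```python
# Generic divisive reward normalization (illustrative, not the paper's fit):
# subjective value = R / (sigma + w * sum of context rewards).
def normalized_value(reward, context_rewards, sigma=1.0, w=1.0):
    return reward / (sigma + w * sum(context_rewards))

same_reward = 10.0
poor_context = [2.0, 3.0]        # other rewards recently on offer were small
rich_context = [20.0, 30.0]      # other rewards recently on offer were large

print("value in poor context:", round(normalized_value(same_reward, poor_context), 2))
print("value in rich context:", round(normalized_value(same_reward, rich_context), 2))
# The identical 10-unit reward is subjectively larger in the reward-poor context,
# and should therefore recruit more cognitive control.
```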


Subject(s)
Cognition/physiology , Executive Function/physiology , Reward , Decision Making/physiology , Humans , Reaction Time/physiology
11.
Nat Hum Behav ; 4(4): 412-422, 2020 04.
Article in English | MEDLINE | ID: mdl-31932692

ABSTRACT

Activity in the dorsal anterior cingulate cortex (dACC) is observed across a variety of contexts, and its function remains intensely debated in the field of cognitive neuroscience. While traditional views emphasize its role in inhibitory control (suppressing prepotent, incorrect actions), recent proposals suggest a more active role in motivated control (invigorating actions to obtain rewards). Lagging behind empirical findings, formal models of dACC function primarily focus on inhibitory control, highlighting surprise, choice difficulty and value of control as key computations. Although successful in explaining dACC involvement in inhibitory control, it remains unclear whether these mechanisms generalize to motivated control. In this study, we derive predictions from three prominent accounts of dACC and test these with functional magnetic resonance imaging during value-based decision-making under time pressure. We find that the single mechanism of surprise best accounts for activity in dACC during a task requiring response invigoration, suggesting surprise signalling as a shared driver of inhibitory and motivated control.
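The surprise account tested here is usually formalized as the improbability of the observed outcome. The snippet below is a generic illustration (not the study's exact regressor construction): trial-wise surprise is the negative log probability the model assigned to what actually happened, z-scored for use as a parametric regressor.

```python
# Generic surprise regressor: -log P(observed outcome), an assumption-based
# illustration of the kind of predictor compared against dACC activity.
import numpy as np

# Model-estimated probabilities of the outcome that actually occurred, per trial.
p_observed = np.array([0.80, 0.75, 0.10, 0.60, 0.05, 0.90])

surprise = -np.log(p_observed)          # large when the outcome was improbable
surprise_z = (surprise - surprise.mean()) / surprise.std()   # z-scored regressor for a GLM

for p, s in zip(p_observed, surprise):
    print(f"P(outcome) = {p:.2f} -> surprise = {s:.2f} nats")
```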


Subject(s)
Decision Making/physiology , Gyrus Cinguli/physiology , Adult , Anticipation, Psychological/physiology , Female , Functional Neuroimaging , Gyrus Cinguli/diagnostic imaging , Humans , Magnetic Resonance Imaging , Male , Models, Neurological , Reaction Time , Reward , Young Adult
12.
Psychophysiology ; 57(2): e13481, 2020 02.
Article in English | MEDLINE | ID: mdl-31578739

ABSTRACT

Reward processing is influenced by reward magnitude, as previous EEG studies showed changes in the amplitude of the feedback-related negativity (FRN) and reward positivity (RewP), or in the power of fronto-medial theta (FMθ). However, it remains unclear whether these changes are driven by increased reward sensitivity, altered reward predictions, enhanced cognitive control, or a combination of these effects. To address this question, we asked 36 participants to perform a simple gambling task where feedback valence (reward vs. no-reward), its magnitude (small vs. large reward), and expectancy (expected vs. unexpected) were manipulated in a factorial design, while 64-channel EEG was recorded concurrently. We performed standard ERP analyses (FRN and RewP) as well as time-frequency decompositions (FMθ) of feedback-locked EEG data. Subjective reports showed that large rewards were more liked and expected than small ones. At the EEG level, increasing magnitude led to a larger RewP irrespective of expectancy, whereas the FRN was not influenced by this manipulation. In comparison, FMθ power was overall increased when reward magnitude was large, except when it was unexpected. These results show dissociable effects of reward magnitude on the RewP and FMθ power. Further, they suggest that, although a large reward magnitude boosts reward processing (RewP), it can nonetheless reduce the need for enhanced cognitive control (FMθ) when the reward is unexpected. We discuss these new results in terms of optimistic bias or positive mood resulting from an increased reward magnitude.
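Frontomedial theta power of the kind analyzed here is typically estimated from feedback-locked epochs by averaging spectral power in the 4-8 Hz band. The sketch below is a bare-bones, single-channel version using SciPy with simulated data and an assumed sampling rate; it does not reproduce the study's 64-channel preprocessing or time-frequency decomposition.

```python
# Bare-bones frontomedial theta (4-8 Hz) power estimate from feedback-locked
# epochs at one channel, using Welch's method (illustrative pipeline only).
import numpy as np
from scipy.signal import welch

fs = 256                                        # sampling rate in Hz (assumed)
rng = np.random.default_rng(4)
epochs = rng.normal(size=(40, fs))              # 40 simulated 1 s epochs at a frontomedial channel

freqs, psd = welch(epochs, fs=fs, nperseg=128, axis=-1)
theta_mask = (freqs >= 4) & (freqs <= 8)
theta_power = psd[:, theta_mask].mean(axis=1)   # one theta-power value per epoch

print("mean FM-theta power across epochs:", theta_power.mean())
```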


Subject(s)
Anticipation, Psychological/physiology , Evoked Potentials/physiology , Executive Function/physiology , Feedback, Psychological/physiology , Motivation/physiology , Prefrontal Cortex/physiology , Reward , Theta Rhythm/physiology , Adult , Female , Humans , Male , Young Adult
13.
Int J Psychophysiol ; 146: 117-124, 2019 12.
Article in English | MEDLINE | ID: mdl-31644932

ABSTRACT

The ability to exert control has been widely investigated as a hallmark of adaptive behaviour. Dopamine is recognized as the key neuromodulator mediating various control-related processes. The neural mechanisms underlying the subjective perception of being in control, or Locus of Control (LOC), are, however, less clear. LOC indicates the subjective tendency to attribute environmental outcomes to one's actions (internal LOC) or instead to external uncontrollable factors (external LOC). Here we hypothesized that dopamine levels also relate to LOC. Previous work shows that dopamine signaling mediates learning of action-outcome relationships, outcome predictability, and opportunity cost. Prominent theories propose dopamine dysregulation as the key pathogenetic mechanism in schizophrenia and depression. Critically, external LOC is a risk factor for schizophrenia and depression, and predicts increased vulnerability to stress. However, a direct link between LOC and dopamine levels in healthy individuals had not been demonstrated. The purpose of our study was to investigate this link. Using [11C]raclopride Positron Emission Tomography, we tested the relationship between D2 receptor binding in the striatum and LOC (measured with the Rotter Locus of Control scale) in 15 healthy volunteers. Our results show a large and positive correlation: increased striatal D2 binding was associated with external LOC. This finding opens promising avenues for the study of several psychological impairments that have been associated with both dopamine and LOC, such as addiction, schizophrenia, and depression.
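The reported relationship is a simple between-subject correlation. The sketch below reruns that kind of analysis on made-up numbers (not the study's data), correlating striatal D2 binding potential with Rotter LOC scores, where higher scores indicate a more external locus of control.

```python
# Minimal sketch of the reported analysis with made-up numbers (not study data):
# correlate striatal D2 receptor binding potential with Rotter LOC scores.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
d2_binding = rng.normal(loc=2.5, scale=0.4, size=15)            # 15 volunteers (simulated)
loc_scores = 8 + 4 * (d2_binding - 2.5) + rng.normal(0, 1, 15)  # higher = more external

r, p = pearsonr(d2_binding, loc_scores)
print(f"r = {r:.2f}, p = {p:.3f}")   # a positive r mirrors the reported direction
```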


Subject(s)
Carbon Radioisotopes/metabolism , Corpus Striatum/metabolism , Positron-Emission Tomography/methods , Raclopride/metabolism , Receptors, Dopamine D2/metabolism , Adult , Corpus Striatum/diagnostic imaging , Dopamine Antagonists/metabolism , Female , Humans , Male , Protein Binding/physiology , Young Adult
14.
Psychophysiology ; 56(10): e13417, 2019 10.
Article in English | MEDLINE | ID: mdl-31175676

ABSTRACT

Based on reward and difficulty information, people can strategically adjust proactive cognitive control. fMRI research shows that motivated proactive control is implemented through fronto-parietal control networks that are triggered by reward and difficulty cues. Here, we investigate electrophysiological signatures of proactive control. Previously, the contingent negative variation (CNV) in the ERPs and oscillatory power in the theta (4-8 Hz) and alpha (8-14 Hz) bands have been suggested as signatures of control implementation. However, experimental designs did not always separate control implementation from motor preparation. Critically, we used a mental calculation task to investigate effects of proactive control implementation on the CNV and on theta and alpha power, in the absence of motor preparation. In the period leading up to task onset, we found a more negative CNV, increased theta power, and decreased alpha power for hard versus easy calculations, showing increased proactive control implementation when a difficult task was expected. These three measures also correlated with behavioral performance, both across trials and across subjects. In addition to scalp EEG in healthy participants, we collected intracranial local field potential recordings in an epilepsy patient. We observed a slow-drift component that was more pronounced for hard trials in a hippocampal location, possibly reflecting task-specific preparation for hard mental calculations. The current study thus shows that difficulty information triggers proactive control in the absence of motor preparation and elucidates its neurophysiological signatures.


Subject(s)
Anticipation, Psychological/physiology , Cognition/physiology , Evoked Potentials/physiology , Scalp/physiology , Adult , Brain/diagnostic imaging , Brain/physiopathology , Contingent Negative Variation/physiology , Epilepsy/diagnostic imaging , Epilepsy/physiopathology , Female , Hippocampus/physiopathology , Humans , Magnetic Resonance Imaging , Male , Neuroimaging , Young Adult
16.
Cogn Affect Behav Neurosci ; 19(3): 619-636, 2019 06.
Article in English | MEDLINE | ID: mdl-30607834

ABSTRACT

Efficient integration of environmental information is critical in goal-directed behavior. Motivational information regarding potential rewards and costs (such as required effort) affects performance and decisions whether to engage in a task. While it is generally acknowledged that costs and benefits are integrated to determine the level of effort to be exerted, how this integration occurs remains an open question. Computational models of high-level cognition postulate serial processing of task-relevant features and demonstrate that prioritizing the processing of one feature over another can affect performance. We investigated the hypothesis that motivationally relevant task features also may be processed serially, that people may prioritize either benefit or cost information, and that artificially controlling prioritization may be beneficial for performance (by improving task accuracy) and decision-making (by boosting the willingness to engage in effortful trials). We manipulated prioritization by altering the order of presentation of effort and reward cues in two experiments involving preparation for effortful performance and effort-based decision-making. We simulated the tasks with a recent model of prefrontal cortex (Alexander & Brown in Neural Computation, 27(11), 2354-2410, 2015). Human behavior was in line with model predictions: prioritizing reward vs. effort differentially affected performance vs. decision. Prioritizing reward was beneficial for performance, showing a striking increase in accuracy, especially when a large reward was offered for a difficult task. Counterintuitively (yet predicted by the model), prioritizing reward resulted in a blunted reward effect on decisions. Conversely, prioritizing effort increased reward impact on decisions to engage. These results highlight the importance of controlling prioritization of motivational cues in neuroimaging studies.


Subject(s)
Decision Making , Models, Psychological , Motivation , Psychomotor Performance , Reward , Cues , Female , Humans , Male , Photic Stimulation , Young Adult
17.
Neuropsychologia ; 123: 106-115, 2019 02 04.
Article in English | MEDLINE | ID: mdl-29705065

ABSTRACT

Preparing for a mentally demanding task calls upon cognitive and motivational resources. The underlying neural implementation of these mechanisms is receiving growing attention because of its implications for professional, social, and medical contexts. While several fMRI studies converge in assigning a crucial role to a cortico-subcortical network including the Anterior Cingulate Cortex (ACC) and striatum, the involvement of the Dorsolateral Prefrontal Cortex (DLPFC) during mental effort anticipation has yet to be replicated. This study was designed to target the DLPFC contribution to anticipation of a difficult task using functional Near Infrared Spectroscopy (fNIRS), a more cost-effective tool for measuring cortical hemodynamics. We adapted a validated mental effort task, where participants performed easy and difficult mental calculations, and measured DLPFC activity during the anticipation phase. As hypothesized, DLPFC activity increased during anticipation of a hard task as compared to an easy task. Besides replicating previous fMRI work, these results establish fNIRS as an effective tool to investigate cortical contributions to anticipation of effortful behavior. This is especially useful when testing large samples (e.g., to target individual differences), populations with contraindications for functional MRI (e.g., infants or patients with metal implants), or subjects in more naturalistic environments (e.g., work or sport).


Subject(s)
Anticipation, Psychological/physiology , Prefrontal Cortex/physiology , Task Performance and Analysis , Adult , Brain Mapping , Female , Humans , Male , Mathematical Concepts , Spectroscopy, Near-Infrared , Young Adult
18.
PLoS Comput Biol ; 14(8): e1006370, 2018 08.
Article in English | MEDLINE | ID: mdl-30142152

ABSTRACT

Optimal decision-making is based on integrating information from several dimensions of decisional space (e.g., reward expectation, cost estimation, effort exertion). Despite considerable empirical and theoretical efforts, the computational and neural bases of such multidimensional integration have remained largely elusive. Here we propose that the current theoretical stalemate may be broken by considering the computational properties of a cortical-subcortical circuit involving the dorsal anterior cingulate cortex (dACC) and the brainstem neuromodulatory nuclei: the ventral tegmental area (VTA) and locus coeruleus (LC). From this perspective, the dACC optimizes decisions about stimuli and actions and, using the same computational machinery, also modulates cortical functions (meta-learning) via neuromodulatory control (VTA and LC). We implemented this theory in a novel neuro-computational model, the Reinforcement Meta Learner (RML). We outline how the RML captures critical empirical findings from an unprecedented range of theoretical domains and parsimoniously integrates various previous proposals on dACC functioning.
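As a toy illustration of the meta-learning idea, and explicitly not the RML's equations, the sketch below shows a learner that uses its own prediction-error machinery to regulate a control parameter: the learning rate is pushed up when unsigned prediction errors stay large (the environment looks volatile) and decays back when predictions are accurate. All parameters are hypothetical.

```python
# Toy meta-learning illustration (not the RML's equations): the learning rate
# itself is regulated by a running average of unsigned prediction errors.
import numpy as np

rng = np.random.default_rng(6)
value, lr, avg_abs_pe = 0.5, 0.2, 0.0
meta_lr, lr_min, lr_max = 0.05, 0.05, 0.8

p_reward = 0.9
for t in range(300):
    if t % 100 == 0 and t > 0:
        p_reward = 1.0 - p_reward             # environment changes (volatility)
    outcome = float(rng.random() < p_reward)
    pe = outcome - value
    value += lr * pe                          # first-level learning
    # meta-level: persistently large surprises push the learning rate up
    avg_abs_pe += meta_lr * (abs(pe) - avg_abs_pe)
    lr = np.clip(lr_min + avg_abs_pe, lr_min, lr_max)

print(f"final value estimate {value:.2f}, adapted learning rate {lr:.2f}")
```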


Subject(s)
Decision Making/physiology , Gyrus Cinguli/physiology , Animals , Brain Mapping/methods , Brain Stem/physiology , Cognition/physiology , Computer Simulation , Humans , Learning/physiology , Locus Coeruleus/physiology , Models, Theoretical , Reinforcement, Psychology , Reward , Ventral Tegmental Area/physiology
19.
J Cogn Neurosci ; 30(8): 1061-1065, 2018 08.
Article in English | MEDLINE | ID: mdl-28562208

ABSTRACT

Sometime in the past two decades, neuroimaging and behavioral research converged on pFC as an important locus of cognitive control and decision-making, and that seems to be the last thing anyone has agreed on since. Every year sees an increase in the number of roles and functions attributed to distinct subregions within pFC, roles that may explain behavior and neural activity in one context but might fail to generalize across the many behaviors in which each region is implicated. Emblematic of this ongoing proliferation of functions is dorsal ACC (dACC). Novel tasks that activate dACC are followed by novel interpretations of dACC function, and each new interpretation adds to the number of functionally specific processes contained within the region. This state of affairs, a recurrent and persistent behavior followed by an illusory and transient relief, can be likened to behavioral pathology. In Journal of Cognitive Neuroscience, 29:10 we collect contributed articles that seek to move the conversation beyond specific functions of subregions of pFC, focusing instead on general roles that support pFC involvement in a wide variety of behaviors and across a variety of experimental paradigms.


Subject(s)
Decision Making/physiology , Gyrus Cinguli/physiology , Learning/physiology , Prefrontal Cortex/physiology , Humans , Models, Neurological , Neural Pathways/physiology
20.
J Cogn Neurosci ; 29(10): 1633-1645, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28654358

ABSTRACT

Human behavior is strongly driven by the pursuit of rewards. In daily life, however, benefits mostly come at a cost, often requiring that effort be exerted to obtain potential benefits. Medial PFC (MPFC) and dorsolateral PFC (DLPFC) are frequently implicated in the expectation of effortful control, showing increased activity as a function of predicted task difficulty. Such activity partially overlaps with expectation of reward and has been observed both during decision-making and during task preparation. Recently, novel computational frameworks have been developed to explain activity in these regions during cognitive control, based on the principle of prediction and prediction error (predicted response-outcome [PRO] model [Alexander, W. H., & Brown, J. W. Medial prefrontal cortex as an action-outcome predictor. Nature Neuroscience, 14, 1338-1344, 2011], hierarchical error representation [HER] model [Alexander, W. H., & Brown, J. W. Hierarchical error representation: A computational model of anterior cingulate and dorsolateral prefrontal cortex. Neural Computation, 27, 2354-2410, 2015]). Despite the broad explanatory power of these models, it is not clear whether they can also accommodate effects related to the expectation of effort observed in MPFC and DLPFC. Here, we propose a translation of these computational frameworks to the domain of effort-based behavior. First, we discuss how the PRO model, based on prediction error, can explain effort-related activity in MPFC, by reframing effort-based behavior in a predictive context. We propose that MPFC activity reflects monitoring of motivationally relevant variables (such as effort and reward), by coding expectations and discrepancies from such expectations. Moreover, we derive behavioral and neural model-based predictions for healthy controls and clinical populations with impairments of motivation. Second, we illustrate the possible translation to effort-based behavior of the HER model, an extended version of PRO model based on hierarchical error prediction, developed to explain MPFC-DLPFC interactions. We derive behavioral predictions that describe how effort and reward information is coded in PFC and how changing the configuration of such environmental information might affect decision-making and task performance involving motivation.
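The prediction and prediction-error principle behind the PRO model can be conveyed with a compressed sketch; this is an illustration of the general principle, not Alexander and Brown's full implementation. Outcome predictions are learned from experience, positive surprise signals unexpected occurrences, and negative surprise signals unexpected non-occurrences, so in a difficult (high-effort) block where errors become expected, an eventual success is itself surprising.

```python
# Compressed illustration of the prediction/prediction-error principle behind
# the PRO model (not Alexander & Brown's full implementation): predictions of
# possible outcomes are learned, and MPFC-like activity is modeled as the
# discrepancy between predicted and observed outcomes.
import numpy as np

outcomes = ["success", "error"]
prediction = np.array([0.5, 0.5])     # learned outcome predictions
lr = 0.2

def trial(observed_index, prediction):
    observed = np.zeros(len(outcomes))
    observed[observed_index] = 1.0
    pos_surprise = np.maximum(observed - prediction, 0)   # unexpected occurrences
    neg_surprise = np.maximum(prediction - observed, 0)   # unexpected non-occurrences
    prediction = prediction + lr * (observed - prediction)
    return prediction, pos_surprise.sum(), neg_surprise.sum()

# A hard (high-effort) block in which errors are common: predictions shift toward
# "error", so an eventual success becomes surprising.
for obs in [1, 1, 1, 1, 0]:
    prediction, pos, neg = trial(obs, prediction)
    print(f"observed {outcomes[obs]:>7}: prediction={np.round(prediction, 2)}, "
          f"surprise(+)={pos:.2f}, surprise(-)={neg:.2f}")
```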


Subject(s)
Computer Simulation , Decision Making/physiology , Models, Neurological , Motivation/physiology , Prefrontal Cortex/physiology , Prefrontal Cortex/physiopathology , Humans , Neural Pathways/physiology , Neural Pathways/physiopathology