Results 1 - 6 of 6
1.
Psychol Med ; 53(5): 1850-1859, 2023 04.
Article in English | MEDLINE | ID: mdl-37310334

ABSTRACT

BACKGROUND: Apathy, a disabling and poorly understood neuropsychiatric symptom, is characterised by impaired self-initiated behaviour. It has been hypothesised that the opportunity cost of time (OCT) may be a key computational variable linking self-initiated behaviour with motivational status. OCT represents the amount of reward forgone per second if no action is taken. Using a novel behavioural task and computational modelling, we investigated the relationship between OCT, self-initiation and apathy. We predicted that higher OCT would engender shorter action latencies, and that individuals with greater sensitivity to OCT would have higher behavioural apathy. METHODS: We modulated the OCT in a novel task called the 'Fisherman Game'. Participants freely chose when to self-initiate actions either to collect rewards or, on occasion, to complete non-rewarding actions. We measured the relationship between action latencies, OCT and apathy for each participant across two independent non-clinical studies, one under laboratory conditions (n = 21) and one online (n = 90). 'Average-reward' reinforcement learning was used to model our data. We replicated our findings across both studies. RESULTS: We show that the latency of self-initiation is driven by changes in the OCT. Furthermore, we demonstrate for the first time that, among younger adults, participants with higher apathy showed greater sensitivity to changes in OCT. Our model shows that apathetic individuals experienced the greatest change in subjective OCT during our task, as a consequence of being more sensitive to rewards. CONCLUSIONS: Our results suggest that OCT is an important variable for determining free-operant action initiation and for understanding apathy.
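
The average-reward logic behind OCT can be sketched in a few lines. Everything below (the saturating benefit curve, the latency grid, the rate values) is a hypothetical illustration of the general principle, not the authors' task or fitted model:

```python
# Sketch of the opportunity-cost-of-time idea from average-reward RL:
# waiting t seconds forgoes rho * t units of reward, where rho is the
# environment's average reward rate, so richer environments should
# produce shorter self-initiation latencies.

def opportunity_cost(rho, latency):
    """Reward forgone by waiting `latency` seconds at average rate `rho`."""
    return rho * latency

def best_latency(rho, benefit, latencies):
    """Latency maximizing benefit(t) minus the opportunity cost rho * t."""
    return max(latencies, key=lambda t: benefit(t) - opportunity_cost(rho, t))

# Hypothetical benefit curve: acting too hastily is costly, and the
# benefit of waiting saturates with latency.
benefit = lambda t: 1.0 - 1.0 / (1.0 + t)

grid = [0.5 * k for k in range(1, 21)]
slow = best_latency(rho=0.05, benefit=benefit, latencies=grid)  # poor environment
fast = best_latency(rho=0.50, benefit=benefit, latencies=grid)  # rich environment
```

A higher average reward rate makes every second of delay more expensive, so the chosen latency shrinks, which is the qualitative prediction the abstract describes.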


Subject(s)
Apathy , Adult , Humans , Cognition , Computer Simulation , Motivation , Reinforcement, Psychology
2.
PLoS Comput Biol ; 15(6): e1007093, 2019 06.
Article in English | MEDLINE | ID: mdl-31233559

ABSTRACT

Humans and other animals are able to discover underlying statistical structure in their environments and exploit it to achieve efficient and effective performance. However, such structure is often difficult to learn and use because it is obscure, involving long-range temporal dependencies. Here, we analysed behavioural data from an extended experiment with rats, showing that the subjects learned the underlying statistical structure, albeit at times making imperfect immediate inferences about their current state within it. We accounted for their behaviour using a Hidden Markov Model, in which recent observations are integrated with evidence from the past. We found that over the course of training, subjects came to track their progress through the task more accurately, a change that our model largely attributed to improved integration of past evidence. This learning reflected the structure of the task: reliance on recent, potentially misleading observations decreased.
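
The kind of integration such a model performs can be illustrated with a generic HMM forward-filtering step. This is the standard construction, not the paper's fitted model; the two-state matrices and numbers below are hypothetical:

```python
def hmm_filter_step(prior, transition, likelihood):
    """One forward-filtering step of a Hidden Markov Model: propagate the
    prior through the transition matrix, weight by the observation
    likelihood, and renormalize."""
    n = len(prior)
    predicted = [sum(prior[i] * transition[i][j] for i in range(n)) for j in range(n)]
    unnorm = [predicted[j] * likelihood[j] for j in range(n)]
    z = sum(unnorm)
    return [p / z for p in unnorm]

# Two hidden states with "sticky" transitions, so past evidence persists.
T = [[0.9, 0.1],
     [0.1, 0.9]]
belief = [0.9, 0.1]                              # strong past evidence for state 0
belief = hmm_filter_step(belief, T, [0.4, 0.6])  # ambiguous observation favouring state 1
```

Because the transition model carries past evidence forward, a weakly contrary observation shifts the posterior only partially: the belief in state 0 remains above 0.5, mirroring how the animals' accumulated evidence can override misleading recent observations.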


Subject(s)
Models, Biological , Reward , Spatial Learning/physiology , Animals , Behavior, Animal/physiology , Computational Biology , Rats , Task Performance and Analysis
3.
PLoS Comput Biol ; 10(12): e1003894, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25474151

ABSTRACT

Given the option, humans and other animals elect to distribute their time between work and leisure, rather than choosing all of one and none of the other. Traditional accounts of partial allocation have characterised behavior on a macroscopic timescale, reporting and studying the mean times spent in work or leisure. However, averaging over the more microscopic processes that govern choices is known to pose tricky theoretical problems, and also eschews any possibility of direct contact with the neural computations involved. We develop a microscopic framework, formalized as a semi-Markov decision process with possibly stochastic choices, in which subjects approximately maximise their expected returns by making momentary commitments to one or the other activity. We show how macroscopic utilities arise from microscopic ones, and demonstrate how facets such as imperfect substitutability can arise in a more straightforward microscopic manner.
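
The core microscopic mechanism, stochastic momentary commitments, can be sketched as a softmax choice between the instantaneous returns of work and leisure. The values and inverse temperature below are hypothetical, chosen only to show that partial allocation falls out naturally:

```python
import math
import random

# Sketch of momentary stochastic commitment: at each decision point the
# agent picks work or leisure via a softmax over their instantaneous
# returns, so time is shared between the two rather than all-or-none.

def softmax_choice(q_work, q_leisure, beta, rng):
    """Choose 'work' with probability given by a two-option softmax."""
    p_work = 1.0 / (1.0 + math.exp(-beta * (q_work - q_leisure)))
    return "work" if rng.random() < p_work else "leisure"

rng = random.Random(0)  # seeded for reproducibility
choices = [softmax_choice(q_work=1.0, q_leisure=0.8, beta=2.0, rng=rng)
           for _ in range(10_000)]
frac_work = choices.count("work") / len(choices)
```

Even though work is strictly more valuable here, the agent still spends a substantial fraction of moments on leisure; the macroscopic allocation (about 60% work for these numbers) emerges from accumulating many microscopic choices.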


Subject(s)
Decision Making , Leisure Activities , Models, Biological , Work , Algorithms , Animals , Computational Biology , Humans , Nonlinear Dynamics , Stochastic Processes
4.
J R Soc Interface ; 11(91): 20130969, 2014 Feb 06.
Article in English | MEDLINE | ID: mdl-24284898

ABSTRACT

Dividing limited time between work and leisure when both have their attractions is a common everyday decision. We provide a normative control-theoretic treatment of this decision that bridges economic and psychological accounts. We show how our framework applies to free-operant behavioural experiments in which subjects are required to work (depressing a lever) for sufficient total time (called the price) to receive a reward. When the microscopic benefit-of-leisure increases nonlinearly with duration, the model generates behaviour that qualitatively matches various microfeatures of subjects' choices, including the distribution of leisure bout durations as a function of the pay-off. We relate our model to traditional accounts by deriving macroscopic, molar, quantities from microscopic choices.
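
The price schedule described above, and the derivation of molar quantities from microscopic bouts, can be sketched with a toy simulation. The bout durations and price are hypothetical, and the fixed alternation below is a deliberate simplification of the stochastic bout choices in the model:

```python
# Sketch of a free-operant price schedule: the subject must accumulate
# `price` seconds of work (lever depression) to earn each reward, and
# interleaves work bouts with leisure bouts. Molar quantities (fraction
# of time working, reward rate) are then computed from the micro-choices.

def simulate_session(price, work_bout, leisure_bout, session_time):
    t, worked, rewards, paid = 0.0, 0.0, 0, 0.0
    while t < session_time:
        # Work until the price is fully paid or the bout ends.
        w = min(work_bout, price - paid)
        t += w
        worked += w
        paid += w
        if paid >= price:
            rewards += 1
            paid = 0.0
        t += leisure_bout  # then take a leisure bout
    return worked / t, rewards / t

frac_work, reward_rate = simulate_session(price=4.0, work_bout=2.0,
                                          leisure_bout=1.0, session_time=300.0)
```

With these numbers each reward requires two 2 s work bouts separated by 1 s leisure bouts, giving a 6 s cycle: the molar fraction of time working is 2/3 and the reward rate 1/6 per second, both derived from, rather than assumed alongside, the microscopic bout structure.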


Subject(s)
Behavior , Reinforcement, Psychology , Algorithms , Animals , Brain/physiology , Decision Making , Humans , Learning , Leisure Activities , Markov Chains , Models, Theoretical , Probability , Reward , Stochastic Processes , Time Factors
5.
PLoS Comput Biol ; 9(6): e1003099, 2013.
Article in English | MEDLINE | ID: mdl-23825935

ABSTRACT

Behavioural and neurophysiological studies in primates have increasingly shown the involvement of urgency signals during the temporal integration of sensory evidence in perceptual decision-making. Neuronal correlates of such signals have been found in the parietal cortex, and separate studies have demonstrated attention-induced gain modulation of both excitatory and inhibitory neurons. Although previous computational models of decision-making have incorporated gain modulation, their abstract forms do not permit an understanding of the contribution of inhibitory gain modulation. Thus, the effects of co-modulating both excitatory and inhibitory neuronal gains on decision-making dynamics and behavioural performance remain unclear. In this work, we incorporate time-dependent co-modulation of the gains of both excitatory and inhibitory neurons into our previous biologically based decision circuit model. We base our computational study in the context of two classic motion-discrimination tasks performed in animals. Our model shows that by simultaneously increasing the gains of both excitatory and inhibitory neurons, a variety of the observed dynamic neuronal firing activities can be replicated. In particular, the model can exhibit winner-take-all decision-making behaviour with higher firing rates and within a significantly more robust model parameter range. It also exhibits short-tailed reaction time distributions even when operating near a dynamical bifurcation point. The model further shows that neuronal gain modulation can compensate for weaker recurrent excitation in a decision neural circuit, and support decision formation and storage. Higher neuronal gain is also suggested in the more cognitively demanding reaction-time version of the task than in the fixed-delay version. Using the exact temporal delays from the animal experiments, fast recruitment of gain co-modulation is shown to maximize reward rate, with a timescale that is surprisingly near the experimentally fitted value. Our work provides insights into the simultaneous and rapid modulation of excitatory and inhibitory neuronal gains, which enables flexible, robust, and optimal decision-making.
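
The qualitative effect of gain co-modulation on winner-take-all competition can be sketched with a much simpler model than the paper's biophysical circuit: two rate units with mutual inhibition, where a single factor scales both the excitatory drive and the inhibitory coupling. All parameters below are illustrative assumptions:

```python
# Sketch (not the published circuit model): two competing rate units with
# mutual inhibition; `gain` multiplies both the excitatory input and the
# inhibitory coupling, mimicking excitatory/inhibitory gain co-modulation.

def simulate(gain, inputs, steps=2000, dt=0.001, tau=0.02, w_inh=1.5):
    """Euler-integrate tau * dr/dt = -r + [gain * (input - w_inh * r_other)]+."""
    r = [0.0, 0.0]
    for _ in range(steps):
        drive = [gain * (inputs[i] - w_inh * r[1 - i]) for i in range(2)]
        r = [r[i] + (dt / tau) * (-r[i] + max(0.0, drive[i])) for i in range(2)]
    return r

# Slightly stronger evidence for unit 0; compare low and high common gain.
lo = simulate(gain=1.0, inputs=[1.0, 0.9])
hi = simulate(gain=3.0, inputs=[1.0, 0.9])
```

In both regimes the favoured unit wins and the loser is suppressed toward zero, but the higher common gain yields a winner with a substantially higher firing rate, consistent with the abstract's observation that co-modulated gain produces winner-take-all behaviour at elevated rates.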


Subject(s)
Decision Making , Models, Theoretical , Humans , Reaction Time , Task Performance and Analysis
6.
Phys Rev E Stat Nonlin Soft Matter Phys ; 80(6 Pt 2): 066213, 2009 Dec.
Article in English | MEDLINE | ID: mdl-20365260

ABSTRACT

We investigate the role of the learning rate in a Kuramoto model of coupled phase oscillators in which the coupling coefficients vary dynamically according to a Hebbian learning rule. According to Hebbian theory, a synapse between two neurons is strengthened if they are simultaneously coactive. Two stable synchronized clusters in antiphase emerge when the learning rate is larger than a critical value. In such a fast-learning scenario, the network eventually constructs itself into an all-to-all coupled structure, regardless of the initial connectivity. In contrast, when learning is slower than this critical value, only a single synchronized cluster can develop. Extending our analysis, we explore whether self-development of neuronal networks can be achieved through an interaction between spontaneous neural synchronization and Hebbian learning. We find that self-development of such neural systems is impossible if learning is too slow. Finally, we demonstrate that, similar to the acquisition and consolidation of long-term memory, this network is capable of generating and remembering stable patterns.
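
A plastic Kuramoto model of this kind can be sketched directly. The Hebbian rule used below, couplings relaxing toward the phase coherence cos(θⱼ − θᵢ) at a learning rate ε, is one common choice (Seliger-style), not necessarily the paper's exact rule, and the parameters are illustrative. The sketch shows the fast-learning regime, in which a nearly in-phase network synchronizes and builds strong all-to-all coupling:

```python
import math

def step(theta, k, omega, eps, dt=0.01, k_max=1.0):
    """One Euler step of a Kuramoto model with Hebbian coupling plasticity."""
    n = len(theta)
    new_theta = [
        theta[i] + dt * (omega[i] + sum(k[i][j] * math.sin(theta[j] - theta[i])
                                        for j in range(n)) / n)
        for i in range(n)
    ]
    # Hebbian rule: couplings relax toward the phase coherence
    # cos(theta_j - theta_i) at rate eps, clipped to [0, k_max].
    new_k = [
        [min(k_max, max(0.0, k[i][j] + dt * eps * (math.cos(theta[j] - theta[i]) - k[i][j])))
         for j in range(n)]
        for i in range(n)
    ]
    return new_theta, new_k

n = 4
theta = [0.1 * i for i in range(n)]  # nearly in-phase initial condition
omega = [1.0] * n                    # identical natural frequencies
k = [[0.1] * n for _ in range(n)]    # weak initial all-to-all coupling
for _ in range(5000):                # 50 time units at a fast learning rate
    theta, k = step(theta, k, omega, eps=0.5)
spread = max(theta) - min(theta)
```

After training, the phase spread collapses and the learned couplings saturate near their maximum: synchronization and Hebbian strengthening reinforce each other, which is the self-construction effect the abstract describes for fast learning.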


Subject(s)
Biophysics/methods , Oscillometry/methods , Algorithms , Cluster Analysis , Computer Simulation , Humans , Learning/physiology , Memory , Models, Biological , Models, Statistical , Models, Theoretical , Nerve Net , Neurons/metabolism , Normal Distribution