Results 1 - 5 of 5
1.
Neural Comput; 31(10): 1945-1963, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31393824

ABSTRACT

Even highly trained behaviors demonstrate variability, which is correlated with performance on current and future tasks. An objective of motor learning that is general enough to explain these phenomena has not been precisely formulated. In this six-week longitudinal learning study, participants practiced a set of motor sequences each day, and neuroimaging data were collected on days 1, 14, 28, and 42 to capture the neural correlates of the learning process. In our analysis, we first modeled the underlying neural and behavioral dynamics during learning. Our results demonstrate that the densities of whole-brain response, task-active regional response, and behavioral performance evolve according to a Fokker-Planck equation during the acquisition of a motor skill. We show that this implies that the brain concurrently optimizes the entropy of a joint density over neural response and behavior (as measured by sampling over multiple trials and subjects) and the expected performance under this density; we call this formulation of learning minimum free energy learning (MFEL). This model provides an explanation as to how behavioral variability can be tuned while simultaneously improving performance during learning. We then develop a novel variant of inverse reinforcement learning to retrieve the cost function optimized by the brain during the learning process, as well as the parameter used to tune variability. We show that this population-level analysis can be used to derive a learning objective that each subject optimizes during his or her study. In this way, MFEL effectively acts as a unifying principle, allowing users to precisely formulate learning objectives and infer their structure.
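The minimum-free-energy objective described in this abstract can be sketched in generic notation (the symbols below are illustrative, not taken from the paper):

```latex
% Free energy: expected cost minus an entropy term weighted by \lambda
F[p] = \mathbb{E}_{p}[c(x)] - \lambda\, H(p), \qquad
H(p) = -\int p(x)\,\log p(x)\,dx
% The minimizer is a Gibbs density over the neural/behavioral state x
p^{*}(x) \propto \exp\!\bigl(-c(x)/\lambda\bigr)
% which is also the stationary solution of a Fokker-Planck equation
\frac{\partial p}{\partial t}
  = \nabla \cdot \bigl(p\,\nabla c\bigr) + \lambda\, \nabla^{2} p
```

In this generic form, λ plays the role of the variability-tuning parameter the abstract mentions: larger λ weights entropy (trial-to-trial variability) more heavily against expected cost.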


Subject(s)
Brain/physiology; Entropy; Learning/physiology; Models, Neurological; Motor Skills/physiology; Female; Humans; Male; Young Adult
2.
Sci Rep; 8(1): 10721, 2018 Jul 16.
Article in English | MEDLINE | ID: mdl-30013195

ABSTRACT

Recent improvements in hardware and data collection have lowered the barrier to practical neural control. Most current contributions to the field have focused on model-based control; however, models of neural systems are complex and difficult to design. To circumvent these issues, we adapt a model-free method from the reinforcement learning literature, Deep Deterministic Policy Gradients (DDPG). Model-free reinforcement learning presents an attractive framework because of the flexibility it offers, allowing the user to avoid modeling system dynamics. We make use of this feature by applying DDPG to models of low-level and high-level neural dynamics. We show that, although model-free, DDPG can solve more difficult problems than current methods. These problems include the induction of global synchrony by entrainment of weakly coupled oscillators and the control of trajectories through a latent phase space of an underactuated network of neurons. While this work has been performed on simulated systems, it suggests that advances in modern reinforcement learning may enable the solution of fundamental problems in neural control and movement toward more complex objectives in real systems.
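The global-synchrony target mentioned above can be illustrated with a minimal sketch of weakly coupled Kuramoto-type oscillators and their order parameter; this is the uncontrolled entrainment benchmark, not the paper's DDPG controller, and all parameter values here are illustrative:

```python
import numpy as np

def order_parameter(theta):
    # Kuramoto order parameter r in [0, 1]; r near 1 means global synchrony
    return abs(np.exp(1j * theta).mean())

def simulate(K, steps=2000, dt=0.01, n=50, seed=0):
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.5, n)       # heterogeneous natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)  # random initial phases
    for _ in range(steps):
        # mean-field coupling pulls each oscillator toward the population phase
        coupling = (K / n) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta = theta + dt * (omega + coupling)
    return order_parameter(theta)

r_uncoupled = simulate(K=0.0)  # stays incoherent
r_coupled = simulate(K=4.0)    # coupling well above threshold drives synchrony
```

A controller in this setting would replace the fixed coupling `K` with a learned, state-dependent input whose reward is the order parameter.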

3.
IET Syst Biol; 6(4): 102-15, 2012 Aug.
Article in English | MEDLINE | ID: mdl-23039691

ABSTRACT

The linear noise approximation (LNA) is a way of approximating the stochastic time evolution of a well-stirred chemically reacting system. It can be obtained either as the lowest order correction to the deterministic chemical reaction rate equation (RRE) in van Kampen's system-size expansion of the chemical master equation (CME), or by linearising the two-term-truncated chemical Kramers-Moyal equation. However, neither of those derivations sheds much light on the validity of the LNA. The problematic character of the system-size expansion of the CME for some chemical systems, the arbitrariness of truncating the chemical Kramers-Moyal equation at two terms, and the sometimes poor agreement of the LNA with the solution of the CME, have all raised concerns about the validity and usefulness of the LNA. Here, the authors argue that these concerns can be resolved by viewing the LNA as an approximation of the chemical Langevin equation (CLE). This view is already implicit in Gardiner's derivation of the LNA from the truncated Kramers-Moyal equation, as that equation is mathematically equivalent to the CLE. However, the CLE can be more convincingly derived in a way that does not involve either the truncated Kramers-Moyal equation or the system-size expansion. This derivation shows that the CLE will be valid, at least for a limited span of time, for any system that is sufficiently close to the thermodynamic (large-system) limit. The relatively easy derivation of the LNA from the CLE shows that the LNA shares the CLE's conditions of validity, and it also suggests that what the LNA really gives us is a description of the initial departure of the CLE from the RRE as we back away from the thermodynamic limit to a large but finite system. The authors show that this approach to the LNA simplifies its derivation, clarifies its limitations, and affords an easier path to its solution.
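The CLE-to-LNA relationship described above can be sketched for the simplest case, a linear birth-death system, where the LNA prediction is exact; the rates and names below are illustrative, not from the paper:

```python
import numpy as np

# Birth-death process: 0 -> X at rate k; X -> 0 at rate g per molecule.
# CLE (Euler-Maruyama): dX = (k - g*X) dt + sqrt(k + g*X) dW
# RRE fixed point: x* = k/g.  LNA stationary variance: (k + g*x*)/(2g) = k/g.
k, g = 50.0, 1.0
dt, steps, burn = 0.01, 300_000, 50_000
rng = np.random.default_rng(1)

x = k / g  # start at the deterministic fixed point
xs = np.empty(steps)
for t in range(steps):
    drift = k - g * x
    diffusion = np.sqrt(max(k + g * x, 0.0))  # guard tiny negative excursions
    x += drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
    xs[t] = x

mean, var = xs[burn:].mean(), xs[burn:].var()  # both should be near k/g = 50
```

Because drift and (squared) diffusion are linear in the state here, the CLE's stationary mean and variance coincide with the LNA's; for nonlinear propensities they would agree only near the thermodynamic limit, as the abstract argues.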


Subject(s)
Algorithms; Computer Simulation; Linear Models; Models, Chemical
4.
IET Syst Biol; 5(1): 58, 2011 Jan.
Article in English | MEDLINE | ID: mdl-21261403

ABSTRACT

Michaelis-Menten kinetics are commonly used to represent enzyme-catalysed reactions in biochemical models. The Michaelis-Menten approximation has been thoroughly studied in the context of traditional differential equation models. The presence of small concentrations in biochemical systems, however, encourages the conversion to a discrete stochastic representation. It is shown that the Michaelis-Menten approximation is applicable in discrete stochastic models and that the validity conditions are the same as in the deterministic regime. The authors then compare the Michaelis-Menten approximation to a procedure called the slow-scale stochastic simulation algorithm (ssSSA). The theory underlying the ssSSA implies a formula that seems in some cases to be different from the well-known Michaelis-Menten formula. Here those differences are examined, and some special cases of the stochastic formulas are confirmed using a first-passage time analysis. This exercise serves to place the conventional Michaelis-Menten formula in a broader rigorous theoretical framework.
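The discrete stochastic use of the Michaelis-Menten approximation can be sketched as a Gillespie-style simulation of the lumped reaction S -> P with an MM propensity; this is a generic illustration with made-up rate constants, not the paper's ssSSA formula:

```python
import numpy as np

def gillespie_mm(S0=100, vmax=10.0, km=20.0, seed=0):
    """Stochastic simulation of the reduced reaction S -> P using the
    Michaelis-Menten propensity a(S) = vmax * S / (km + S)."""
    rng = np.random.default_rng(seed)
    t, S = 0.0, S0
    history = [(0.0, S0)]
    while S > 0:
        a = vmax * S / (km + S)        # MM propensity for the lumped reaction
        t += rng.exponential(1.0 / a)  # exponential waiting time to next event
        S -= 1                         # one substrate molecule converted
        history.append((t, S))
    return history

traj = gillespie_mm()
```

The validity question the abstract studies is when this one-reaction reduction reproduces the statistics of the full three-reaction mechanism (binding, unbinding, catalysis) simulated exactly.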


Subject(s)
Models, Chemical; Stochastic Processes; Algorithms; Enzymes/metabolism; Kinetics; Models, Theoretical