1.
Elife ; 11, 2022 Nov 14.
Article in English | MEDLINE | ID: mdl-36373657

ABSTRACT

How dynamic interactions between nervous system regions in mammals perform online motor control remains an unsolved problem. In this paper, we show that feedback control is a simple yet powerful way to understand the neural dynamics of sensorimotor control. We make our case using a minimal model comprising spinal cord, sensory cortex, and motor cortex, coupled by long-range connections that are plastic. The model learns from scratch to perform reaching movements in several directions with a planar arm actuated by six muscles. It satisfies biological plausibility constraints, such as neural implementation, transmission delays, local synaptic learning, and continuous online learning. Using differential Hebbian plasticity, the model can go from motor babbling to reaching arbitrary targets in less than 10 minutes of in silico time. Moreover, independently of the learning mechanism, properly configured feedback control has many emergent properties: neural populations in motor cortex show directional tuning and oscillatory dynamics, the spinal cord creates convergent force fields that add linearly, and movements are ataxic (as in a motor system without a cerebellum).
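The differential Hebbian idea above can be caricatured in a few lines: weights change when pre- and postsynaptic firing rates change together. This is a minimal sketch with assumed parameters (eta, dt), not the model's actual learning rule.

```python
def diff_hebbian_step(w, pre, pre_prev, post, post_prev, dt, eta=0.1):
    """One Euler step of a differential Hebbian rule:
    dw/dt = eta * (d pre/dt) * (d post/dt).
    The weight grows when pre- and postsynaptic rates change together."""
    dpre = (pre - pre_prev) / dt
    dpost = (post - post_prev) / dt
    return w + eta * dpre * dpost * dt

# correlated increases in the two rates strengthen the synapse
w = diff_hebbian_step(0.0, pre=0.6, pre_prev=0.5,
                      post=0.8, post_prev=0.7, dt=0.01)
```

Because the rule multiplies rates of change rather than rates, anti-correlated changes weaken the synapse, which is what lets such rules extract temporal (rather than static) correlations.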


Subject(s)
Models, Neurological , Movement , Animals , Feedback , Movement/physiology , Learning/physiology , Cerebellum/physiology , Mammals
2.
Neural Netw ; 150: 237-258, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35325677

ABSTRACT

In this paper we explore a neural control architecture that is both biologically plausible, and capable of fully autonomous learning. It consists of feedback controllers that learn to achieve a desired state by selecting the errors that should drive them. This selection happens through a family of differential Hebbian learning rules that, through interaction with the environment, can learn to control systems where the error responds monotonically to the control signal. We next show that in a more general case, neural reinforcement learning can be coupled with a feedback controller to reduce errors that arise non-monotonically from the control signal. The use of feedback control can reduce the complexity of the reinforcement learning problem, because only a desired value must be learned, with the controller handling the details of how it is reached. This makes the function to be learned simpler, potentially allowing learning of more complex actions. We use simple examples to illustrate our approach, and discuss how it could be extended to hierarchical architectures.
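The division of labor described above can be sketched minimally: learning only needs to supply a desired value (the setpoint), while a feedback loop handles how it is reached. The proportional gain, integrator plant, and step sizes below are illustrative assumptions, not the paper's architecture.

```python
def run_controller(setpoint, x0=0.0, kp=2.0, dt=0.01, steps=2000):
    """Proportional feedback loop driving a simple integrator plant
    (dx/dt = u) to a setpoint. Only the setpoint would need to be
    learned; the controller handles the trajectory toward it."""
    x = x0
    for _ in range(steps):
        u = kp * (setpoint - x)   # error-driven control signal
        x += u * dt               # plant integrates the control signal
    return x

final = run_controller(setpoint=1.0)
```

This illustrates why coupling learning with a controller simplifies the learned function: the learner outputs one number per goal instead of a full time-varying command.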


Subject(s)
Models, Neurological , Neural Networks, Computer , Feedback , Learning , Reinforcement, Psychology
3.
Front Neuroinform ; 13: 18, 2019.
Article in English | MEDLINE | ID: mdl-31001101

ABSTRACT

Draculab is a neural simulator with a particular use scenario: firing rate units with delayed connections, using custom-made unit and synapse models, possibly controlling simulated physical systems. Draculab also has a particular design philosophy. It aims to blur the line between users and developers. Three factors help to achieve this: a simple design using Python's data structures, extensive use of standard libraries, and profusely commented source code. This paper is an introduction to Draculab's architecture and philosophy. After presenting some example networks it explains basic algorithms and data structures that constitute the essence of this approach. The relation with other simulators is discussed, as well as the reasons why connection delays and interaction with simulated physical systems are emphasized.
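The core ingredients named above (firing rate units, delayed connections, plain Python data structures) can be illustrated with a toy unit. This is a hypothetical sketch of the concept, not Draculab's actual API.

```python
from collections import deque
import math

class RateUnit:
    """Minimal firing-rate unit with sigmoidal activation and a fixed
    transmission delay on its output, buffered with a deque
    (illustrative only; not Draculab's actual unit class)."""
    def __init__(self, tau=0.02, delay_steps=5, dt=0.001):
        self.tau, self.dt = tau, dt
        self.rate = 0.0
        # past outputs, so downstream units read delayed values
        self.buffer = deque([0.0] * delay_steps, maxlen=delay_steps)

    def delayed_output(self):
        return self.buffer[0]   # output as seen delay_steps ago

    def step(self, input_current):
        target = 1.0 / (1.0 + math.exp(-input_current))  # sigmoid
        self.rate += self.dt * (target - self.rate) / self.tau
        self.buffer.append(self.rate)

unit = RateUnit()
for _ in range(100):
    unit.step(2.0)
```

The deque with `maxlen` is the kind of plain standard-library structure the design philosophy favors: the delay line is just a bounded buffer of past outputs.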

4.
J Math Neurosci ; 5(1): 25, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26084702

ABSTRACT

Simple-spike synchrony between Purkinje cells projecting to a common neuron in the deep cerebellar nucleus is emerging as an important factor in the encoding of output information from cerebellar cortex. A phenomenon known as stochastic synchronization happens when uncoupled oscillators synchronize due to correlated inputs. Stochastic synchronization is a viable mechanism through which simple-spike synchrony could be generated, but it has received little attention, perhaps because the presence of feedforward inhibition in the input to Purkinje cells makes insights difficult. This paper presents a method to account for feedforward inhibition so that the usual mathematical approaches to stochastic synchronization can be applied. The method consists of finding a single Phase Response Curve, called the equivalent PRC, that accounts for the effects of both excitatory inputs and delayed feedforward inhibition from molecular layer interneurons. The results suggest that a theory of stochastic synchronization for the case of feedforward inhibition may not be necessary, since this case can be approximately reduced to the case of inputs characterized by a single PRC. Moreover, feedforward inhibition could in many situations increase the level of synchrony experienced by Purkinje cells.
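The construction can be caricatured as follows: fold an excitatory kick and a delayed, sign-inverted inhibitory kick into a single effective phase response curve. The PRC shape, inhibitory gain, and delay below are illustrative assumptions, not the paper's derivation.

```python
import math

def z_exc(phi):
    """Assumed type-I PRC for an excitatory input (illustrative)."""
    return 1.0 - math.cos(2.0 * math.pi * phi)

def z_equivalent(phi, g_inh=0.5, delay=0.1):
    """Sketch of an 'equivalent PRC': the excitatory kick at phase phi
    combined with a delayed feedforward-inhibitory kick that lands at
    phase phi + delay, with opposite sign and gain g_inh."""
    phi_inh = (phi + delay) % 1.0
    return z_exc(phi) - g_inh * z_exc(phi_inh)
```

Once the paired kicks are summarized by one curve, standard PRC-based analyses of stochastic synchronization apply unchanged, which is the reduction the abstract describes.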

5.
Article in English | MEDLINE | ID: mdl-25852535

ABSTRACT

We present a cerebellar architecture with two main characteristics. The first is that complex spikes respond to increases in sensory errors. The second is that cerebellar modules associate particular contexts where errors have increased in the past with corrective commands that stop the increase in error. We analyze our architecture formally and computationally for the case of reaching in a 3D environment. In the case of motor control, we show that there are synergies of this architecture with the Equilibrium-Point hypothesis, leading to novel ways to solve the motor error and distal learning problems. In particular, the presence of desired equilibrium lengths for muscles provides a way to know when the error is increasing, and which corrections to apply. In the context of Threshold Control Theory and Perceptual Control Theory, we show how to extend our model so that it implements anticipatory corrections in cascade control systems that span from muscle contractions to cognitive operations.

6.
Chaos ; 25(1): 013116, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25637927

ABSTRACT

Many natural systems are organized as networks, in which the nodes (be they cells, individuals, or populations) interact in a time-dependent fashion. The dynamic behavior of these networks depends on how the nodes are connected, which can be described in terms of an adjacency matrix and connection strengths. The object of our study is to relate connectivity to temporal behavior in networks of coupled nonlinear oscillators. We investigate the relationship between classes of system architectures and classes of their possible dynamics, when the nodes are coupled according to a connectivity scheme that obeys certain constraints, but also incorporates random aspects. We illustrate how the phase space dynamics and bifurcations of the system change when perturbing the underlying adjacency graph. We differentiate between the effects on dynamics of the following operations that directly modulate network connectivity: (1) increasing/decreasing edge weights, (2) increasing/decreasing edge density, (3) altering edge configuration by adding, deleting, or moving edges. We discuss the significance of our results in the context of real-life networks. Some interpretations lead us to draw conclusions that may apply to brain networks, synaptic restructuring, and neural dynamics.
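As a concrete (hypothetical) instance of nodes coupled through an adjacency matrix, consider Kuramoto-style phase oscillators; editing entries of A corresponds to the edge operations listed above. Network size, coupling strength, and step sizes are illustrative.

```python
import numpy as np

def kuramoto_step(theta, omega, A, K=1.0, dt=0.01):
    """One Euler step of phase oscillators coupled through adjacency
    matrix A (A[i, j] is the weight of the edge j -> i)."""
    diff = theta[None, :] - theta[:, None]   # diff[i, j] = theta_j - theta_i
    return theta + dt * (omega + K * (A * np.sin(diff)).sum(axis=1))

rng = np.random.default_rng(0)
n = 5
theta = rng.uniform(0.0, 2.0 * np.pi, n)
omega = np.zeros(n)                 # identical intrinsic frequencies
A = np.ones((n, n)) - np.eye(n)     # all-to-all; edit entries to perturb edges
for _ in range(2000):
    theta = kuramoto_step(theta, omega, A)
# order parameter r near 1 indicates the phases have synchronized
r = float(abs(np.exp(1j * theta).mean()))
```

Scaling a row of A changes edge weights, zeroing entries changes edge density, and moving a nonzero entry changes the edge configuration: the three perturbation classes studied in the paper.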


Subject(s)
Nonlinear Dynamics
7.
J Comput Neurosci ; 32(3): 403-23, 2012 Jun.
Article in English | MEDLINE | ID: mdl-21887499

ABSTRACT

Temporal patterns of activity that repeat above chance level have been reported experimentally in the brains of vertebrates, and in particular in the mammalian neocortex. This temporal structure is thought to subserve functions such as movement, speech, and the generation of rhythms. Several studies aim to explain how particular sequences of activity are learned, stored, and reproduced. The learning of sequences is usually conceived as the creation of an excitation pathway within a homogeneous neuronal population, but models embodying the autonomous function of such a learning mechanism are fraught with concerns about stability, robustness, and biological plausibility. We present two related computational models capable of learning and reproducing sequences that come from external stimuli. Both models assume that there exist populations of densely interconnected excitatory neurons, and that plasticity can occur at the population level. The first model uses temporally asymmetric Hebbian plasticity to create excitation pathways between populations in response to activation from an external source. The transition of activity from one population to the next is permitted by the interplay of excitatory and inhibitory populations, resulting in oscillatory behavior that seems to agree with experimental findings in the mammalian neocortex. The second model contains two layers, each like the network used in the first model, with unidirectional excitatory connections from the first to the second layer undergoing Hebbian plasticity. Input sequences presented to the second layer become associated with the ongoing first-layer activity, so that this activity can later elicit the presented sequence in the absence of input. We explore the dynamics of these models and discuss their potential implications, particularly for working memory, oscillations, and rhythm generation.
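The temporally asymmetric rule can be caricatured with an eligibility trace: a connection potentiates only when presynaptic (earlier-population) activity preceded the postsynaptic activation. The trace decay and learning rate below are assumptions, not the models' exact equations.

```python
def asymmetric_hebb(w, pre_trace, post_active, eta=0.5):
    """Temporally asymmetric Hebbian update between populations:
    potentiate w only when a decayed memory of earlier presynaptic
    firing (pre_trace) coincides with current postsynaptic activity.
    Illustrative sketch, not the paper's exact rule."""
    if post_active:
        w += eta * pre_trace
    return w

# population A fires, its trace decays, then population B becomes active
trace, decay = 1.0, 0.8
for _ in range(3):          # three silent steps: trace decays to 0.8**3
    trace *= decay
w = asymmetric_hebb(0.0, trace, post_active=True)
```

Because the trace only records *past* presynaptic firing, potentiation is directional (A before B strengthens A-to-B, not B-to-A), which is what builds an ordered excitation pathway across populations.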


Subject(s)
Models, Neurological , Neural Networks, Computer , Neurons/physiology , Serial Learning/physiology , Action Potentials/physiology , Animals , Computer Simulation , Humans , Mental Recall/physiology , Neocortex/cytology , Neural Pathways , Neuronal Plasticity/physiology
8.
Neural Netw ; 27: 21-31, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22112921

ABSTRACT

Recurrent networks of cortico-cortical connections have been implicated as the substrate of the persistent activity underlying working memory, and of the patterned sequence representations needed in cognitive function. We examine the pathological behavior that may result from specific changes to the normal parameters or architecture of a biologically plausible computational working memory model capable of learning and reproducing sequences that come from external stimuli. Specifically, we examine systematic reductions in network inhibition, excitatory potentiation, delays in excitatory connections, and heterosynaptic plasticity. We show that these changes result in a set of dynamics that may correspond to cognitive symptoms of different neuropathologies, particularly epilepsy, schizophrenia, and obsessive-compulsive disorder. We demonstrate how cognitive symptoms in these disorders may arise from similar or identical general mechanisms acting in recurrent working memory networks. We suggest that these pathological dynamics may form a set of overlapping states within the normal network function, and relate this to observed associations between different pathologies.


Subject(s)
Brain/physiopathology , Epilepsy/physiopathology , Memory, Short-Term/physiology , Nerve Net/physiopathology , Neurons/physiology , Obsessive-Compulsive Disorder/physiopathology , Schizophrenia/physiopathology , Humans , Models, Neurological , Neural Networks, Computer , Synapses/physiology
9.
PLoS One ; 4(8): e6399, 2009 Aug 04.
Article in English | MEDLINE | ID: mdl-19652716

ABSTRACT

Neurons in the cortex exhibit a number of activity patterns that correlate with working memory. Specifically, averaged across trials of working memory tasks, neurons exhibit different firing rate patterns during the delay period of those tasks. These patterns include: 1) persistent fixed-frequency rates elevated above baseline, 2) elevated rates that decay throughout the task's memory period, 3) rates that accelerate throughout the delay, and 4) patterns of inhibited firing (below baseline) analogous to each of the preceding excitatory patterns. Persistent elevated-rate patterns are believed to be the neural correlate of working memory retention and of preparation for the behavioral/motor responses required in working memory tasks. Models have proposed that such activity corresponds to stable attractors in cortical neural networks with fixed synaptic weights. However, such models typically do not reproduce the variability in patterned behavior, or the firing statistics of real neurons, across and within trials of working memory tasks. Here we examine the effect of dynamic synapses and of network architectures with multiple cortical areas on the states and dynamics of working memory networks. The analysis indicates that the multiple pattern types exhibited by cells in working memory networks are inherent in networks with dynamic synapses, and that the variability and firing statistics in such networks with distributed architectures agree with those observed in the cortex.
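A standard way to model dynamic synapses is short-term depression in the style of Tsodyks and Markram, where each spike consumes a fraction of a recovering resource. The parameter values below are illustrative, not those of the paper.

```python
import math

def depressing_synapse(spike_times, U=0.5, tau_rec=0.8):
    """Tsodyks-Markram-style short-term depression: each spike uses a
    fraction U of the available resource x, which then recovers toward
    1 with time constant tau_rec. Returns per-spike efficacies."""
    x, t_prev, amps = 1.0, None, []
    for t in spike_times:
        if t_prev is not None:
            # resource recovery during the inter-spike interval
            x = 1.0 - (1.0 - x) * math.exp(-(t - t_prev) / tau_rec)
        amps.append(U * x)   # synaptic efficacy of this spike
        x -= U * x           # resources consumed by the spike
        t_prev = t
    return amps

amps = depressing_synapse([0.0, 0.05, 0.10, 0.15])
```

In a recurrent network, this use-dependent weakening is one mechanism by which "persistent" activity can drift or decay instead of sitting at a fixed rate, producing the non-constant delay patterns listed above.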


Subject(s)
Neurons/physiology , Synapses/physiology , Cerebral Cortex/physiology , Humans , Models, Biological
10.
Chaos ; 19(1): 015115, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19335019

ABSTRACT

Persistent states are believed to be the neural correlate of short-term, or working, memory. Using a previously derived model of working memory, we show that disruption of lateral inhibition can lead to a variety of pathological states. These states are analogs of reflex or pattern-sensitive epilepsy. Simulations, numerical bifurcation analysis, and fast-slow decomposition are used to explore the dynamics of this network.


Subject(s)
Action Potentials/physiology , Memory , Nerve Net/physiology , Neurons/cytology , Algorithms , Animals , Biophysics/methods , Epilepsy , Humans , Models, Biological , Neural Inhibition/physiology , Neurons/metabolism , Neurons/physiology , Nonlinear Dynamics , Seizures/physiopathology , Time Factors