Results 1 - 14 of 14
1.
Sci Robot ; 6(58): eabf2756, 2021 Sep 08.
Article in English | MEDLINE | ID: mdl-34516748

ABSTRACT

The presence of variable computation and transmission time delays within a robotic control loop is a major cause of instability, hindering safe human-robot interaction (HRI) under these circumstances. Classical control theory has been adapted to counteract such variable delays; however, the solutions provided to date cannot cope with the inherent features of HRI robotics. The highly nonlinear dynamics of HRI cobots (robots intended for human interaction in collaborative tasks), together with the growing use of flexible joints and elastic materials providing passive compliance, prevent traditional control solutions from being applied. Conversely, human motor control natively deals with low-power actuators, nonlinear dynamics, and variable transmission time delays. The cerebellum, pivotal to human motor control, is able to predict motor commands by correlating current and past sensorimotor signals, and to ultimately compensate for the existing human sensorimotor delay (tens of milliseconds). This work aims to bridge those inherent features of cerebellar motor control and current robotic challenges: namely, compliant control in the presence of variable sensorimotor delays. We implement a cerebellar-like spiking neural network (SNN) controller that is adaptive, compliant, and robust to variable sensorimotor delays by replicating the cerebellar mechanisms that embrace the presence of biological delays and allow motor learning and adaptation.
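The delay-compensation principle behind this abstract can be sketched outside the spiking domain. The toy below is a classical forward-model (Smith-predictor-style) compensator, not the paper's cerebellar SNN; the first-order plant, the deadbeat control law, and the 5-step delay used in the example are all invented for illustration.

```python
# Illustrative sketch: an internal forward model compensates a known
# sensorimotor delay by replaying the commands still "in flight".
# Plant, gains, and delay are invented; the paper's controller is an SNN.

def step_plant(x, u, a=0.9, b=0.1):
    """First-order discrete plant: x[t+1] = a*x[t] + b*u[t]."""
    return a * x + b * u

def simulate(delay, compensate, steps=120, a=0.9, b=0.1, target=1.0):
    """Drive the plant to `target` when the measurement is `delay` steps old.
    Returns the mean squared tracking error."""
    x = 0.0
    meas_queue = [0.0] * delay   # sensor pipeline, oldest reading first
    u_queue = [0.0] * delay      # commands issued since that oldest reading
    sq_err = 0.0
    for _ in range(steps):
        est = meas_queue[0] if delay else x
        if compensate:
            # Forward model: replay the in-flight commands to predict the
            # current (not yet observed) plant state from the stale reading.
            for u_past in u_queue:
                est = step_plant(est, u_past, a, b)
        u = (target - a * est) / b              # one-step deadbeat law
        if delay:
            meas_queue = meas_queue[1:] + [x]   # sampled now, arrives later
            u_queue = u_queue[1:] + [u]
        x = step_plant(x, u, a, b)
        sq_err += (target - x) ** 2
    return sq_err / steps
```

With a perfect model the compensated loop behaves exactly like the delay-free one, whereas the naive loop with a 5-step delay oscillates and diverges.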


Subject(s)
Cerebellum/physiology , Robotics , Adaptation, Physiological , Equipment Design , Internet , Learning , Man-Machine Systems , Models, Neurological , Motor Skills , Movement , Neural Networks, Computer , Nonlinear Dynamics , Spain , Torque , User-Computer Interface
2.
Front Neuroinform ; 15: 663797, 2021.
Article in English | MEDLINE | ID: mdl-34149387

ABSTRACT

This article extends a recent methodological workflow for creating realistic and computationally efficient neuron models whilst capturing essential aspects of single-neuron dynamics. We overcome the intrinsic limitations of extant optimization methods by proposing an alternative optimization component based on multimodal algorithms. This approach can natively explore a diverse population of neuron model configurations. In contrast to methods that focus on a single global optimum, the multimodal method directly yields a set of promising solutions for a single but complex multi-feature objective function. The final sparse population of candidate solutions has to be analyzed by the expert and evaluated according to its biological plausibility and closeness to the target features. To illustrate the value of this approach, we base our proposal on the optimization of cerebellar granule cell (GrC) models that replicate the essential properties of the biological cell. Our results show the emerging variability of plausible parameter sets that this type of neuron can adopt while producing complex spiking characteristics. Moreover, the set of selected cerebellar GrC models captured spiking dynamics closer to the reference model than the single model obtained with the off-the-shelf parameter optimization algorithms used in our previous article. The method proposed here represents a valuable strategy for adjusting a varied population of realistic yet simplified neuron models, and it can be applied to other kinds of neuron models and biological contexts.
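The niching behaviour that distinguishes multimodal from global optimizers can be illustrated with a mutation-only, deterministic-crowding sketch. The 1-D objective below (two equally good optima) merely stands in for the paper's multi-feature granule-cell objective; population size, mutation scale, and generation count are all invented.

```python
# Sketch of the multimodal idea: deterministic crowding keeps SEVERAL good
# candidates instead of collapsing onto one global optimum.
import random

def fitness(x):
    """Toy stand-in for the multi-feature objective: two equally good
    optima at x = -1 and x = +1 (to be minimized)."""
    return (x * x - 1.0) ** 2

def crowding_ga(pop_size=40, gens=200, sigma=0.05, seed=1):
    """Each mutated child competes only against its own parent, so niches
    around different optima are preserved (mutation-only variant)."""
    rng = random.Random(seed)
    pop = [rng.uniform(-2.0, 2.0) for _ in range(pop_size)]
    for _ in range(gens):
        for k in range(pop_size):
            child = pop[k] + rng.gauss(0.0, sigma)  # Gaussian mutation
            if fitness(child) < fitness(pop[k]):
                pop[k] = child                      # replace parent if better
    return pop
```

After a run, the population contains candidates clustered around both optima, which is exactly the behaviour a single-optimum method would lose.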

3.
PLoS Comput Biol ; 15(3): e1006298, 2019 03.
Article in English | MEDLINE | ID: mdl-30860991

ABSTRACT

Cerebellar Purkinje cells mediate accurate eye movement coordination. However, it remains unclear how oculomotor adaptation depends on the interplay between the characteristic Purkinje cell response patterns, namely tonic, bursting, and spike pauses. Here, a spiking cerebellar model assesses the role of Purkinje cell firing patterns in vestibular ocular reflex (VOR) adaptation. The model captures the cerebellar microcircuit properties and it incorporates spike-based synaptic plasticity at multiple cerebellar sites. A detailed Purkinje cell model reproduces the three spike-firing patterns that are shown to regulate the cerebellar output. Our results suggest that pauses following Purkinje complex spikes (bursts) encode transient disinhibition of target medial vestibular nuclei, critically gating the vestibular signals conveyed by mossy fibres. This gating mechanism accounts for early and coarse VOR acquisition, prior to the late reflex consolidation. In addition, properly timed and sized Purkinje cell bursts allow the ratio between long-term depression and potentiation (LTD/LTP) to be finely shaped at mossy fibre-medial vestibular nuclei synapses, which optimises VOR consolidation. Tonic Purkinje cell firing maintains the consolidated VOR through time. Importantly, pauses are crucial to facilitate VOR phase-reversal learning, by reshaping previously learnt synaptic weight distributions. Altogether, these results predict that Purkinje spike burst-pause dynamics are instrumental to VOR learning and reversal adaptation.


Subject(s)
Action Potentials , Adaptation, Physiological , Purkinje Cells/physiology , Animals , Eye Movements , Humans , Learning , Long-Term Potentiation , Reflex, Vestibulo-Ocular/physiology , Synapses/physiology
4.
Front Neuroinform ; 12: 24, 2018.
Article in English | MEDLINE | ID: mdl-29755335

ABSTRACT

[This corrects the article on p. 7 in vol. 11, PMID: 28223930.].

5.
Front Neurosci ; 12: 913, 2018.
Article in English | MEDLINE | ID: mdl-30618549

ABSTRACT

Supervised learning has long been attributed to several feed-forward neural circuits within the brain, with particular attention being paid to the cerebellar granular layer. The focus of this study is to evaluate the input activity representation of these feed-forward neural networks. The activity of cerebellar granule cells is conveyed by parallel fibers and translated into Purkinje cell activity, which constitutes the sole output of the cerebellar cortex. The learning process at this parallel-fiber-to-Purkinje-cell connection makes each Purkinje cell sensitive to a set of specific cerebellar states, which are roughly determined by the granule-cell activity during a certain time window. A Purkinje cell becomes sensitive to each neural input state and, consequently, the network operates as a function able to generate a desired output for each provided input by means of supervised learning. However, not every set of Purkinje cell responses can be assigned to any set of input states, due to the network's own limitations (inherent to its neurobiological substrate); that is, not every input-output mapping can be learned. A key limiting factor is the representation of the input states through granule-cell activity. The quality of this representation (e.g., in terms of heterogeneity) determines the capacity of the network to learn a varied set of outputs. Assessing the quality of this representation is useful when developing and studying models of these networks, in order to identify the neuron or network characteristics that enhance it. In this study we present an algorithm for quantitatively evaluating the level of compatibility/interference amongst a set of given cerebellar states according to their representation (granule-cell activation patterns), without the need to actually conduct simulations and network training. The algorithm input consists of a real-number matrix that codifies the activity level of every considered granule cell in each state. The capability of this representation to generate a varied set of outputs is evaluated geometrically, resulting in a real number that assesses the goodness of the representation.
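As a rough, hypothetical instance of such a geometric score (the paper's actual measure is not reproduced here), one can rate a granule-cell activity matrix by how far its state vectors are from collinear, since overlapping states interfere when a Purkinje cell must learn different outputs for them:

```python
# Hypothetical geometric score for a state-by-cell activity matrix.
import math

def state_separation(states):
    """states: list of equal-length lists; states[k][i] is the activity of
    granule cell i in cerebellar state k. Returns a score in [0, 1]:
    1 = all states mutually orthogonal, 0 = some pair fully collinear."""
    def cos_sim(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)
    worst = 0.0                      # worst (largest) pairwise overlap
    for k in range(len(states)):
        for m in range(k + 1, len(states)):
            worst = max(worst, abs(cos_sim(states[k], states[m])))
    return 1.0 - worst
```

For example, orthogonal states `[[1, 0], [0, 1]]` score 1.0, collinear states `[[1, 1], [2, 2]]` score 0.0, and partially overlapping states land in between.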

6.
Front Neuroinform ; 11: 7, 2017.
Article in English | MEDLINE | ID: mdl-28223930

ABSTRACT

Modeling and simulating the neural structures that make up our central nervous system is instrumental for deciphering the neural computations beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at the neural level. This study proposes different techniques for simulating neural models of increasing mathematical complexity: the leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranging from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamics are evaluated: the event-driven and the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity, together with better handling of synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running on CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity, and we demonstrate that the proposed modifications, which constitute the main contribution of this study, systematically outperform the traditional event- and time-driven techniques under increasing levels of neural complexity.
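The bi-fixed-step idea can be sketched with a forward-Euler integrator that splits the global step into fixed sub-steps only while the dynamics are stiff. The exponential integrate-and-fire parameters, thresholds, and step sizes below are invented for illustration and are far simpler than the paper's CPU/GPU implementation.

```python
# Sketch of bi-fixed-step integration: coarse fixed step normally, a finer
# fixed sub-step while the spike-generating exponential makes the ODE stiff.
import math

def dvdt(v, i_ext, tau=10.0, v_rest=-65.0, delta=2.0, v_t=-50.0):
    """Exponential integrate-and-fire membrane equation (mV, ms units)."""
    return (-(v - v_rest) + delta * math.exp((v - v_t) / delta) + i_ext) / tau

def count_spikes(i_ext, t_end=200.0, dt=1.0, sub_steps=1, v_switch=-55.0):
    """Forward-Euler with a bi-fixed step: the global step dt is split into
    `sub_steps` fixed sub-steps while v > v_switch (stiff region)."""
    v, spikes = -65.0, 0
    for _ in range(int(round(t_end / dt))):
        n = sub_steps if v > v_switch else 1
        h = dt / n
        for _ in range(n):
            v += h * dvdt(v, i_ext)
            if v >= 0.0:          # spike detected: count and reset
                spikes += 1
                v = -65.0
    return spikes
```

With a tonic input, the bi-fixed-step run (`dt=1.0, sub_steps=50`) tracks a fine-step reference (`dt=0.01`) closely while spending fine sub-steps only on the brief stiff upswing of each spike.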

7.
Front Comput Neurosci ; 10: 17, 2016.
Article in English | MEDLINE | ID: mdl-26973504

ABSTRACT

Deep cerebellar nuclei neurons receive both inhibitory (GABAergic) synaptic currents from Purkinje cells (within the cerebellar cortex) and excitatory (glutamatergic) synaptic currents from mossy fibers. These two inputs to the deep cerebellar nuclei are also thought to be adaptive, embedding interesting properties within the framework of accurate movements. We show that distributed spike-timing-dependent plasticity (STDP) mechanisms located at different cerebellar sites (parallel fibers to Purkinje cells, mossy fibers to deep cerebellar nucleus cells, and Purkinje cells to deep cerebellar nucleus cells) in closed-loop simulations provide an explanation for the complex learning properties of the cerebellum in motor learning. Concretely, we propose a new mechanistic cerebellar spiking model in which the deep cerebellar nuclei embed a dual functionality: they act as a gain-adaptation mechanism and as a facilitator for slow memory consolidation at mossy-fiber-to-deep-cerebellar-nucleus synapses. Equipping the cerebellum with excitatory (e-STDP) and inhibitory (i-STDP) mechanisms at deep cerebellar nuclei afferents allows the accommodation of synaptic memories that were formed at parallel-fiber-to-Purkinje-cell synapses and then transferred to mossy-fiber-to-deep-cerebellar-nucleus synapses. These adaptive mechanisms also help modulate the deep-cerebellar-nucleus output firing rate (output gain modulation toward optimizing its working range).

8.
IEEE Trans Neural Netw Learn Syst ; 26(7): 1567-74, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25167556

ABSTRACT

Time-driven simulation methods in traditional CPU architectures perform well and precisely when simulating small-scale spiking neural networks. Nevertheless, they still have drawbacks when simulating large-scale systems. Conversely, event-driven simulation methods in CPUs and time-driven simulation methods in graphic processing units (GPUs) can outperform CPU time-driven methods under certain conditions. With this performance improvement in mind, we have developed an event-and-time-driven spiking neural network simulator suitable for a hybrid CPU-GPU platform. Our neural simulator is able to efficiently simulate bio-inspired spiking neural networks consisting of different neural models, which can be distributed heterogeneously in both small layers and large layers or subsystems. For the sake of efficiency, the low-activity parts of the neural network can be simulated in CPU using event-driven methods while the high-activity subsystems can be simulated in either CPU (a few neurons) or GPU (thousands or millions of neurons) using time-driven methods. In this brief, we have undertaken a comparative study of these different simulation methods. For benchmarking the different simulation methods and platforms, we have used a cerebellar-inspired neural-network model consisting of a very dense granular layer and a Purkinje layer with a smaller number of cells (according to biological ratios). Thus, this cerebellar-like network includes a dense diverging neural layer (increasing the dimensionality of its internal representation and sparse coding) and a converging neural layer (integration) similar to many other biologically inspired and also artificial neural networks.
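The event-driven half of such a hybrid scheme reduces, in its simplest form, to updating each neuron only when an event arrives, using the closed-form solution of its dynamics between events. A minimal single-neuron sketch follows (leaky integrate-and-fire with instantaneous synapses; all constants are invented and the real simulator handles networks, not one cell):

```python
# Minimal event-driven sketch: the membrane is updated only at input events,
# via the exact exponential decay between them — no fixed time stepping.
import heapq
import math

def run_event_driven(input_spikes, tau=20.0, v_th=1.0, v_reset=0.0, w=0.3):
    """input_spikes: presynaptic spike times (any order).
    Returns the output spike times of a single LIF neuron."""
    queue = list(input_spikes)
    heapq.heapify(queue)               # event queue ordered by time
    v, t_last = 0.0, 0.0
    out = []
    while queue:
        t = heapq.heappop(queue)
        v *= math.exp(-(t - t_last) / tau)  # exact decay since last event
        t_last = t
        v += w                              # instantaneous synaptic kick
        if v >= v_th:
            out.append(t)
            v = v_reset
    return out
```

Four closely spaced inputs sum to threshold and fire the neuron, whereas widely spaced inputs decay away between events; low-activity parts of a network therefore cost almost nothing to simulate.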


Subject(s)
Computer Simulation , Neural Networks, Computer , Algorithms , Benchmarking , Cerebellum/cytology , Cerebellum/physiology , Computer Graphics , Computers, Hybrid , Microcomputers , Nerve Fibers/physiology , Purkinje Cells/physiology , Reproducibility of Results
9.
PLoS One ; 9(11): e112265, 2014.
Article in English | MEDLINE | ID: mdl-25390365

ABSTRACT

The cerebellum is involved in a large number of different neural processes, especially in associative learning and in fine motor control. To develop a comprehensive theory of sensorimotor learning and control, it is crucial to determine the neural basis of coding and plasticity embedded in the cerebellar neural circuit and how they are translated into behavioral outcomes in learning paradigms. Learning has to be inferred from the interaction of an embodied system with its real environment, and the same cerebellar principles derived from cell physiology have to be able to drive a variety of tasks of different nature, calling for complex timing and movement patterns. We have coupled a realistic cerebellar spiking neural network (SNN) with a real robot and challenged it in multiple diverse sensorimotor tasks. Encoding and decoding strategies based on neuronal firing rates were applied. Adaptive motor control protocols with acquisition and extinction phases were designed and tested, including an associative Pavlovian task (eye-blink classical conditioning), a vestibulo-ocular task, and a perturbed arm-reaching task operating in closed loop. The SNN processed mossy fiber inputs in real time as arbitrary contextual signals, irrespective of whether they conveyed a tone, a vestibular stimulus, or the position of a limb. A bidirectional long-term plasticity rule implemented at parallel-fiber-to-Purkinje-cell synapses modulated the output activity in the deep cerebellar nuclei. In all tasks, the neurorobot learned to adjust the timing and gain of its motor responses by tuning its output discharge. It succeeded in reproducing how human biological systems acquire, extinguish, and express knowledge of a noisy and changing world. By varying stimulus and perturbation patterns, real-time control robustness and generalizability were validated. The implicit spiking dynamics of the cerebellar model fulfill timing, prediction, and learning functions.


Subject(s)
Cerebellum/physiology , Models, Neurological , Robotics , Blinking , Humans , Learning , Neural Networks, Computer
10.
Article in English | MEDLINE | ID: mdl-25177290

ABSTRACT

The cerebellum is known to play a critical role in learning relevant patterns of activity for adaptive motor control, but the underlying network mechanisms are only partly understood. The classical long-term synaptic plasticity between parallel fibers (PFs) and Purkinje cells (PCs), which is driven by the inferior olive (IO), can only account for limited aspects of learning. Recently, the role of additional forms of plasticity in the granular layer, molecular layer and deep cerebellar nuclei (DCN) has been considered. In particular, learning at DCN synapses allows for generalization, but convergence to a stable state requires hundreds of repetitions. In this paper we have explored the putative role of the IO-DCN connection by endowing it with adaptable weights and exploring its implications in a closed-loop robotic manipulation task. Our results show that IO-DCN plasticity accelerates convergence of learning by up to two orders of magnitude without conflicting with the generalization properties conferred by DCN plasticity. Thus, this model suggests that multiple distributed learning mechanisms provide a key for explaining the complex properties of procedural learning and open up new experimental questions for synaptic plasticity in the cerebellar network.

11.
IEEE Trans Neural Netw ; 22(8): 1321-8, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21708499

ABSTRACT

It is widely assumed that the cerebellum is one of the main nervous centers involved in correcting and refining planned movement and accounting for disturbances occurring during movement, for instance, due to the manipulation of objects which affect the kinematics and dynamics of the robot-arm plant model. In this brief, we evaluate a way in which a cerebellar-like structure can store a model in the granular and molecular layers. Furthermore, we study how its microstructure and input representations (context labels and sensorimotor signals) can efficiently support model abstraction toward delivering accurate corrective torque values for increasing precision during different-object manipulation. We also describe how the explicit (object-related input labels) and implicit state input representations (sensorimotor signals) complement each other to better handle different models and allow interpolation between two already stored models. This facilitates accurate corrections during manipulations of new objects taking advantage of already stored models.


Subject(s)
Cerebellum , Movement , Psychomotor Performance , Robotics/methods , Action Potentials/physiology , Cerebellum/physiology , Movement/physiology , Psychomotor Performance/physiology
12.
Biosystems ; 94(1-2): 18-27, 2008.
Article in English | MEDLINE | ID: mdl-18616974

ABSTRACT

We describe a neural network model of the cerebellum based on integrate-and-fire spiking neurons with conductance-based synapses. The neuron characteristics are derived from our earlier detailed models of the different cerebellar neurons. We tested the cerebellum model in a real-time control application with a robotic platform. Delays were introduced in the different sensorimotor pathways according to the biological system. The main plasticity in the cerebellar model is spike-timing-dependent plasticity (STDP) at the parallel fiber to Purkinje cell connections. This STDP is driven by the inferior olive (IO) activity, which encodes an error signal using a novel probabilistic low-frequency model. We demonstrate the cerebellar model in a robot control system using a target-reaching task. We test whether the system learns to reach different target positions in a non-destructive way, thereby abstracting a general dynamics model. To test the system's ability to self-adapt to different dynamical situations, we present results obtained after significantly changing the dynamics of the robotic platform (its friction and load). The experimental results show that the cerebellar-based system is able to adapt dynamically to different contexts.
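A hypothetical, much-simplified version of such an error-driven PF-to-Purkinje-cell STDP rule can be written down directly; the kernel shape, time constant, and gains below are invented, and the paper's rule additionally involves the probabilistic low-frequency IO model:

```python
# Hedged sketch: each inferior-olive (error) spike depresses parallel-fiber
# synapses in proportion to how recently they were active (LTD kernel),
# while parallel-fiber activity alone gives a small potentiation (LTP).
import math

def update_weights(w, pf_spikes, io_spikes, tau=50.0, ltd=0.1, ltp=0.005):
    """w: dict synapse_id -> weight in [0, 1]; pf_spikes: list of
    (time, synapse_id); io_spikes: list of error-spike times.
    Returns a new weight dict (the input dict is not mutated)."""
    w = dict(w)
    for t_pf, syn in pf_spikes:
        w[syn] = min(1.0, w[syn] + ltp)      # non-error-gated LTP
    for t_io in io_spikes:
        for t_pf, syn in pf_spikes:
            dt = t_io - t_pf
            if dt >= 0.0:                    # PF spike preceded the error
                w[syn] = max(0.0, w[syn] - ltd * math.exp(-dt / tau))
    return w
```

A synapse active just before an IO error spike is depressed much more strongly than one active long before it, which is the temporal credit assignment the rule is meant to capture.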


Subject(s)
Artificial Intelligence , Cerebellum/physiology , Models, Biological , Nerve Net , Neurons/physiology , Robotics/methods , Action Potentials/physiology , Computer Simulation , Time Factors
13.
Biosystems ; 94(1-2): 10-7, 2008.
Article in English | MEDLINE | ID: mdl-18616981

ABSTRACT

Around half of the neurons of a human brain are granule cells (approximately 10^11 granule neurons) [Kandel, E.R., Schwartz, J.H., Jessell, T.M., 2000. Principles of Neural Science. McGraw-Hill Professional Publishing, New York]. In order to study in detail the functional role of the intrinsic features of this cell, we have developed a pre-compiled behavioural model based on the simplified granule-cell model of Bezzi et al. [Bezzi, M., Nieus, T., Arleo, A., D'Angelo, E., Coenen, O.J.-M.D., 2004. Information transfer at the mossy fiber-granule cell synapse of the cerebellum. 34th Annual Meeting, Society for Neuroscience, San Diego, CA, USA]. We use an efficient event-driven simulation scheme based on lookup tables (EDLUT) [Ros, E., Carrillo, R.R., Ortigosa, E.M., Barbour, B., Agís, R., 2006. Event-driven simulation scheme for spiking neural networks using lookup tables to characterize neuronal dynamics. Neural Computation 18 (12), 2959-2993]. For this purpose it is necessary to compile into tables the data obtained through a massive numerical calculation of the simplified cell model. This allows network simulations requiring minimal numerical calculation. Three major features are considered functionally relevant in the simplified granule-cell model: bursting, subthreshold oscillations and resonance. In this work we describe how the cell model is compiled into tables while keeping these key properties of the neuron model.
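The table-compilation step can be illustrated with a toy membrane model whose state update v(t0 + dt) is pre-computed once over a (v, dt) grid and then answered at run time by bilinear interpolation; the grid resolution and the exponential-decay model itself are invented here, standing in for the massive numerical pre-calculation of the granule-cell model:

```python
# Sketch of the lookup-table idea: pre-compute the state transition on a
# grid once, then interpolate at run time with no ODE solving in the loop.
import math

TAU = 20.0

def exact_decay(v0, dt):
    """Ground truth for this toy model: passive exponential decay."""
    return v0 * math.exp(-dt / TAU)

def build_table(v_max=2.0, dt_max=50.0, n_v=41, n_dt=101):
    """Tabulate v(t0 + dt) for v0 in [0, v_max] and dt in [0, dt_max]."""
    dv, ddt = v_max / (n_v - 1), dt_max / (n_dt - 1)
    table = [[exact_decay(i * dv, j * ddt) for j in range(n_dt)]
             for i in range(n_v)]
    return table, dv, ddt

def lookup(table, dv, ddt, v0, dt):
    """Bilinear interpolation in the pre-compiled (v0, dt) table."""
    fi, fj = v0 / dv, dt / ddt
    i = min(int(fi), len(table) - 2)
    j = min(int(fj), len(table[0]) - 2)
    ai, aj = fi - i, fj - j
    return ((1 - ai) * (1 - aj) * table[i][j]
            + ai * (1 - aj) * table[i + 1][j]
            + (1 - ai) * aj * table[i][j + 1]
            + ai * aj * table[i + 1][j + 1])
```

The run-time cost is a couple of index computations and four multiply-adds per update, independent of how expensive the original numerical integration was; the interpolation error is controlled by the grid resolution.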


Subject(s)
Cerebellum/cytology , Computational Biology/methods , Models, Biological , Nerve Net , Neurons/physiology , Synaptic Transmission/physiology , Computer Simulation
14.
Biosystems ; 87(2-3): 275-80, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17079071

ABSTRACT

Most neural communication and processing tasks are driven by spikes, which has enabled the application of event-driven simulation schemes. However, the simulation of spiking neural networks based on complex models that cannot be reduced to analytical expressions (thus requiring numerical calculation) is very time consuming. Here we briefly describe an event-driven simulation scheme that uses pre-calculated table-based neuron characterizations to avoid numerical calculations during a network simulation, allowing the simulation of large-scale neural systems. More concretely, we explain how electrical coupling can be simulated efficiently within this computation scheme, reproducing the synchronization processes observed in detailed simulations of neural populations.


Subject(s)
Models, Neurological , Nerve Net/physiology , Neural Networks, Computer , Action Potentials , Evoked Potentials , Synaptic Transmission , Systems Biology