Results 1 - 20 of 27
1.
PLoS Comput Biol ; 20(1): e1011008, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38166093

ABSTRACT

Complex interactions between brain regions and the spinal cord (SC) govern body motion, which is ultimately driven by muscle activation. Motor planning and learning are mainly conducted at higher brain regions, whilst the SC acts as a brain-muscle gateway and as a motor control centre providing fast reflexes and muscle activity regulation. Thus, higher brain areas need to cope with the SC as an inherent and evolutionarily older part of the body dynamics. Here, we address the question of how SC dynamics affect motor learning within the cerebellum; in particular, does the SC facilitate cerebellar motor learning or constitute a biological constraint? We provide an exploratory framework by integrating biologically plausible cerebellar and SC computational models in a musculoskeletal upper limb control loop. The cerebellar model, equipped with the main form of cerebellar plasticity, provides motor adaptation, whilst the SC model implements the stretch reflex and reciprocal inhibition between antagonist muscles. The resulting spino-cerebellar model is tested on a set of upper limb motor tasks, including external perturbation studies. A cerebellar model lacking the implemented SC model and directly controlling the simulated muscles was also tested on the same tasks. The performances of the spino-cerebellar and cerebellar models were then compared, allowing us to directly assess the SC's influence on cerebellar motor adaptation and learning, and on the handling of external motor perturbations. Performance was assessed in both joint and muscle space, and compared with kinematic and EMG recordings from healthy participants. The differences in cerebellar synaptic adaptation between the two models were also studied. We conclude that the SC facilitates cerebellar motor learning: when the SC circuits are in the loop, faster convergence in motor learning is achieved with simpler cerebellar synaptic weight distributions.
The SC is also found to improve robustness against external perturbations, by better reproducing and modulating muscle cocontraction patterns.
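The SC circuitry modeled above (stretch reflex plus reciprocal inhibition between antagonists) can be sketched as a rate-based toy; the function name and gain values below are illustrative assumptions, not the paper's parameters:

```python
def spinal_reflex(stretch_flexor, stretch_extensor,
                  gain_reflex=1.0, gain_inhib=0.5):
    """Monosynaptic stretch reflex with reciprocal inhibition
    between an antagonist muscle pair (illustrative gains)."""
    # Stretch reflex: each muscle's activation rises with its own stretch.
    a_flex = gain_reflex * max(stretch_flexor, 0.0)
    a_ext = gain_reflex * max(stretch_extensor, 0.0)
    # Reciprocal inhibition: each muscle's drive suppresses its antagonist.
    a_flex_out = max(a_flex - gain_inhib * a_ext, 0.0)
    a_ext_out = max(a_ext - gain_inhib * a_flex, 0.0)
    return a_flex_out, a_ext_out
```

With both muscles equally stretched, reciprocal inhibition reduces both outputs, reproducing the co-contraction modulation effect in miniature.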


Subject(s)
Cerebellum , Spinal Cord , Humans , Cerebellum/physiology , Spinal Cord/physiology , Computer Simulation , Upper Extremity , Learning/physiology
2.
Front Neurorobot ; 17: 1166911, 2023.
Article in English | MEDLINE | ID: mdl-37396028

ABSTRACT

Collaborative robots, or cobots, are designed to work alongside humans and to alleviate their physical burdens, such as lifting heavy objects or performing tedious tasks. Ensuring the safety of human-robot interaction (HRI) is paramount for effective collaboration. To achieve this, it is essential to have a reliable dynamic model of the cobot that enables the implementation of torque control strategies. These strategies aim to achieve accurate motion while minimizing the amount of torque exerted by the robot. However, modeling the complex non-linear dynamics of cobots with elastic actuators poses a challenge for traditional analytical modeling techniques; cobot dynamic models therefore need to be learned through data-driven approaches rather than derived from analytical equations. In this study, we propose and evaluate three machine learning (ML) approaches based on bidirectional recurrent neural networks (BRNNs) for learning the inverse dynamic model of a cobot equipped with elastic actuators. We also provide our ML approaches with a representative training dataset of the cobot's joint positions, velocities, and corresponding torque values. The first ML approach uses a non-parametric configuration, while the other two implement semi-parametric configurations. All three ML approaches outperform the rigid-bodied dynamic model provided by the cobot's manufacturer in terms of torque precision while maintaining their generalization capabilities and real-time operation due to the optimized sample dataset size and network dimensions. Despite the similarity in torque estimation across these three configurations, the non-parametric configuration was specifically designed for worst-case scenarios where the robot dynamics are completely unknown. Finally, we validate the applicability of our ML approaches by integrating the worst-case non-parametric configuration as a controller within a feedforward loop.
We verify the accuracy of the learned inverse dynamic model by comparing it to the actual cobot performance. Our non-parametric architecture outperforms the robot's default factory position controller in terms of accuracy.
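As a rough sketch of the bidirectional recurrent idea, the following forward pass maps a trajectory of joint positions and velocities to torques; the layer sizes are assumed and the random weights merely stand in for a trained model (the paper's actual architecture is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_in, n_h, n_joints = 50, 14, 16, 7   # trajectory length; assumed sizes

# Random weights stand in for trained parameters (illustration only).
Wf, Uf = rng.normal(0, 0.1, (n_h, n_in)), rng.normal(0, 0.1, (n_h, n_h))
Wb, Ub = rng.normal(0, 0.1, (n_h, n_in)), rng.normal(0, 0.1, (n_h, n_h))
Wout = rng.normal(0, 0.1, (n_joints, 2 * n_h))

def brnn_torque(x):
    """x: (T, n_in) joint positions+velocities -> (T, n_joints) torques."""
    hf, hb = np.zeros((T, n_h)), np.zeros((T, n_h))
    for t in range(T):                      # forward pass over time
        prev = hf[t - 1] if t > 0 else np.zeros(n_h)
        hf[t] = np.tanh(Wf @ x[t] + Uf @ prev)
    for t in reversed(range(T)):            # backward pass over time
        nxt = hb[t + 1] if t < T - 1 else np.zeros(n_h)
        hb[t] = np.tanh(Wb @ x[t] + Ub @ nxt)
    # Concatenate both directions, then a linear readout per time step.
    return np.concatenate([hf, hb], axis=1) @ Wout.T

tau = brnn_torque(rng.normal(size=(T, n_in)))
```

The bidirectional structure lets the torque estimate at each time step depend on both past and future samples of the trajectory, which is why it suits offline inverse-dynamics learning.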

4.
Neural Netw ; 155: 422-438, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36116334

ABSTRACT

The inferior olivary (IO) nucleus constitutes the gateway through which signals from several organs reach the cerebellar cortex. Located within the sensory-motor-cerebellum pathway, the IO axons, i.e., climbing fibres (CFs), massively synapse onto the cerebellar Purkinje cells (PCs) regulating motor learning, whilst the olivary nucleus receives negative feedback through the GABAergic nucleo-olivary (NO) pathway. The NO pathway regulates the electrical coupling (EC) amongst the olivary cells, thus facilitating synchrony and timing. However, the involvement of this EC regulation in cerebellar adaptive behaviour is still under debate. In our study, we have used a spiking cerebellar model to assess the role of the NO pathway in regulating vestibulo-ocular-reflex (VOR) adaptation. The model incorporates spike-based synaptic plasticity at multiple cerebellar sites and an electrically-coupled olivary system. The olivary system plays a central role in regulating the CF spike-firing patterns that drive the PCs, whose axons ultimately shape the cerebellar output. Our results suggest that a systematic GABAergic NO deactivation decreases the spatio-temporal complexity of the IO firing patterns, thereby worsening the temporal resolution of the olivary system. Conversely, properly coded IO spatio-temporal firing patterns, thanks to NO modulation, finely shape the balance between long-term depression and potentiation, which optimises VOR adaptation. Significantly, the NO connectivity pattern constrained to the same micro-zone helps maintain the spatio-temporal complexity of the IO firing patterns through time. Moreover, the temporal alignment between the latencies found in the NO fibres and the sensory-motor pathway delay appears to be crucial for facilitating the VOR. Taken together, these results predict that the NO pathway is instrumental in modulating the olivary coupling and relevant to VOR adaptation.
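A minimal toy of the electrically coupled olivary system: leaky integrate-and-fire cells pulled toward the population mean voltage by a gap-junction term, where setting `g_gap` to zero mimics NO-driven decoupling. All names and constants are illustrative assumptions, not the model's actual parameters:

```python
import numpy as np

def simulate_io(n=10, steps=500, dt=0.1, g_gap=0.05, i_ext=1.2,
                tau=10.0, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire olivary cells with all-to-all gap
    junctions; g_gap=0 mimics nucleo-olivary decoupling (toy model)."""
    rng = np.random.default_rng(1)
    v = rng.uniform(0.0, 0.5, n)            # heterogeneous initial states
    spikes = []
    for _ in range(steps):
        coupling = g_gap * (v.mean() - v)   # electrical (gap) coupling term
        v += dt / tau * (-v + i_ext) + dt * coupling
        fired = v >= v_th
        spikes.append(fired.copy())
        v[fired] = v_reset
    return np.array(spikes)

sp = simulate_io()
```

The mean-field coupling term pulls each cell's voltage toward the population average, which is the mechanism by which gap junctions promote synchrony in this kind of sketch.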


Subject(s)
Olivary Nucleus , Purkinje Cells , Action Potentials/physiology , Olivary Nucleus/physiology , Purkinje Cells/physiology , Cerebellum/physiology , Synapses/physiology
5.
Neural Netw ; 146: 316-333, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34923219

ABSTRACT

The vestibulo-ocular reflex (VOR) stabilizes vision during head motion. Age-related changes of vestibular neuroanatomical properties predict a linear decay of VOR function. Nonetheless, human epidemiological data show a stable VOR function across the life span. In this study, we model cerebellum-dependent VOR adaptation to relate structural and functional changes throughout aging. We consider three neurosynaptic factors that may codetermine VOR adaptation during aging: the electrical coupling of inferior olive neurons, the long-term spike timing-dependent plasticity at parallel fiber-Purkinje cell synapses and mossy fiber-medial vestibular nuclei synapses, and the intrinsic plasticity of Purkinje cell synapses. Our cross-sectional aging analyses suggest that long-term plasticity acts as a global homeostatic mechanism that underpins the stable temporal profile of VOR function. The results also suggest that the intrinsic plasticity of Purkinje cell synapses operates as a local homeostatic mechanism that further sustains the VOR at older ages. Importantly, the computational epidemiology approach presented in this study allows discrepancies among human cross-sectional studies to be understood in terms of interindividual variability in older individuals. Finally, our longitudinal aging simulations show that the amount of residual fibers coding for the peak and trough of the VOR cycle constitutes a predictive hallmark of VOR trajectories over a lifetime.


Subject(s)
Adaptation, Physiological , Reflex, Vestibulo-Ocular , Aged , Aging , Cerebellum , Cross-Sectional Studies , Humans , Middle Aged , Purkinje Cells
6.
Sci Robot ; 6(58): eabf2756, 2021 Sep 08.
Article in English | MEDLINE | ID: mdl-34516748

ABSTRACT

The presence of computation and transmission-variable time delays within a robotic control loop is a major cause of instability, hindering safe human-robot interaction (HRI) under these circumstances. Classical control theory has been adapted to counteract the presence of such variable delays; however, the solutions provided to date cannot cope with HRI robotics inherent features. The highly nonlinear dynamics of HRI cobots (robots intended for human interaction in collaborative tasks), together with the growing use of flexible joints and elastic materials providing passive compliance, prevent traditional control solutions from being applied. Conversely, human motor control natively deals with low power actuators, nonlinear dynamics, and variable transmission time delays. The cerebellum, pivotal to human motor control, is able to predict motor commands by correlating current and past sensorimotor signals, and to ultimately compensate for the existing sensorimotor human delay (tens of milliseconds). This work aims at bridging those inherent features of cerebellar motor control and current robotic challenges, namely, compliant control in the presence of variable sensorimotor delays. We implement a cerebellar-like spiking neural network (SNN) controller that is adaptive, compliant, and robust to variable sensorimotor delays by replicating the cerebellar mechanisms that embrace the presence of biological delays and allow motor learning and adaptation.
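The delay-compensation principle can be illustrated with a toy forward model: delayed feedback is rolled forward over the motor commands still "in flight". The integrator plant and all names here are assumptions for illustration, not the cerebellar SNN itself:

```python
from collections import deque

class DelayCompensator:
    """Forward-model sketch: estimate the plant's upcoming state from
    delayed feedback plus the commands issued in the meantime.
    The plant is assumed to be a pure integrator (illustration only)."""
    def __init__(self, delay_steps, dt=0.01):
        self.dt = dt
        # Buffer covers the delayed window plus the command just issued.
        self.cmds = deque(maxlen=delay_steps + 1)

    def predict(self, delayed_state, new_cmd):
        self.cmds.append(new_cmd)
        state = delayed_state
        for u in self.cmds:          # roll the internal model forward
            state += self.dt * u
        return state

# Usage: constant command, 5-step sensory delay, integrator plant.
dc = DelayCompensator(delay_steps=5)
true_x, history, est = 0.0, [0.0], 0.0
for t in range(50):
    delayed = history[max(0, len(history) - 1 - 5)]  # 5-step-old feedback
    est = dc.predict(delayed, 1.0)   # estimate of the upcoming state
    true_x += 0.01 * 1.0
    history.append(true_x)
```

Despite only ever seeing 5-step-old feedback, the estimate tracks the true state exactly for this idealized plant; a mismatched internal model would instead leave a residual error for learning to correct.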


Subject(s)
Cerebellum/physiology , Robotics , Adaptation, Physiological , Equipment Design , Internet , Learning , Man-Machine Systems , Models, Neurological , Motor Skills , Movement , Neural Networks, Computer , Nonlinear Dynamics , Spain , Torque , User-Computer Interface
7.
IEEE Trans Cybern ; 51(5): 2476-2489, 2021 May.
Article in English | MEDLINE | ID: mdl-31647453

ABSTRACT

This work presents a novel biologically inspired approach to the compliant control of a robotic arm in real time (RT). We integrate a spiking cerebellar network at the core of a feedback control loop performing torque-driven control. The spiking cerebellar controller provides torque commands allowing for accurate and coordinated arm movements. To compute these output motor commands, the spiking cerebellar controller receives the robot's sensorial signals, the robot's goal behavior, and an instructive signal. These input signals are translated into a set of evolving spiking patterns, each uniquely representing a specific system state at every point in time. Spike-timing-dependent plasticity (STDP) is then supported, enabling adaptive control. The spiking cerebellar controller continuously adapts the torque commands provided to the robot from experience as STDP is deployed. Adaptive torque commands, in turn, help the spiking cerebellar controller to cope with built-in elastic elements within the robot's actuators mimicking human muscles (inherently elastic). We propose a natural integration of a bioinspired control scheme, based on the cerebellum, with a compliant robot. We prove that our compliant approach outperforms the accuracy of the default factory-installed position control in a set of tasks used for addressing cerebellar motor behavior: controlling six degrees of freedom (DoF) in smooth movements, fast ballistic movements, and unstructured scenario compliant movements.
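The pair-based STDP kernel underlying such adaptation is commonly written as two exponentials; the constants below are generic textbook values, not the controller's tuned parameters:

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Additive pair-based STDP kernel (illustrative constants).
    dt_ms = t_post - t_pre: positive -> potentiation (pre before post),
    negative -> depression (post before pre)."""
    if dt_ms >= 0:
        return a_plus * math.exp(-dt_ms / tau_plus)
    return -a_minus * math.exp(dt_ms / tau_minus)
```

The slight asymmetry (a_minus > a_plus) is a common choice that biases uncorrelated inputs toward depression, keeping weights bounded in practice.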


Subject(s)
Brain-Computer Interfaces , Cerebellum/physiology , Models, Neurological , Neuronal Plasticity/physiology , Robotics , Action Potentials/physiology , Humans , Movement , Upper Extremity/physiology
8.
IEEE Trans Cybern ; 2019 Feb 27.
Article in English | MEDLINE | ID: mdl-30835236

ABSTRACT

We embed a spiking cerebellar model within an adaptive real-time (RT) control loop that is able to operate a real robotic body (iCub) when performing different vestibulo-ocular reflex (VOR) tasks. The spiking neural network computation, including event- and time-driven neural dynamics, neural activity, and spike-timing dependent plasticity (STDP) mechanisms, leads to a nondeterministic computation time caused by the neural activity volleys encountered during cerebellar simulation. This nondeterministic computation time motivates the integration of an RT supervisor module that is able to ensure a well-orchestrated neural computation time and robot operation. Notably, our neurorobotic experimental setup benefits from the biological sensory motor delay between the cerebellum and the body to buffer the computational overloads, as well as providing flexibility in adjusting the neural computation time and RT operation. The RT supervisor module provides incremental countermeasures that dynamically slow down or speed up the cerebellar simulation by either halting the simulation or disabling certain neural computation features (i.e., STDP mechanisms, spike propagation, and neural updates) to cope with the RT constraints imposed by the real robot operation. This neurorobotic experimental setup is applied to different horizontal and vertical VOR adaptive tasks that are widely used by the neuroscientific community to address cerebellar functioning. We aim to elucidate the manner in which the combination of the cerebellar neural substrate and the distributed plasticity shapes the cerebellar neural activity to mediate motor adaptation. This paper underlines the need for a two-stage learning process to facilitate VOR acquisition.
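The supervisor's incremental countermeasures can be sketched as a simple load-based policy; the thresholds and action names below are invented for illustration and do not reflect the actual module's logic:

```python
def supervisor_action(compute_time, budget):
    """Incremental countermeasures when the neural simulation
    overruns its real-time budget (thresholds are illustrative)."""
    load = compute_time / budget
    if load < 0.9:
        return "full_simulation"           # all features enabled
    if load < 1.0:
        return "disable_stdp"              # shed plasticity updates first
    if load < 1.2:
        return "disable_spike_propagation" # shed spike delivery next
    return "halt_simulation"               # last resort: pause the network
```

Ordering the countermeasures from least to most disruptive lets the controller degrade gracefully instead of missing its real-time deadline outright.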

9.
PLoS Comput Biol ; 15(3): e1006298, 2019 03.
Article in English | MEDLINE | ID: mdl-30860991

ABSTRACT

Cerebellar Purkinje cells mediate accurate eye movement coordination. However, it remains unclear how oculomotor adaptation depends on the interplay between the characteristic Purkinje cell response patterns, namely tonic, bursting, and spike pauses. Here, a spiking cerebellar model assesses the role of Purkinje cell firing patterns in vestibulo-ocular reflex (VOR) adaptation. The model captures the cerebellar microcircuit properties and it incorporates spike-based synaptic plasticity at multiple cerebellar sites. A detailed Purkinje cell model reproduces the three spike-firing patterns that are shown to regulate the cerebellar output. Our results suggest that pauses following Purkinje complex spikes (bursts) encode transient disinhibition of target medial vestibular nuclei, critically gating the vestibular signals conveyed by mossy fibres. This gating mechanism accounts for early and coarse VOR acquisition, prior to the late reflex consolidation. In addition, properly timed and sized Purkinje cell bursts allow the ratio between long-term depression and potentiation (LTD/LTP) to be finely shaped at mossy fibre-medial vestibular nuclei synapses, which optimises VOR consolidation. Tonic Purkinje cell firing maintains the consolidated VOR through time. Importantly, pauses are crucial to facilitate VOR phase-reversal learning, by reshaping previously learnt synaptic weight distributions. Altogether, these results predict that Purkinje spike burst-pause dynamics are instrumental to VOR learning and reversal adaptation.


Subject(s)
Action Potentials , Adaptation, Physiological , Purkinje Cells/physiology , Animals , Eye Movements , Humans , Learning , Long-Term Potentiation , Reflex, Vestibulo-Ocular/physiology , Synapses/physiology
10.
Front Neuroinform ; 12: 24, 2018.
Article in English | MEDLINE | ID: mdl-29755335

ABSTRACT

[This corrects the article on p. 7 in vol. 11, PMID: 28223930.].

11.
Front Neurosci ; 12: 913, 2018.
Article in English | MEDLINE | ID: mdl-30618549

ABSTRACT

Supervised learning has long been attributed to several feed-forward neural circuits within the brain, with particular attention being paid to the cerebellar granular layer. The focus of this study is to evaluate the input activity representation of these feed-forward neural networks. The activity of cerebellar granule cells is conveyed by parallel fibers and translated into Purkinje cell activity, which constitutes the sole output of the cerebellar cortex. The learning process at this parallel-fiber-to-Purkinje-cell connection makes each Purkinje cell sensitive to a set of specific cerebellar states, which are roughly determined by the granule-cell activity during a certain time window. A Purkinje cell becomes sensitive to each neural input state and, consequently, the network operates as a function able to generate a desired output for each provided input by means of supervised learning. However, not all sets of Purkinje cell responses can be assigned to any set of input states due to the network's own limitations (inherent to the network neurobiological substrate), that is, not all input-output mapping can be learned. A key limiting factor is the representation of the input states through granule-cell activity. The quality of this representation (e.g., in terms of heterogeneity) will determine the capacity of the network to learn a varied set of outputs. Assessing the quality of this representation is interesting when developing and studying models of these networks to identify those neuron or network characteristics that enhance this representation. In this study we present an algorithm for evaluating quantitatively the level of compatibility/interference amongst a set of given cerebellar states according to their representation (granule-cell activation patterns) without the need for actually conducting simulations and network training. 
The algorithm input consists of a real-number matrix that codifies the activity level of every granule cell considered in each state. The capability of this representation to generate a varied set of outputs is evaluated geometrically, resulting in a real number that assesses the goodness of the representation.
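One plausible geometric proxy for such a goodness measure is the smallest pairwise angle between state vectors (rows of the activity matrix): nearly collinear states interfere, well-separated states do not. This is an illustrative stand-in, not the paper's exact algorithm:

```python
import numpy as np

def representation_goodness(activity):
    """Smallest pairwise angle (radians) between state vectors of
    granule-cell activity. Rows = states, columns = granule cells;
    rows are assumed nonzero. (Illustrative metric only.)"""
    unit = activity / np.linalg.norm(activity, axis=1, keepdims=True)
    cos = np.clip(unit @ unit.T, -1.0, 1.0)
    np.fill_diagonal(cos, -1.0)            # ignore self-similarity
    return float(np.arccos(cos.max()))     # min angle = arccos(max cosine)

good = representation_goodness(np.eye(4))             # orthogonal states
bad = representation_goodness(np.array([[1.0, 0.01],
                                        [1.0, 0.02]]))  # near-collinear
```

Orthogonal activation patterns score the maximum of pi/2, while near-collinear patterns score close to zero, matching the intuition that overlapping representations limit learnable input-output mappings.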

12.
Front Neuroinform ; 11: 7, 2017.
Article in English | MEDLINE | ID: mdl-28223930

ABSTRACT

Modeling and simulating the neural structures which make up our central nervous system is instrumental for deciphering the underlying computational principles. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from the neural to the behavioral level. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at the neural level. This study proposes different techniques for simulating neural models with incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranging from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamic evaluation is computed: the event-driven and the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up-table recombination to better cope with the incremental neural complexity, together with better handling of synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics when running on CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity.
We also demonstrate how the proposed modifications, which constitute the main contribution of this study, systematically outperform the traditional event- and time-driven techniques under increasing levels of neural complexity.
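The bi-fixed-step idea can be sketched for a single LIF neuron: a coarse step far from threshold and a fine step near it, where the dynamics are stiffest. The parameters and the simple voltage-guard switching rule below are illustrative assumptions:

```python
def lif_bi_fixed_step(i_ext=1.5, t_end=100.0, h_big=1.0, h_small=0.1,
                      tau=10.0, v_th=1.0, v_reset=0.0, v_guard=0.8):
    """Bi-fixed-step LIF integration sketch: a coarse step far from
    threshold, a fine step near it (parameters are illustrative)."""
    t, v, spikes = 0.0, 0.0, []
    while t < t_end:
        h = h_small if v > v_guard else h_big   # adapt the step size
        v += h / tau * (i_ext - v)              # forward-Euler LIF update
        t += h
        if v >= v_th:
            spikes.append(t)                    # record spike time
            v = v_reset
    return spikes

spike_times = lif_bi_fixed_step()
```

Most of the inter-spike interval is covered with the coarse step, while the fine step resolves the threshold crossing accurately, which is the trade-off the method exploits.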

13.
Front Cell Neurosci ; 10: 176, 2016.
Article in English | MEDLINE | ID: mdl-27458345

ABSTRACT

The cerebellar microcircuit has been the workbench for theoretical and computational modeling since the beginning of neuroscientific research. The regular neural architecture of the cerebellum inspired different solutions to the long-standing issue of how its circuitry could control motor learning and coordination. Originally, the cerebellar network was modeled using a statistical-topological approach that was later extended by considering the geometrical organization of local microcircuits. However, with the advancement in anatomical and physiological investigations, new discoveries have revealed an unexpected richness of connections, neuronal dynamics and plasticity, calling for a change in modeling strategies, so as to include the multitude of elementary aspects of the network into an integrated and easily updatable computational framework. Recently, biophysically accurate "realistic" models using a bottom-up strategy accounted for both detailed connectivity and neuronal non-linear membrane dynamics. In this perspective review, we will consider the state of the art and discuss how these initial efforts could be further improved. Moreover, we will consider how embodied neurorobotic models including spiking cerebellar networks could help explain the role and interplay of distributed forms of plasticity. We envisage that realistic modeling, combined with closed-loop simulations, will help to capture the essence of cerebellar computations and could eventually be applied to neurological diseases and neurorobotic control systems.

14.
Int J Neural Syst ; 26(5): 1650020, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27079422

ABSTRACT

The majority of operations carried out by the brain require learning complex signal patterns for future recognition, retrieval and reuse. Although learning is thought to depend on multiple forms of long-term synaptic plasticity, the way the latter contributes to pattern recognition is still poorly understood. Here, we have used a simple model of afferent excitatory neurons and interneurons with lateral inhibition, reproducing a network topology found in many brain areas from the cerebellum to cortical columns. When endowed with spike-timing dependent plasticity (STDP) at the excitatory input synapses and at the inhibitory interneuron-interneuron synapses, the interneurons rapidly learned complex input patterns. Interestingly, induction of plasticity required that the network be entrained into theta-frequency band oscillations, setting the internal phase-reference required to drive STDP. Inhibitory plasticity effectively distributed multiple patterns among available interneurons, thus allowing the simultaneous detection of multiple overlapping patterns. The addition of plasticity in intrinsic excitability made the system more robust, allowing self-adjustment and rescaling in response to a broad range of input patterns. The combination of plasticity in lateral inhibitory connections and homeostatic mechanisms in the inhibitory interneurons optimized mutual information (MI) transfer. The storage of multiple complex patterns in plastic interneuron networks could be critical for the generation of sparse representations of information in excitatory neuron populations falling under their control.


Subject(s)
Action Potentials/physiology , Interneurons/physiology , Models, Neurological , Neural Inhibition/physiology , Neuronal Plasticity/physiology , Periodicity , Algorithms , Animals , Brain/physiology , Homeostasis/physiology , Information Theory , Learning/physiology , Pattern Recognition, Automated , Pattern Recognition, Physiological , Synapses/physiology , Theta Rhythm/physiology
15.
Front Comput Neurosci ; 10: 17, 2016.
Article in English | MEDLINE | ID: mdl-26973504

ABSTRACT

Deep cerebellar nuclei neurons receive both inhibitory (GABAergic) synaptic currents from Purkinje cells (within the cerebellar cortex) and excitatory (glutamatergic) synaptic currents from mossy fibers. These two deep cerebellar nucleus inputs are also thought to be adaptive, embedding interesting properties in the framework of accurate movements. We show that distributed spike-timing-dependent plasticity mechanisms (STDP) located at different cerebellar sites (parallel fibers to Purkinje cells, mossy fibers to deep cerebellar nucleus cells, and Purkinje cells to deep cerebellar nucleus cells) in closed-loop simulations provide an explanation for the complex learning properties of the cerebellum in motor learning. Concretely, we propose a new mechanistic cerebellar spiking model. In this new model, deep cerebellar nuclei embed a dual functionality: acting as a gain adaptation mechanism and as a facilitator for the slow memory consolidation at mossy fibers to deep cerebellar nucleus synapses. Equipping the cerebellum with excitatory (e-STDP) and inhibitory (i-STDP) mechanisms at deep cerebellar nuclei afferents allows the accommodation of synaptic memories that were formed at parallel fibers to Purkinje cells synapses and then transferred to mossy fibers to deep cerebellar nucleus synapses. These adaptive mechanisms also contribute to modulate the deep-cerebellar-nucleus-output firing rate (output gain modulation toward optimizing its working range).

16.
IEEE Trans Biomed Eng ; 63(1): 210-9, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26441441

ABSTRACT

GOAL: In this study, we defined a realistic cerebellar model through the use of artificial spiking neural networks, testing it in computational simulations that reproduce associative motor tasks in multiple sessions of acquisition and extinction. METHODS: By evolutionary algorithms, we tuned the cerebellar microcircuit to find the near-optimal plasticity mechanism parameters that best reproduced human-like behavior in eye blink classical conditioning, one of the most extensively studied paradigms related to the cerebellum. We used two models: one with only the cortical plasticity and another including two additional plasticity sites at nuclear level. RESULTS: First, both spiking cerebellar models were able to reproduce real human behaviors well, in terms of both "timing" and "amplitude", expressing rapid acquisition, stable late acquisition, rapid extinction, and faster reacquisition of an associative motor task. Even though the model with only the cortical plasticity site showed good learning capabilities, the model with distributed plasticity produced faster and more stable acquisition of conditioned responses in the reacquisition phase. This behavior is explained by the effect of the nuclear plasticities, which have slow dynamics and can express memory consolidation and saving. CONCLUSIONS: We showed how the spiking dynamics of multiple interactive neural mechanisms implicitly drive multiple essential components of complex learning processes. SIGNIFICANCE: This study presents a very advanced computational model, developed together by biomedical engineers, computer scientists, and neuroscientists. Given its realistic features, the proposed model can provide confirmations and suggestions about neurophysiological and pathological hypotheses and can be used in challenging clinical applications.
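The evolutionary tuning step can be sketched as a toy truncation-selection search on a quadratic objective; this stand-in omits the spiking simulations that the real fitness evaluation would require, and all names and hyperparameters are assumptions:

```python
import random

def evolve(fitness, dim=4, pop_size=20, generations=60, sigma=0.3, seed=0):
    """Toy truncation-selection evolutionary search for tuning model
    parameters (a stand-in sketch, not the paper's actual algorithm)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                  # lower fitness = better
        parents = pop[: pop_size // 4]         # keep the best quarter
        pop = parents + [
            [g + rng.gauss(0, sigma) for g in rng.choice(parents)]
            for _ in range(pop_size - len(parents))  # Gaussian mutation
        ]
        sigma *= 0.95                          # anneal the mutation size
    return min(pop, key=fitness)

# Toy objective: recover a known "near-optimal" parameter vector.
target = [0.5, -0.2, 0.1, 0.8]
best = evolve(lambda p: sum((a - b) ** 2 for a, b in zip(p, target)))
```

In the real setting the fitness call would run the full spiking model against human conditioning data, making each evaluation expensive; that cost is precisely why derivative-free evolutionary search is attractive here.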


Subject(s)
Action Potentials/physiology , Blinking/physiology , Cerebellum/physiology , Models, Neurological , Neural Networks, Computer , Neuronal Plasticity/physiology , Algorithms , Computer Simulation , Humans
17.
Cerebellum ; 15(2): 139-51, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26304953

ABSTRACT

The cerebellum is involved in learning and memory of sensory motor skills. However, the way this process takes place in local microcircuits is still unclear. The initial proposal, casted into the Motor Learning Theory, suggested that learning had to occur at the parallel fiber-Purkinje cell synapse under supervision of climbing fibers. However, the uniqueness of this mechanism has been questioned, and multiple forms of long-term plasticity have been revealed at various locations in the cerebellar circuit, including synapses and neurons in the granular layer, molecular layer and deep-cerebellar nuclei. At present, more than 15 forms of plasticity have been reported. There has been a long debate on which plasticity is more relevant to specific aspects of learning, but this question turned out to be hard to answer using physiological analysis alone. Recent experiments and models making use of closed-loop robotic simulations are revealing a radically new view: one single form of plasticity is insufficient, while altogether, the different forms of plasticity can explain the multiplicity of properties characterizing cerebellar learning. These include multi-rate acquisition and extinction, reversibility, self-scalability, and generalization. Moreover, when the circuit embeds multiple forms of plasticity, it can easily cope with multiple behaviors endowing therefore the cerebellum with the properties needed to operate as an effective generalized forward controller.


Subject(s)
Cerebellum/physiology , Learning/physiology , Neuronal Plasticity/physiology , Neurons/physiology , Synapses/physiology , Animals , Humans , Nerve Fibers/physiology
18.
IEEE Trans Neural Netw Learn Syst ; 26(7): 1567-74, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25167556

ABSTRACT

Time-driven simulation methods in traditional CPU architectures perform well and precisely when simulating small-scale spiking neural networks. Nevertheless, they still have drawbacks when simulating large-scale systems. Conversely, event-driven simulation methods in CPUs and time-driven simulation methods in graphic processing units (GPUs) can outperform CPU time-driven methods under certain conditions. With this performance improvement in mind, we have developed an event-and-time-driven spiking neural network simulator suitable for a hybrid CPU-GPU platform. Our neural simulator is able to efficiently simulate bio-inspired spiking neural networks consisting of different neural models, which can be distributed heterogeneously in both small layers and large layers or subsystems. For the sake of efficiency, the low-activity parts of the neural network can be simulated in CPU using event-driven methods while the high-activity subsystems can be simulated in either CPU (a few neurons) or GPU (thousands or millions of neurons) using time-driven methods. In this brief, we have undertaken a comparative study of these different simulation methods. For benchmarking the different simulation methods and platforms, we have used a cerebellar-inspired neural-network model consisting of a very dense granular layer and a Purkinje layer with a smaller number of cells (according to biological ratios). Thus, this cerebellar-like network includes a dense diverging neural layer (increasing the dimensionality of its internal representation and sparse coding) and a converging neural layer (integration) similar to many other biologically inspired and also artificial neural networks.
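To make the event-driven principle concrete, here is a single-synapse toy in which each neuron's membrane decay is computed analytically only when an event arrives; the constants are illustrative, and this is far simpler than the hybrid simulator described:

```python
import heapq
import math

def event_driven_lif(input_spikes, weight=0.4, v_th=1.0, tau=20.0):
    """Event-driven LIF sketch: each neuron's membrane is updated only
    when a spike event arrives; the decay in between is applied
    analytically (constants are illustrative)."""
    events = list(input_spikes)
    heapq.heapify(events)                    # time-ordered event queue
    v, t_last, out = {}, {}, []
    while events:
        t, n = heapq.heappop(events)
        dt = t - t_last.get(n, t)            # time since last update
        v[n] = v.get(n, 0.0) * math.exp(-dt / tau) + weight
        t_last[n] = t
        if v[n] >= v_th:                     # threshold crossing -> spike
            out.append((t, n))
            v[n] = 0.0
    return out

out_spikes = event_driven_lif([(1.0, 0), (2.0, 0), (3.0, 0), (50.0, 1)])
```

Because work is done only per event, sparsely active populations cost almost nothing, whereas a time-driven loop would pay for every neuron at every step; this is exactly the trade-off that favors event-driven simulation for low-activity layers.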


Subject(s)
Computer Simulation , Neural Networks, Computer , Algorithms , Benchmarking , Cerebellum/cytology , Cerebellum/physiology , Computer Graphics , Computers, Hybrid , Microcomputers , Nerve Fibers/physiology , Purkinje Cells/physiology , Reproducibility of Results
19.
PLoS One ; 9(11): e112265, 2014.
Article in English | MEDLINE | ID: mdl-25390365

ABSTRACT

The cerebellum is involved in a large number of different neural processes, especially in associative learning and in fine motor control. To develop a comprehensive theory of sensorimotor learning and control, it is crucial to determine the neural basis of coding and plasticity embedded into the cerebellar neural circuit and how they are translated into behavioral outcomes in learning paradigms. Learning has to be inferred from the interaction of an embodied system with its real environment, and the same cerebellar principles derived from cell physiology have to be able to drive a variety of tasks of different nature, calling for complex timing and movement patterns. We have coupled a realistic cerebellar spiking neural network (SNN) with a real robot and challenged it in multiple diverse sensorimotor tasks. Encoding and decoding strategies based on neuronal firing rates were applied. Adaptive motor control protocols with acquisition and extinction phases have been designed and tested, including an associative Pavlovian task (eye-blink classical conditioning), a vestibulo-ocular task and a perturbed arm reaching task operating in closed-loop. The SNN processed in real-time mossy fiber inputs as arbitrary contextual signals, irrespective of whether they conveyed a tone, a vestibular stimulus or the position of a limb. A bidirectional long-term plasticity rule implemented at parallel fiber-Purkinje cell synapses modulated the output activity in the deep cerebellar nuclei. In all tasks, the neurorobot learned to adjust timing and gain of the motor responses by tuning its output discharge. It succeeded in reproducing how human biological systems acquire, extinguish and express knowledge of a noisy and changing world. By varying stimulus and perturbation patterns, real-time control robustness and generalizability were validated. The implicit spiking dynamics of the cerebellar model fulfill timing, prediction and learning functions.


Subject(s)
Cerebellum/physiology , Models, Neurological , Robotics , Blinking , Humans , Learning , Neural Networks, Computer
20.
Article in English | MEDLINE | ID: mdl-25177290

ABSTRACT

The cerebellum is known to play a critical role in learning relevant patterns of activity for adaptive motor control, but the underlying network mechanisms are only partly understood. The classical long-term synaptic plasticity between parallel fibers (PFs) and Purkinje cells (PCs), which is driven by the inferior olive (IO), can only account for limited aspects of learning. Recently, the role of additional forms of plasticity in the granular layer, molecular layer and deep cerebellar nuclei (DCN) has been considered. In particular, learning at DCN synapses allows for generalization, but convergence to a stable state requires hundreds of repetitions. In this paper we have explored the putative role of the IO-DCN connection by endowing it with adaptable weights and exploring its implications in a closed-loop robotic manipulation task. Our results show that IO-DCN plasticity accelerates convergence of learning by up to two orders of magnitude without conflicting with the generalization properties conferred by DCN plasticity. Thus, this model suggests that multiple distributed learning mechanisms provide a key for explaining the complex properties of procedural learning and open up new experimental questions for synaptic plasticity in the cerebellar network.
