Results 1-14 of 14
1.
Sci Rep; 14(1): 14656, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38918553

ABSTRACT

Humans and animals can learn new skills after practicing for a few hours, while current reinforcement learning algorithms require large amounts of data to achieve good performance. Recent model-based approaches show promising results by reducing the number of interactions with the environment needed to learn a desirable policy. However, these methods require biologically implausible ingredients, such as the detailed storage of older experiences and long periods of offline learning. The optimal way to learn and exploit world-models is still an open question. Taking inspiration from biology, we suggest that dreaming might be an efficient way to exploit an inner model. We propose a two-module (agent and model) spiking neural network in which "dreaming" (living new experiences in a model-based simulated environment) significantly boosts learning. Importantly, our model does not require the detailed storage of experiences and learns the world-model and the policy online. Moreover, we stress that our network is composed of spiking neurons, further increasing its biological plausibility and implementability in neuromorphic hardware.
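
The "dreaming" loop lends itself to a compact illustration. Below is a minimal, non-spiking sketch in the spirit of Dyna-style planning: the agent alternates single real interactions (awake phase, learned online) with short rollouts from its world model (dreaming phase). The tabular setting and the transition table standing in for the learned world model are illustrative assumptions; the paper's version uses a two-module spiking network and avoids storing experiences by learning a parametric model online.

```python
import numpy as np

# Minimal non-spiking sketch of the "dreaming" idea (Dyna-style planning).
# The Q-table, toy environment, and transition-table world model are all
# illustrative assumptions, not the paper's spiking implementation.

rng = np.random.default_rng(0)
n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))          # agent (policy/value module)
model = {}                                   # world model: (s, a) -> (r, s')
alpha, gamma, eps = 0.1, 0.95, 0.1

def env_step(s, a):
    """Toy environment: a walk on a ring with reward at state 0."""
    s_next = (s + (1 if a % 2 == 0 else -1)) % n_states
    return (1.0 if s_next == 0 else 0.0), s_next

s = int(rng.integers(n_states))
for t in range(2000):
    # --- awake phase: one real interaction, learned online ---
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
    r, s_next = env_step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    model[(s, a)] = (r, s_next)              # online world-model update
    s = s_next
    # --- dreaming phase: replay imagined transitions from the model ---
    for _ in range(5):
        (sd, ad), (rd, sd_next) = list(model.items())[rng.integers(len(model))]
        Q[sd, ad] += alpha * (rd + gamma * Q[sd_next].max() - Q[sd, ad])
```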


Subject(s)
Models, Neurological , Neural Networks, Computer , Reinforcement, Psychology , Humans , Action Potentials/physiology , Neurons/physiology , Algorithms , Learning/physiology , Nerve Net/physiology , Animals , Computer Simulation
2.
Cell Rep Methods; 4(1): 100681, 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38183979

ABSTRACT

Neuroscience is moving toward a more integrative discipline where understanding brain function requires consolidating the accumulated evidence seen across experiments, species, and measurement techniques. A remaining challenge on that path is integrating such heterogeneous data into analysis workflows such that consistent and comparable conclusions can be distilled as an experimental basis for models and theories. Here, we propose a solution in the context of slow-wave activity (<1 Hz), which occurs during unconscious brain states like sleep and general anesthesia and is observed across diverse experimental approaches. We address the issue of integrating and comparing heterogeneous data by conceptualizing a general pipeline design that is adaptable to a variety of inputs and applications. Furthermore, we present the Collaborative Brain Wave Analysis Pipeline (Cobrawap) as a concrete, reusable software implementation to perform broad, detailed, and rigorous comparisons of slow-wave characteristics across multiple, openly available electrocorticography (ECoG) and calcium imaging datasets.
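
As a rough illustration of the pipeline design being proposed (this is NOT Cobrawap's actual API; stage names and the dict-based container are invented for this sketch), dataset-specific adapter stages can map heterogeneous inputs onto one shared data structure that all downstream stages consume and return:

```python
import numpy as np

# Sketch of the pipeline concept: adapter stages normalize heterogeneous
# inputs (ECoG, calcium imaging) into one common container, so processing
# and analysis stages are shared. All names here are illustrative.

def load_recording(modality):
    """Adapter stage: stands in for an ECoG- or calcium-specific loader."""
    rng = np.random.default_rng(0)
    fs = 25.0 if modality == "calcium" else 500.0
    return {"signal": rng.standard_normal((16, int(60 * fs))),  # ch x time
            "fs": fs, "modality": modality}

def smooth(data):
    """Processing stage shared by every input, whatever its origin."""
    win = max(1, int(0.04 * data["fs"]))
    kernel = np.ones(win) / win
    data["signal"] = np.apply_along_axis(
        lambda x: np.convolve(x, kernel, mode="same"), 1, data["signal"])
    return data

def run_pipeline(data, stages):
    for stage in stages:        # each stage consumes and returns the container
        data = stage(data)
    return data

result = run_pipeline(load_recording("calcium"), [smooth])
```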


Subject(s)
Brain Waves , Software , Brain , Sleep , Brain Mapping/methods
3.
Proc Natl Acad Sci U S A; 120(49): e2220743120, 2023 Dec 05.
Article in English | MEDLINE | ID: mdl-38019856

ABSTRACT

The brain can efficiently learn a wide range of tasks, motivating the search for biologically inspired learning rules to improve current artificial intelligence technology. Most biological models are composed of point neurons and cannot achieve state-of-the-art performance in machine learning. Recent works have proposed that input segregation (neurons receive sensory information and higher-order feedback in segregated compartments) and nonlinear dendritic computation could support error backpropagation in biological neurons. However, these approaches require propagating errors with a fine spatiotemporal structure to all neurons, which is unlikely to be feasible in a biological network. To relax this assumption, we suggest that bursts and dendritic input segregation provide a natural support for target-based learning, which propagates targets rather than errors. A coincidence mechanism between the basal and the apical compartments allows for generating high-frequency bursts of spikes. This architecture supports a burst-dependent learning rule, based on the comparison between the target bursting activity triggered by the teaching signal and the bursting caused by the recurrent connections. We show that this framework can be used to efficiently solve spatiotemporal tasks, such as the context-dependent storage and recall of three-dimensional trajectories, and navigation tasks. Finally, we suggest that this neuronal architecture naturally allows for orchestrating "hierarchical imitation learning", enabling the decomposition of challenging long-horizon decision-making tasks into simpler subtasks. We show a possible implementation of this in a two-level network, where the high-level network produces the contextual signal for the low-level network.
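
A rate-based caricature of the rule described above may help fix ideas: a "burst" occurs when basal (recurrent) and apical (teaching) inputs coincide, and the basal weights are nudged so that recurrent drive alone comes to reproduce the target bursting. Names, thresholds and the single-layer setting are assumptions of this sketch, not the paper's spiking implementation.

```python
import numpy as np

# Caricature of burst-dependent, target-based learning: compare bursts
# triggered by the teaching signal (basal-apical coincidence) with bursts
# produced by the recurrent drive alone, and update basal weights on the
# difference. All constants and shapes are illustrative assumptions.

rng = np.random.default_rng(1)
n_in, n_out, eta = 20, 5, 0.05
W = 0.1 * rng.standard_normal((n_out, n_in))   # basal (recurrent) weights
W_teach = rng.standard_normal((n_out, n_in))   # defines the teaching signal

for step in range(500):
    x = (rng.random(n_in) < 0.2).astype(float)      # presynaptic spikes
    basal = W @ x                                    # basal drive
    apical = W_teach @ x                             # higher-order feedback
    burst_target = (basal > 0) & (apical > 0)        # coincidence -> burst
    burst_recur = basal > 0.5                        # bursts from recurrence alone
    # move the basal drive toward the target bursting activity
    W += eta * np.outer(burst_target.astype(float)
                        - burst_recur.astype(float), x)
```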


Subject(s)
Artificial Intelligence , Neurons , Neurons/physiology , Brain/physiology , Machine Learning , Models, Neurological , Action Potentials/physiology
4.
Commun Biol; 6(1): 266, 2023 Mar 13.
Article in English | MEDLINE | ID: mdl-36914748

ABSTRACT

The development of novel techniques to record wide-field brain activity enables the estimation of data-driven models from thousands of recording channels, and hence across large regions of cortex. These models in turn improve our understanding of the modulation of brain states and of the richness of traveling-wave dynamics. Here, we infer data-driven models from high-resolution in vivo recordings of mouse brain obtained from wide-field calcium imaging. We then assimilate experimental and simulated data through the characterization of the spatio-temporal features of cortical waves in the experimental recordings. Inference is built in two steps: an inner loop that optimizes a mean-field model by likelihood maximization, and an outer loop that optimizes a periodic neuro-modulation via direct comparison of observables that characterize cortical slow waves. The model reproduces most features of the non-stationary, non-linear dynamics present in the high-resolution in vivo recordings of the mouse brain. The proposed approach offers new methods for characterizing and understanding cortical waves for experimental and computational neuroscientists.
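
The nesting of the two optimizations can be sketched as follows; simulate(), log_likelihood() and wave_observables() are stand-ins for the mean-field model, its likelihood, and the slow-wave features, and the specific functional forms are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

# Two-step inference sketch: an inner loop fits model parameters theta by
# likelihood maximization, an outer loop scans the periodic neuromodulation
# and compares slow-wave observables. All functional forms are placeholders.

def simulate(theta, modulation):
    """Stub for a mean-field model run: returns a simulated signal."""
    t = np.linspace(0, 10, 1000)
    return theta[0] * np.sin(2 * np.pi * modulation[0] * t) + theta[1]

def log_likelihood(theta, modulation, data):
    return -np.sum((simulate(theta, modulation) - data) ** 2)  # Gaussian LL

def wave_observables(signal):
    return np.array([signal.std(), np.abs(np.diff(signal)).mean()])

data = simulate([1.0, 0.1], [0.5]) \
    + 0.1 * np.random.default_rng(2).standard_normal(1000)
target_obs = wave_observables(data)

best = None
for freq in np.linspace(0.1, 1.0, 10):                 # outer loop: modulation
    inner = minimize(                                  # inner loop: likelihood
        lambda th: -log_likelihood(th, [freq], data),
        x0=[0.5, 0.0], method="Nelder-Mead")
    err = np.linalg.norm(wave_observables(simulate(inner.x, [freq])) - target_obs)
    if best is None or err < best[0]:
        best = (err, freq, inner.x)
```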


Subject(s)
Brain Waves , Electroencephalography , Animals , Mice , Electroencephalography/methods , Brain , Models, Neurological , Computer Simulation
5.
Front Integr Neurosci; 16: 972055, 2022.
Article in English | MEDLINE | ID: mdl-36262372

ABSTRACT

Working Memory (WM) is a cognitive mechanism that enables the temporary holding and manipulation of information in the human brain. This mechanism is mainly characterized by neuronal activity during which neuron populations maintain an enhanced spiking activity after being triggered by a short external cue. In this study, we implement, using the NEST simulator, a spiking neural network model in which WM activity is sustained by a mechanism of short-term synaptic facilitation related to presynaptic calcium kinetics. The model, based on leaky integrate-and-fire neurons with exponential postsynaptic currents, autonomously exhibits an activity regime in which memory information can be stored in synaptic form as a result of synaptic facilitation, with spiking activity serving to maintain that facilitation. The network can simultaneously hold multiple memories through alternating synchronous activity that preserves the synaptic facilitation within the neuron populations holding memory information. The results of this study confirm that a WM mechanism can be sustained by synaptic facilitation.
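
A minimal sketch of the facilitation mechanism (standard Tsodyks-Markram u/x dynamics, with u playing the role of the presynaptic calcium-dependent release probability) shows how a brief cue leaves a slowly decaying synaptic trace that later spikes can read out. Parameter values are illustrative, not the paper's calibrated NEST configuration.

```python
import numpy as np

# Short-term facilitation sketch (standard Tsodyks-Markram u/x dynamics):
# a brief cue elevates the release probability u, which decays slowly
# (tau_f >> tau_d), so later spikes evoke stronger responses. Parameters
# are illustrative guesses, not the paper's calibrated values.

dt, T = 1.0, 5000.0                    # ms
tau_f, tau_d, U = 1500.0, 200.0, 0.2   # facilitation/depression constants
u, x = U, 1.0
cue_spikes = np.arange(100.0, 400.0, 20.0)      # short external cue
late_spikes = np.arange(3000.0, 3100.0, 20.0)   # reactivation probe
spikes = set(np.concatenate([cue_spikes, late_spikes]))

efficacy = []                          # effective strength at each spike
for t in np.arange(0.0, T, dt):
    u += dt * (U - u) / tau_f          # u relaxes to its baseline U
    x += dt * (1.0 - x) / tau_d        # resources recover toward 1
    if t in spikes:
        u += U * (1.0 - u)             # spike: facilitate release prob.
        efficacy.append((t, u * x))    # u*x ~ synaptic efficacy
        x -= u * x                     # spike: deplete resources
```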

6.
Front Neuroinform; 16: 883333, 2022.
Article in English | MEDLINE | ID: mdl-35859800

ABSTRACT

Spiking neural network models are increasingly establishing themselves as an effective tool for simulating the dynamics of neuronal populations and for understanding the relationship between these dynamics and brain function. Furthermore, the continuous development of parallel computing technologies and the growing availability of computational resources are leading to an era of large-scale simulations capable of describing ever larger regions of the brain at increasing detail. Recently, the possibility of using MPI-based parallel codes on GPU-equipped clusters to run such complex simulations has emerged, opening up novel paths to further speed-ups. NEST GPU is a GPU library written in CUDA-C/C++ for large-scale simulations of spiking neural networks, which was recently extended with a novel algorithm for remote spike communication through MPI on a GPU cluster. In this work we evaluate its performance on the simulation of a multi-area model of macaque vision-related cortex, made up of about 4 million neurons and 24 billion synapses and representing 32 mm² of macaque cortical surface. The outcome of the simulations is compared against that obtained using the well-known CPU-based spiking neural network simulator NEST on a high-performance computing cluster. The results show not only an excellent match with the NEST statistical measures of the neural activity, in terms of three informative distributions, but also remarkable gains in simulation time per second of biological activity. Indeed, NEST GPU was able to simulate a second of biological time of the full-scale macaque cortex model in its metastable state 3.1× faster than NEST, using 32 compute nodes each equipped with an NVIDIA V100 GPU. Using the same configuration, the ground state of the full-scale macaque cortex model was simulated 2.4× faster than NEST.

7.
PLoS Comput Biol; 18(6): e1010221, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35727852

ABSTRACT

The field of recurrent neural networks is overpopulated by a variety of proposed learning rules and protocols. The aim of this work is to define a generalized framework as a step toward unifying this fragmented scenario. In the field of supervised learning, two opposite approaches stand out: error-based and target-based. This duality has given rise to a scientific debate on which learning framework is the most likely to be implemented in biological networks of neurons. Moreover, the existence of spikes raises the question of whether the coding of information is rate-based or spike-based. To face these questions, we propose a learning model with two main parameters: the rank of the feedback learning matrix and the tolerance to spike timing τ⋆. We demonstrate that a low (high) rank accounts for an error-based (target-based) learning rule, while high (low) tolerance to spike timing promotes rate-based (spike-based) coding. We show that in a store-and-recall task, high ranks allow for lower MSE values, while low ranks enable faster convergence. Our framework naturally lends itself to Behavioral Cloning and allows for efficiently solving relevant closed-loop tasks, investigating which parameter settings are optimal for a specific task. We found that a high rank is essential for tasks that require retaining memory for a long time (Button and Food). On the other hand, this is not relevant for a motor task (the 2D Bipedal Walker), where we find that precise spike-based coding enables optimal performance. Finally, we show that our theoretical formulation allows for defining protocols to estimate the rank of the feedback error in biological networks. We release a PyTorch implementation of our model supporting GPU parallelization.
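
The framework's central knob can be illustrated in a few lines: a feedback matrix of chosen rank maps the output error or target into the network, with a rank-1 matrix recovering the error-based limit and a full-rank matrix the target-based limit. Dimensions and names below are assumptions for illustration, not the released PyTorch implementation.

```python
import numpy as np

# Sketch of the rank parameter: build a feedback matrix B as a sum of `rank`
# outer products. With rank 1 every neuron receives the same scaled error
# direction (error-based limit); with full rank each neuron receives its own
# target-like signal (target-based limit). Sizes are illustrative only.

rng = np.random.default_rng(4)
n_neurons, n_out = 100, 3

def random_feedback(rank):
    """Random feedback matrix of the requested rank (n_neurons x n_out)."""
    return sum(np.outer(rng.standard_normal(n_neurons),
                        rng.standard_normal(n_out)) for _ in range(rank))

B_low = random_feedback(1)        # error-based limit
B_high = random_feedback(n_out)   # target-based limit (full column rank)

error = rng.standard_normal(n_out)      # output mismatch at some time step
local_signal_low = B_low @ error        # what each neuron receives, rank 1
local_signal_high = B_high @ error      # what each neuron receives, full rank
```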


Subject(s)
Models, Neurological , Neural Networks, Computer , Action Potentials/physiology , Neurons/physiology
8.
PLoS Comput Biol; 17(6): e1009045, 2021 Jun.
Article in English | MEDLINE | ID: mdl-34181642

ABSTRACT

The brain exhibits the capability of fast incremental learning from few noisy examples, as well as the ability to associate similar memories in autonomously created categories and to combine contextual hints with sensory perceptions. Together with sleep, these mechanisms are thought to be key components of many high-level cognitive functions. Yet little is known about the underlying processes and the specific roles of different brain states. In this work, we exploited the combination of context and perception in a thalamo-cortical model based on a soft winner-take-all circuit of excitatory and inhibitory spiking neurons. After calibrating this model to express awake and deep-sleep states with features comparable to biological measures, we demonstrate its capability of fast incremental learning from few examples, its resilience when presented with noisy perceptions and contextual signals, and an improvement in visual classification after sleep, due to induced synaptic homeostasis and the association of similar memories.
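
A rate-based sketch of the soft winner-take-all motif at the core of the model: excitatory groups compete through a shared inhibitory pool, so the most strongly driven group dominates without completely silencing the others. All parameters are illustrative, not the calibrated spiking model.

```python
import numpy as np

# Soft winner-take-all (WTA) sketch: self-excitation plus pooled inhibition.
# The effective gain (w_exc - w_inh < 1) keeps the dynamics stable while
# amplifying the strongest input. Constants are illustrative assumptions.

rng = np.random.default_rng(5)
n_groups, dt, tau = 8, 1.0, 20.0
w_exc, w_inh = 0.5, 1.0                   # self-excitation, shared inhibition
rates = np.zeros(n_groups)
inputs = rng.random(n_groups)             # sensory + contextual drive

for _ in range(1000):
    inhibition = w_inh * rates.sum()      # pooled inhibitory feedback
    drive = np.maximum(inputs + w_exc * rates - inhibition, 0.0)
    rates += dt / tau * (-rates + drive)  # leaky rate dynamics

winner = int(rates.argmax())              # "soft" winner: others stay weakly active
```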


Subject(s)
Action Potentials , Cerebral Cortex/physiology , Models, Neurological , Sleep, REM/physiology , Thalamus/physiology , Algorithms , Cerebral Cortex/cytology , Homeostasis , Humans , Learning , Neurons/physiology , Synapses/physiology , Thalamus/cytology
9.
Front Comput Neurosci; 15: 627620, 2021.
Article in English | MEDLINE | ID: mdl-33679358

ABSTRACT

Over the past decade there has been growing interest in the development of parallel hardware systems for simulating large-scale networks of spiking neurons. Compared to other highly parallel systems, GPU-accelerated solutions have the advantage of relatively low cost and great versatility, thanks also to the possibility of using the CUDA-C/C++ programming languages. NeuronGPU is a GPU library for large-scale simulations of spiking neural network models, written in C++ and CUDA-C++ and based on a novel spike-delivery algorithm. This library includes simple LIF (leaky-integrate-and-fire) neuron models as well as several multisynapse AdEx (adaptive-exponential-integrate-and-fire) neuron models with current- or conductance-based synapses, different types of spike generators, and tools for recording spikes, state variables and parameters; it also supports user-definable models. The numerical solution of the differential equations of the AdEx dynamics is performed through a parallel implementation, written in CUDA-C++, of the fifth-order Runge-Kutta method with adaptive step-size control. In this work we evaluate the performance of this library on the simulation of a cortical microcircuit model, based on LIF neurons and current-based synapses, and on balanced networks of excitatory and inhibitory neurons, using AdEx or Izhikevich neuron models and conductance-based or current-based synapses. On these models, we show that the proposed library achieves state-of-the-art performance in terms of simulation time per second of biological activity. In particular, using a single NVIDIA GeForce RTX 2080 Ti GPU board, the full-scale cortical-microcircuit model, which includes about 77,000 neurons and 3 × 10⁸ connections, can be simulated at a speed very close to real time, while the simulation time of a balanced network of 1,000,000 AdEx neurons with 1,000 connections per neuron was about 70 s per second of biological activity.
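
For reference, the AdEx equations the library integrates can be handed to any adaptive fifth-order Runge-Kutta solver; the sketch below uses SciPy's RK45 with an event implementing the spike-and-reset rule. Parameter values are generic textbook choices, not NeuronGPU internals.

```python
import numpy as np
from scipy.integrate import solve_ivp

# AdEx neuron integrated with an adaptive Runge-Kutta scheme (SciPy RK45).
# A terminal event detects the spike threshold, after which the reset rule
# (V -> V_reset, w -> w + b) is applied and integration resumes.

C, g_L, E_L = 281.0, 30.0, -70.6            # pF, nS, mV
V_T, Delta_T = -50.4, 2.0                   # mV
a, b, tau_w = 4.0, 80.5, 144.0              # nS, pA, ms
V_peak, V_reset, I_ext = 0.0, -70.6, 800.0  # mV, mV, pA

def adex(t, y):
    V, w = y
    dV = (-g_L * (V - E_L) + g_L * Delta_T * np.exp((V - V_T) / Delta_T)
          - w + I_ext) / C
    dw = (a * (V - E_L) - w) / tau_w
    return [dV, dw]

def spike(t, y):                 # stop integration at the spike threshold
    return y[0] - V_peak
spike.terminal = True
spike.direction = 1

t0, y0, spike_times = 0.0, [E_L, 0.0], []
while t0 < 200.0:                # integrate 200 ms with reset-on-spike
    sol = solve_ivp(adex, (t0, 200.0), y0, method="RK45",
                    events=spike, max_step=1.0)
    if sol.t_events[0].size:     # spike: record time, apply reset rule
        spike_times.append(sol.t_events[0][0])
        t0, y0 = sol.t_events[0][0], [V_reset, sol.y[1, -1] + b]
    else:
        break
```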

10.
PLoS One; 16(2): e0247014, 2021.
Article in English | MEDLINE | ID: mdl-33592040

ABSTRACT

Recurrent spiking neural networks (RSNNs) in the brain learn to perform a wide range of perceptual, cognitive and motor tasks very efficiently in terms of energy consumption, and their training requires very few examples. This motivates the search for biologically inspired learning rules for RSNNs, with the aim of improving our understanding of brain computation and the efficiency of artificial intelligence. Several spiking models and learning rules have been proposed, but it remains a challenge to design RSNNs whose learning relies on biologically plausible mechanisms and that are capable of solving complex temporal tasks. In this paper, we derive a learning rule, local to the synapse, from a simple mathematical principle: the maximization of the likelihood that the network solves a specific task. We propose a novel target-based learning scheme in which the rule derived from likelihood maximization is used to mimic a specific spatio-temporal spike pattern that encodes the solution to complex temporal tasks. This makes learning extremely rapid and precise, outperforming state-of-the-art algorithms for RSNNs. While error-based approaches (e.g., e-prop) optimize the internal sequence of spikes trial after trial so as to progressively minimize the MSE, we assume that a signal randomly projected from an external origin (e.g., from other brain areas) directly defines the target sequence. This facilitates the learning procedure, since the network is trained from the beginning to reproduce the desired internal sequence. We propose two versions of our learning rule, spike-dependent and voltage-dependent, and find that the latter provides remarkable benefits in terms of learning speed and robustness to noise. We demonstrate the capacity of our model to tackle several problems, such as learning multidimensional trajectories and solving the classical temporal XOR benchmark. Finally, we show that an online approximation of the gradient ascent, in addition to guaranteeing complete locality in time and space, allows learning after very few presentations of the target output. Our model can be applied to different types of biological neurons: the analytically derived plasticity rule is specific to each neuron model and can produce a theoretical prediction for experimental validation.
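
For a single stochastic unit with spike probability p_t = σ(V_t), maximizing the log-likelihood of a target spike train S*_t gives a weight update proportional to (S*_t − p_t) times a presynaptic trace, local to the synapse as described above. The sketch below makes this concrete; the random target train stands in for the externally projected signal, and all sizes and constants are illustrative.

```python
import numpy as np

# Likelihood-maximization sketch for one stochastic spiking unit: with
# Bernoulli spiking p = sigmoid(V), d log P(S*)/dw = (S* - p) * trace,
# which is local to the synapse. Sizes and constants are illustrative.

rng = np.random.default_rng(6)
n_in, T, eta, tau = 50, 400, 0.05, 20.0
w = 0.1 * rng.standard_normal(n_in)
S_target = (rng.random(T) < 0.1).astype(float)        # desired output spikes
pre_spikes = (rng.random((T, n_in)) < 0.1).astype(float)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

trace = np.zeros(n_in)
for t in range(T):
    trace += -trace / tau + pre_spikes[t]             # low-pass presyn. trace
    p = sigmoid(w @ trace - 1.0)                      # spike probability
    w += eta * (S_target[t] - p) * trace              # local gradient ascent
```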


Subject(s)
Learning/physiology , Models, Neurological , Nerve Net/cytology , Nerve Net/physiology , Neurons/cytology , Action Potentials
11.
Methods Protoc; 3(1), 2020 Jan 31.
Article in English | MEDLINE | ID: mdl-32023996

ABSTRACT

Slow waves (SWs) are spatio-temporal patterns of cortical activity that occur both during natural sleep and under anesthesia and are preserved across species. Even though electrophysiological recordings have been widely used to characterize brain states, they are limited in spatial resolution and cannot target specific neuronal populations. Recently, large-scale optical imaging techniques coupled with functional indicators have overcome these restrictions, and new analysis pipelines and novel approaches to SW modelling are needed to extract relevant features of the spatio-temporal dynamics of SWs from these highly spatially resolved datasets. Here we combined wide-field fluorescence microscopy and a transgenic mouse model expressing a calcium indicator (GCaMP6f) in excitatory neurons to study SW propagation over the meso-scale under ketamine anesthesia. We developed a versatile analysis pipeline to identify and quantify the spatio-temporal propagation of the SWs. Moreover, we designed a computational simulator based on a simple theoretical model, which takes into account the statistics of neuronal activity, the response of the fluorescence proteins, and the slow-wave dynamics. The simulator can synthesize artificial signals that reliably reproduce several features of the SWs observed in vivo, thus serving as a calibration tool for the analysis pipeline. Comparison of experimental and simulated data shows the robustness of the analysis tools and their potential to uncover mechanistic insights into slow wave activity (SWA).
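
The simulator's core idea can be sketched in a few lines: draw spiking activity alternating Up and Down phases, pass it through a calcium-indicator response kernel, and add acquisition noise. Time constants below are generic GCaMP6f-like guesses, not the paper's calibrated values.

```python
import numpy as np

# Synthetic calcium-signal sketch: slow Up/Down rate modulation -> Poisson
# spike counts -> convolution with a rise/decay indicator kernel -> noise.
# All rates and time constants are illustrative assumptions.

rng = np.random.default_rng(7)
fs, T = 25.0, 40.0                       # Hz (wide-field frame rate), s
t = np.arange(0.0, T, 1.0 / fs)
up = np.sin(2 * np.pi * 0.4 * t) > 0     # ~0.4 Hz Up/Down alternation
rate = np.where(up, 20.0, 1.0)           # spikes/s in Up vs Down states
spikes = rng.poisson(rate / fs)          # spike counts per frame

tau_rise, tau_decay = 0.05, 0.6          # s, indicator kinetics (guesses)
k = np.arange(0, int(5 * tau_decay * fs)) / fs
kernel = np.exp(-k / tau_decay) - np.exp(-k / tau_rise)
dff = np.convolve(spikes, kernel)[: t.size]        # fluorescence proxy
dff += 0.05 * rng.standard_normal(t.size)          # acquisition noise
```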

12.
Front Syst Neurosci; 13: 70, 2019.
Article in English | MEDLINE | ID: mdl-31824271

ABSTRACT

Cortical slow oscillations (≲1 Hz) are an emergent property of the cortical network that integrates connectivity and physiological features. This rhythm, highly revealing of the characteristics of the underlying dynamics, is a hallmark of low-complexity brain states like sleep, and represents a default activity pattern. Here, we present a methodological approach for quantifying the spatial and temporal properties of this emergent activity. We improved and enriched a robust analysis procedure that had already been successfully applied to both in vitro and in vivo data. We tested the new tools of the methodology by analyzing electrocorticography (ECoG) traces recorded from a custom 32-channel multi-electrode array in wild-type isoflurane-anesthetized mice. The enhanced analysis pipeline, named SWAP (Slow Wave Analysis Pipeline), detects Up and Down states, enables the characterization of the spatial dependency of their statistical properties, and supports the comparison of different subjects. SWAP is implemented in a data-independent way, allowing its application to other data sets (acquired from different subjects or with different recording tools), as well as to the outcome of numerical simulations. Using SWAP, we report statistically significant differences in the observed slow oscillations (SO) across cortical areas and cortical sites. Computing cortical maps by interpolating the features of SO acquired at the electrode positions, we give evidence of global-scale gradients along an oblique axis directed from fronto-lateral toward occipito-medial regions, further highlighting heterogeneity within cortical areas. The results obtained using SWAP will be essential for producing data-driven brain simulations. A spatial characterization of slow oscillations will also trigger a discussion on the role of, and the interplay between, the different cortical regions, improving our understanding of the mechanisms of generation and propagation of delta rhythms and, more generally, of cortical properties.
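
The kind of Up/Down state detection such a pipeline performs can be sketched as follows; the envelope, smoothing window and threshold rule are illustrative choices, not SWAP's exact procedure.

```python
import numpy as np

# Up/Down state detection sketch: rectify-and-smooth the signal into an
# activity envelope, threshold it between the low and high percentiles,
# and read out per-channel transition times. Constants are illustrative.

def detect_up_states(signal, fs, smooth_ms=50.0):
    env = np.abs(signal - signal.mean())              # activity envelope
    win = max(1, int(smooth_ms * 1e-3 * fs))
    env = np.convolve(env, np.ones(win) / win, mode="same")
    thr = 0.5 * (np.percentile(env, 10) + np.percentile(env, 90))
    up = env > thr                                    # True during Up states
    edges = np.flatnonzero(np.diff(up.astype(int)))   # state transitions
    return up, edges / fs                             # mask + times (s)
```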

13.
Front Syst Neurosci; 13: 33, 2019.
Article in English | MEDLINE | ID: mdl-31396058

ABSTRACT

Cortical synapse organization supports a range of dynamic states on multiple spatial and temporal scales, from synchronous slow wave activity (SWA), characteristic of deep sleep or anesthesia, to fluctuating, asynchronous activity during wakefulness (AW). Such dynamic diversity poses a challenge for producing efficient large-scale simulations that embody realistic metaphors of short- and long-range synaptic connectivity. In fact, during SWA and AW different spatial extents of the cortical tissue are active in a given timespan and at different firing rates, which implies a wide variety of loads of local computation and communication. A balanced evaluation of simulation performance and robustness should therefore include tests of a variety of cortical dynamic states. Here, we demonstrate performance scaling of our proprietary Distributed and Plastic Spiking Neural Networks (DPSNN) simulation engine in both SWA and AW for bidimensional grids of neural populations, which reflects the modular organization of the cortex. We explored networks up to 192 × 192 modules, each composed of 1,250 integrate-and-fire neurons with spike-frequency adaptation, and exponentially decaying inter-modular synaptic connectivity with varying spatial decay constant. For the largest networks the total number of synapses was over 70 billion. The execution platform included up to 64 dual-socket nodes, each socket mounting 8 Intel Xeon Haswell processor cores @ 2.40 GHz clock rate. Network initialization time, memory usage, and execution time showed good scaling performances from 1 to 1,024 processes, implemented using the standard Message Passing Interface (MPI) protocol. We achieved simulation speeds of between 2.3 × 10⁹ and 4.1 × 10⁹ synaptic events per second for both cortical states in the explored range of inter-modular interconnections.
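
The inter-modular connectivity rule is simple to sketch: modules sit on a 2D grid and the probability of a connection between two modules decays exponentially with their distance. Grid size, peak probability and decay constant below are illustrative, not the benchmarked configurations.

```python
import numpy as np

# Exponentially decaying inter-modular connectivity on a 2D grid of modules:
# p(i, j) = p0 * exp(-d(i, j) / lambda). All constants are illustrative.

rng = np.random.default_rng(3)
side, lam, p0 = 24, 2.0, 0.5          # grid side, decay constant, peak prob.
ix, iy = np.meshgrid(np.arange(side), np.arange(side), indexing="ij")
coords = np.stack([ix.ravel(), iy.ravel()], axis=1).astype(float)

d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
p_connect = p0 * np.exp(-d / lam)              # distance-dependent probability
adjacency = rng.random(d.shape) < p_connect    # sampled module-to-module links
```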

14.
Sci Rep; 9(1): 8990, 2019 Jun 20.
Article in English | MEDLINE | ID: mdl-31222151

ABSTRACT

Sleep has passed through the evolutionary sieve and is widespread across animal species. It is known to be beneficial to cognitive and mnemonic tasks, while chronic sleep deprivation is detrimental. Despite the importance of the phenomenon, a complete understanding of its functions and underlying mechanisms is still lacking. In this paper, we show interesting effects of deep-sleep-like slow oscillation activity on a simplified thalamo-cortical model that is trained to encode, retrieve and classify images of handwritten digits. During slow oscillations, spike-timing-dependent plasticity (STDP) produces a differential homeostatic process, characterized by both a specific unsupervised enhancement of connections among groups of neurons associated with instances of the same class (digit) and a simultaneous down-regulation of the stronger synapses created by the training. This hierarchical organization of post-sleep internal representations favours higher performance in retrieval and classification tasks. The mechanism is based on the interaction between top-down cortico-thalamic predictions and bottom-up thalamo-cortical projections during deep-sleep-like slow oscillations. Indeed, when learned patterns are replayed during sleep, cortico-thalamo-cortical connections favour the activation of other neurons coding for similar thalamic inputs, promoting their association. Such a mechanism hints at possible applications to artificial learning systems.
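
A minimal pair-based STDP sketch illustrates the plasticity at work during the deep-sleep-like replay; the constants are generic STDP values, and the differential homeostatic effect described above emerges from applying such a rule to the replayed activity, not from the rule alone.

```python
import numpy as np

# Pair-based STDP sketch: each synapse keeps low-pass pre/post traces,
# potentiates on post-after-pre pairings and depresses on pre-after-post.
# Spike trains and constants here are illustrative, not the paper's model.

rng = np.random.default_rng(8)
T, dt, tau_pre, tau_post = 1000, 1.0, 20.0, 20.0
A_plus, A_minus, w = 0.01, 0.012, 0.5
pre = rng.random(T) < 0.05                 # presynaptic spike train
post = rng.random(T) < 0.05                # postsynaptic spike train

x_pre, x_post = 0.0, 0.0
for t in range(T):
    x_pre += dt * (-x_pre / tau_pre) + pre[t]     # presynaptic trace
    x_post += dt * (-x_post / tau_post) + post[t] # postsynaptic trace
    if post[t]:
        w += A_plus * x_pre       # pre -> post pairing: potentiate
    if pre[t]:
        w -= A_minus * x_post     # post -> pre pairing: depress
    w = min(max(w, 0.0), 1.0)     # keep the weight bounded
```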


Subject(s)
Cerebral Cortex/physiology , Homeostasis , Memory , Pattern Recognition, Visual , Sleep/physiology , Synapses/physiology , Thalamus/physiology , Algorithms , Humans , Models, Biological , Neurons , Photic Stimulation