Results 1 - 20 of 21
1.
Nat Neurosci ; 26(11): 1906-1915, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37828226

ABSTRACT

Recognition of objects from sensory stimuli is essential for survival. To that end, sensory networks in the brain must form object representations invariant to stimulus changes, such as size, orientation and context. Although Hebbian plasticity is known to shape sensory networks, it fails to create invariant object representations in computational models, raising the question of how the brain achieves such processing. In the present study, we show that combining Hebbian plasticity with a predictive form of plasticity leads to invariant representations in deep neural network models. We derive a local learning rule that generalizes to spiking neural networks and naturally accounts for several experimentally observed properties of synaptic plasticity, including metaplasticity and spike-timing-dependent plasticity. Finally, our model accurately captures neuronal selectivity changes observed in the primate inferotemporal cortex in response to altered visual experience. Thus, we provide a plausible normative theory emphasizing the importance of predictive plasticity mechanisms for successful representational learning.
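The following single-neuron Python sketch conveys the general idea, not the authors' published rule: the weight update combines a predictive term (penalizing fast changes of the output) with a Hebbian/variance term (preventing collapse to silence). All constants and the random input are illustrative assumptions; in the full model, temporally structured stimuli are what make the slowly varying output stimulus-invariant.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, eta, lam, tau = 100, 1e-3, 1.0, 100.0
    w = 0.1 * rng.normal(size=n_in)
    z_prev, z_bar, z_var = 0.0, 0.0, 1.0

    for t in range(1000):
        x = rng.normal(size=n_in)      # stand-in input; real use: temporally structured stimuli
        z = w @ x
        pred_err = z - z_prev          # predictive term: penalize fast output changes
        hebb = (z - z_bar) / z_var     # Hebbian/variance term: prevents collapse to silence
        w += eta * (-pred_err + lam * hebb) * x
        z_bar += (z - z_bar) / tau     # slow running mean of the output
        z_var += ((z - z_bar) ** 2 - z_var) / tau  # slow running variance
        z_prev = z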


Subjects
Learning; Primates; Animals; Learning/physiology; Brain/physiology; Neural Networks, Computer; Neurons/physiology; Neuronal Plasticity/physiology; Models, Neurological
2.
Front Neurosci ; 16: 951164, 2022.
Article in English | MEDLINE | ID: mdl-36440280

ABSTRACT

Spatio-temporal pattern recognition is a fundamental ability of the brain which is required for numerous real-world activities. Recent deep learning approaches have reached outstanding accuracies in such tasks, but their implementation on conventional embedded solutions is still computationally and energy intensive. Tactile sensing in robotic applications is a representative example where real-time processing and energy efficiency are required. Following a brain-inspired computing approach, we propose a new benchmark for spatio-temporal tactile pattern recognition at the edge through Braille letter reading. We recorded a new Braille letters dataset based on the capacitive tactile sensors of the iCub robot's fingertip. We then investigated the importance of spatial and temporal information as well as the impact of event-based encoding on spike-based computation. Afterward, we trained and compared feedforward and recurrent Spiking Neural Networks (SNNs) offline using Backpropagation Through Time (BPTT) with surrogate gradients, then deployed them on the Intel Loihi neuromorphic chip for fast and efficient inference. We compared our approach to standard classifiers, in particular to a Long Short-Term Memory (LSTM) network deployed on the embedded NVIDIA Jetson GPU, in terms of classification accuracy, power and energy consumption, and computational delay. Our results show that the LSTM reaches ~97% accuracy, outperforming the recurrent SNN by ~17% when using continuous frame-based data instead of event-based inputs. However, the recurrent SNN on Loihi with event-based inputs is ~500 times more energy-efficient than the LSTM on the Jetson, requiring a total power of only ~30 mW. This work proposes a new benchmark for tactile sensing and highlights the challenges and opportunities of event-based encoding, neuromorphic hardware, and spike-based computing for spatio-temporal pattern recognition at the edge.
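For readers unfamiliar with surrogate-gradient BPTT, the PyTorch sketch below shows the standard trick used in this line of work: a hard threshold in the forward pass, a smooth surrogate derivative in the backward pass. The fast-sigmoid surrogate and the steepness value are common choices in the literature, not necessarily the exact ones used in this paper.

    import torch

    class SurrGradSpike(torch.autograd.Function):
        """Heaviside spike forward; fast-sigmoid surrogate derivative backward."""
        scale = 10.0  # surrogate steepness (a typical choice, assumed here)

        @staticmethod
        def forward(ctx, v):
            ctx.save_for_backward(v)
            return (v > 0).float()              # binary spikes

        @staticmethod
        def backward(ctx, grad_output):
            (v,) = ctx.saved_tensors
            sg = 1.0 / (SurrGradSpike.scale * v.abs() + 1.0) ** 2
            return grad_output * sg

    spike_fn = SurrGradSpike.apply  # used in place of a hard threshold in the SNN loop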

3.
Proc Natl Acad Sci U S A ; 119(4), 2022 01 25.
Article in English | MEDLINE | ID: mdl-35042792

ABSTRACT

To rapidly process temporal information at a low metabolic cost, biological neurons integrate inputs as an analog sum, but communicate with spikes, binary events in time. Analog neuromorphic hardware uses the same principles to emulate spiking neural networks with exceptional energy efficiency. However, instantiating high-performing spiking networks on such hardware remains a significant challenge due to device mismatch and the lack of efficient training algorithms. Surrogate gradient learning has emerged as a promising training strategy for spiking networks, but its applicability for analog neuromorphic systems has not been demonstrated. Here, we demonstrate surrogate gradient learning on the BrainScaleS-2 analog neuromorphic system using an in-the-loop approach. We show that learning self-corrects for device mismatch, resulting in competitive spiking network performance on both vision and speech benchmarks. Our networks display sparse spiking activity with, on average, less than one spike per hidden neuron and input, perform inference at rates of up to 85,000 frames per second, and consume less than 200 mW. In summary, our work sets several benchmarks for low-energy spiking network processing on analog neuromorphic hardware and paves the way for future on-chip learning algorithms.
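A schematic of the in-the-loop idea in Python. The functions run_on_chip and software_model and the data_loader are hypothetical placeholders (the real BrainScaleS-2 interface is not shown here); the pattern illustrated is that measured chip activity enters the loss while gradients flow through a differentiable software model, which is how learning can self-correct for device mismatch.

    import torch

    def run_on_chip(w, x):
        """Placeholder for configuring and running the analog system; here we
        fake a noisy, non-differentiable measurement."""
        with torch.no_grad():
            return x @ w + 0.05 * torch.randn(x.shape[0], w.shape[1])

    def software_model(w, x):
        """Differentiable stand-in used only to provide gradients."""
        return x @ w

    w = torch.randn(100, 10, requires_grad=True)
    opt = torch.optim.Adam([w], lr=1e-3)
    data_loader = [(torch.randn(32, 100), torch.randint(0, 10, (32,)))
                   for _ in range(50)]

    for x, y in data_loader:
        chip_out = run_on_chip(w, x)            # measured activity, no autograd graph
        model_out = software_model(w, x)
        # straight-through: the loss uses chip numbers, gradients flow via the model
        out = chip_out + model_out - model_out.detach()
        loss = torch.nn.functional.cross_entropy(out, y)
        opt.zero_grad(); loss.backward(); opt.step()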


Subjects
Neural Networks, Computer; Action Potentials/physiology; Algorithms; Brain/physiology; Computers; Models, Biological; Models, Neurological; Models, Theoretical; Neurons/physiology
4.
IEEE Trans Neural Netw Learn Syst ; 33(7): 2744-2757, 2022 07.
Article in English | MEDLINE | ID: mdl-33378266

ABSTRACT

Spiking neural networks are the basis of versatile and power-efficient information processing in the brain. Although we currently lack a detailed understanding of how these networks compute, recently developed optimization techniques allow us to instantiate increasingly complex functional spiking neural networks in-silico. These methods hold the promise to build more efficient non-von-Neumann computing hardware and will offer new vistas in the quest of unraveling brain circuit function. To accelerate the development of such methods, objective ways to compare their performance are indispensable. Presently, however, there are no widely accepted means for comparing the computational performance of spiking neural networks. To address this issue, we introduce two spike-based classification data sets, broadly applicable to benchmark both software and neuromorphic hardware implementations of spiking neural networks. To accomplish this, we developed a general audio-to-spiking conversion procedure inspired by neurophysiology. Furthermore, we applied this conversion to an existing and a novel speech data set. The latter is the free, high-fidelity, and word-level aligned Heidelberg digit data set that we created specifically for this study. By training a range of conventional and spiking classifiers, we show that leveraging spike timing information within these data sets is essential for good classification accuracy. These results serve as the first reference for future performance comparisons of spiking neural networks.
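The paper's conversion pipeline models the basilar membrane, hair cells, and a bushy-cell layer; the Python sketch below is a much cruder stand-in that only illustrates the basic notion of turning a time-frequency representation into threshold-crossing spike events. All details (thresholds, channel count, encoding scheme) are assumptions for illustration.

    import numpy as np

    def energies_to_spikes(S, n_thresholds=8):
        """Delta-style encoding: channel ch emits an event whenever its
        log-energy rises past the next of several fixed thresholds.
        S: array (channels, frames) of non-negative filterbank energies."""
        logS = np.log1p(S)
        thr = np.linspace(logS.min(), logS.max(), n_thresholds + 2)[1:-1]
        events = []                          # list of (frame, channel) spike events
        level = np.zeros(S.shape[0], dtype=int)
        for t in range(S.shape[1]):
            for ch in range(S.shape[0]):
                while level[ch] < len(thr) and logS[ch, t] > thr[level[ch]]:
                    events.append((t, ch))
                    level[ch] += 1
                while level[ch] > 0 and logS[ch, t] < thr[level[ch] - 1]:
                    level[ch] -= 1
        return np.array(events)

    demo = np.abs(np.random.randn(40, 100))  # fake 40-channel filterbank trace
    print(energies_to_spikes(demo).shape)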


Subjects
Brain; Neural Networks, Computer; Brain/physiology; Computers; Software
5.
Elife ; 10, 2021 12 13.
Article in English | MEDLINE | ID: mdl-34895468

ABSTRACT

To rapidly process information, neural circuits have to amplify specific activity patterns transiently. How the brain performs this nonlinear operation remains elusive. Hebbian assemblies are one possibility whereby strong recurrent excitatory connections boost neuronal activity. However, such Hebbian amplification is often associated with dynamical slowing of network dynamics, non-transient attractor states, and pathological run-away activity. Feedback inhibition can alleviate these effects but typically linearizes responses and reduces amplification gain. Here, we study nonlinear transient amplification (NTA), a plausible alternative mechanism that reconciles strong recurrent excitation with rapid amplification while avoiding the above issues. NTA has two distinct temporal phases. Initially, positive feedback excitation selectively amplifies inputs that exceed a critical threshold. Subsequently, short-term plasticity quenches the run-away dynamics into an inhibition-stabilized network state. By characterizing NTA in supralinear network models, we establish that the resulting onset transients are stimulus selective and well-suited for speedy information processing. Further, we find that excitatory-inhibitory co-tuning widens the parameter regime in which NTA is possible in the absence of persistent activity. In summary, NTA provides a parsimonious explanation for how excitatory-inhibitory co-tuning and short-term plasticity collaborate in recurrent networks to achieve transient amplification.
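A toy two-population rate model in Python conveying the two phases (all parameters are invented for illustration; the paper analyzes full supralinear networks): supralinear recurrent excitation amplifies supra-threshold input, and short-term depression on the recurrent excitatory synapses quenches the run-away.

    import numpy as np

    dt, T = 1e-4, 0.5
    tau_e, tau_i, tau_x = 20e-3, 10e-3, 200e-3
    Jee, Jei, Jie, Jii, U = 1.8, 1.0, 1.0, 0.6, 0.2
    f = lambda u: np.minimum(np.maximum(u, 0.0) ** 2.0, 1e2)  # supralinear, capped as a numerical guard

    re, ri, x, rec = 0.0, 0.0, 1.0, []
    for step in range(int(T / dt)):
        stim = 2.0 if 0.1 < step * dt < 0.3 else 0.0          # input pulse above threshold
        re += dt / tau_e * (-re + f(Jee * x * re - Jei * ri + stim))
        ri += dt / tau_i * (-ri + f(Jie * re - Jii * ri))
        x += dt * ((1.0 - x) / tau_x - U * x * re)            # short-term depression of E->E
        rec.append(re)
    # rec shows a strong onset transient that short-term depression subsequently quenches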


Subjects
Nerve Net/physiology; Neuronal Plasticity; Neurons/physiology; Action Potentials; Computer Simulation; Humans; Models, Neurological; Synapses/physiology
7.
Nat Neurosci ; 24(7): 1010-1019, 2021 07.
Article in English | MEDLINE | ID: mdl-33986551

ABSTRACT

Synaptic plasticity is believed to be a key physiological mechanism for learning. It is well established that it depends on pre- and postsynaptic activity. However, models that rely solely on pre- and postsynaptic activity for synaptic changes have, so far, not been able to account for learning complex tasks that demand credit assignment in hierarchical networks. Here we show that if synaptic plasticity is regulated by high-frequency bursts of spikes, then pyramidal neurons higher in a hierarchical circuit can coordinate the plasticity of lower-level connections. Using simulations and mathematical analyses, we demonstrate that, when paired with short-term synaptic dynamics, regenerative activity in the apical dendrites and synaptic plasticity in feedback pathways, a burst-dependent learning rule can solve challenging tasks that require deep network architectures. Our results demonstrate that well-known properties of dendrites, synapses and synaptic plasticity are sufficient to enable sophisticated learning in hierarchical circuits.
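A Python caricature of the core sign rule described in the abstract: potentiate when the postsynaptic event is a burst, depress when it is a single spike, relative to a running burst-fraction average. The dendritic mechanics and the credit-assignment machinery of the paper are omitted here, and all numbers are invented.

    import numpy as np

    rng = np.random.default_rng(1)
    eta, tau = 0.01, 50.0
    w, pbar = 0.5, 0.5                         # weight and running burst fraction
    for trial in range(500):
        pre = rng.random() < 0.8               # presynaptic spike on this trial?
        p_burst = 0.5 + 0.4 * np.sin(trial / 50)  # stand-in for a top-down teaching signal
        burst = float(rng.random() < p_burst)     # postsynaptic burst (1) or single spike (0)
        if pre:
            w = np.clip(w + eta * (burst - pbar), 0.0, 1.0)  # burst => LTP, single => LTD
        pbar += (burst - pbar) / tau
    print(round(w, 3))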


Subjects
Deep Learning; Learning/physiology; Models, Neurological; Neuronal Plasticity/physiology; Pyramidal Cells/physiology; Animals; Humans
8.
Neuron ; 109(4): 571-575, 2021 02 17.
Article in English | MEDLINE | ID: mdl-33600754

ABSTRACT

Recent research resolves the challenging problem of building biophysically plausible spiking neural models that are also capable of complex information processing. This advance creates new opportunities in neuroscience and neuromorphic engineering, which we discussed at an online focus meeting.


Subjects
Biomedical Engineering/trends; Models, Neurological; Neural Networks, Computer; Neurosciences/trends; Biomedical Engineering/methods; Forecasting; Humans; Neurons/physiology; Neurosciences/methods
9.
Neural Comput ; 33(4): 899-925, 2021 03 26.
Article in English | MEDLINE | ID: mdl-33513328

ABSTRACT

Brains process information in spiking neural networks. Their intricate connections shape the diverse functions these networks perform. Yet how network connectivity relates to function is poorly understood, and the functional capabilities of models of spiking networks are still rudimentary. The lack of both theoretical insight and practical algorithms to find the necessary connectivity poses a major impediment to both studying information processing in the brain and building efficient neuromorphic hardware systems. The training algorithms that solve this problem for artificial neural networks typically rely on gradient descent. But doing so in spiking networks has remained challenging due to the nondifferentiable nonlinearity of spikes. To avoid this issue, one can employ surrogate gradients to discover the required connectivity. However, the choice of a surrogate is not unique, raising the question of how its implementation influences the effectiveness of the method. Here, we use numerical simulations to systematically study how essential design parameters of surrogate gradients affect learning performance on a range of classification problems. We show that surrogate gradient learning is robust to different shapes of underlying surrogate derivatives, but the choice of the derivative's scale can substantially affect learning performance. When we combine surrogate gradients with suitable activity regularization techniques, spiking networks perform robust information processing at the sparse activity limit. Our study provides a systematic account of the remarkable robustness of surrogate gradient learning and serves as a practical guide to model functional spiking neural networks.
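The sketch below spells out what "shape" and "scale" mean here, using three surrogate derivatives common in this literature (the names and normalizations are ours): each takes the membrane distance to threshold and a steepness beta, and the paper's finding is that learning tolerates different shapes but is sensitive to the choice of beta.

    import numpy as np

    def fast_sigmoid(v, beta=10.0):          # SuperSpike-style surrogate derivative
        return 1.0 / (beta * np.abs(v) + 1.0) ** 2

    def sigmoid_prime(v, beta=10.0):
        s = 1.0 / (1.0 + np.exp(-beta * v))
        return beta * s * (1.0 - s)

    def piecewise_linear(v, beta=10.0):      # triangular, straight-through-like
        return np.maximum(0.0, 1.0 - beta * np.abs(v))

    v = np.linspace(-1.0, 1.0, 201)          # membrane potential minus threshold
    for g in (fast_sigmoid, sigmoid_prime, piecewise_linear):
        print(g.__name__, float(g(v).max()), float(g(np.array(0.1), beta=100.0)))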


Subjects
Neural Networks, Computer; Neurons; Algorithms; Brain; Learning
10.
Nat Neurosci ; 22(11): 1761-1770, 2019 11.
Article in English | MEDLINE | ID: mdl-31659335

ABSTRACT

Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, cognitive and motor tasks. Conversely, artificial intelligence attempts to design computational systems based on the tasks they will have to solve. In artificial neural networks, the three components specified by design are the objective functions, the learning rules and the architectures. With the growing success of deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.


Subjects
Artificial Intelligence; Deep Learning; Neural Networks, Computer; Animals; Brain/physiology; Humans
11.
J Physiol ; 596(20): 4945-4967, 2018 10.
Article in English | MEDLINE | ID: mdl-30051910

ABSTRACT

KEY POINTS: During the computation of sound localization, neurons of the lateral superior olive (LSO) integrate synaptic excitation arising from the ipsilateral ear with inhibition from the contralateral ear. We characterized the functional connectivity of the inhibitory and excitatory inputs onto LSO neurons in terms of unitary synaptic strength and convergence. Unitary IPSCs can generate large conductances, although their strength varies over a 10-fold range in a given recording. By contrast, excitatory inputs are relatively weak. The conductance associated with IPSPs needs to be at least 2-fold stronger than the excitatory one to guarantee effective inhibition of action potential (AP) firing. Computational modelling showed that strong unitary inhibition ensures an appropriate slope and midpoint of the tuning curve of LSO neurons. Conversely, weak but numerous excitatory inputs filter out spontaneous AP firing from upstream auditory neurons.

ABSTRACT: The lateral superior olive (LSO) is a binaural nucleus in the auditory brainstem in which excitation from the ipsilateral ear is integrated with inhibition from the contralateral ear. It is unknown whether the strength of the unitary inhibitory and excitatory inputs is adapted to allow for optimal tuning curves of LSO neuron action potential (AP) firing. Using electrical and optogenetic stimulation of afferent synapses, we found that the strength of unitary inhibitory inputs to a given LSO neuron can vary over a ∼10-fold range, follows a roughly log-normal distribution, and, on average, causes a large conductance (9 nS). Conversely, unitary excitatory inputs, stimulated optogenetically under the bushy-cell specific promoter Math5, were numerous, and each caused a small conductance change (0.7 nS). Approximately five to seven bushy cell inputs had to be active simultaneously to bring an LSO neuron to fire. In double stimulation experiments, the effective inhibition window caused by IPSPs was short (1-3 ms) and its length depended on the inhibitory conductance; an ∼2-fold stronger inhibition than excitation was needed to suppress AP firing. Computational modelling suggests that few, but strong, unitary IPSPs create a tuning curve of LSO neuron firing with an appropriate slope and midpoint. Furthermore, weak but numerous excitatory inputs reduce the spontaneous AP firing that LSO neurons would otherwise inherit from their upstream auditory neurons. Thus, the specific connectivity and strength of unitary excitatory and inhibitory inputs to LSO neurons is optimized for the computations performed by these binaural neurons.
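A quick check of the reported conductance budget, using only numbers from the abstract: five to seven ~0.7 nS excitatory inputs put threshold excitation at roughly 3.5-4.9 nS, so a single average-strength 9 nS IPSC is already close to or above the ~2-fold inhibition-to-excitation ratio required to suppress firing.

    g_exc_unit, g_inh_unit = 0.7, 9.0   # nS, unitary conductances from the abstract
    for n_exc in (5, 6, 7):             # simultaneous bushy-cell inputs needed for an AP
        g_thr = n_exc * g_exc_unit
        print(n_exc, round(g_thr, 2), round(g_inh_unit / g_thr, 2))  # ratio near or above ~2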


Subjects
Excitatory Postsynaptic Potentials; Inhibitory Postsynaptic Potentials; Sound Localization; Superior Olivary Complex/physiology; Animals; Female; Male; Mice; Mice, Inbred C57BL; Neurons/metabolism; Neurons/physiology; Superior Olivary Complex/cytology
12.
Neural Comput ; 30(6): 1514-1541, 2018 06.
Article in English | MEDLINE | ID: mdl-29652587

ABSTRACT

The vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike time patterns.
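In code, the heart of the rule is a product of three locally available factors. The sketch below shows one update for one synapse; the full rule additionally convolves the eligibility with the postsynaptic potential kernel, which we omit, and the fast-sigmoid surrogate with steepness beta is a standard choice rather than the paper's exact parametrization.

    def superspike_update(w, pre_trace, v_post, error, eta=1e-3, beta=10.0):
        """One step of a SuperSpike-style three-factor update for a single synapse.
        pre_trace: low-pass filtered presynaptic spike train
        v_post:    postsynaptic membrane potential relative to threshold
        error:     (filtered) output error, delivered by the feedback pathway"""
        surrogate = 1.0 / (beta * abs(v_post) + 1.0) ** 2  # in place of the spike derivative
        eligibility = surrogate * pre_trace                # Hebbian coincidence, made smooth
        return w + eta * error * eligibility               # third factor gates the change

    print(superspike_update(w=0.2, pre_trace=0.8, v_post=-0.1, error=0.5))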

13.
Curr Opin Neurobiol ; 43: 166-176, 2017 04.
Article in English | MEDLINE | ID: mdl-28431369

ABSTRACT

Hebbian plasticity, a synaptic mechanism which detects and amplifies co-activity between neurons, is considered a key ingredient underlying learning and memory in the brain. However, Hebbian plasticity alone is unstable, leading to runaway neuronal activity, and therefore requires stabilization by additional compensatory processes. Traditionally, a diversity of homeostatic plasticity phenomena found in neural circuits is thought to play this role. However, recent modelling work suggests that the slow evolution of homeostatic plasticity, as observed in experiments, is insufficient to prevent instabilities originating from Hebbian plasticity. To remedy this situation, we suggest that homeostatic plasticity is complemented by additional rapid compensatory processes, which rapidly stabilize neuronal activity on short timescales.
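The instability argument can be made concrete in a few lines of Python. We use a BCM-style sliding threshold as a stand-in for homeostasis (a simplification we introduce, not the review's specific model): Hebbian growth is contained only when the threshold adapts quickly, whereas a slow threshold, like the slow homeostatic timescales seen in experiments, lets activity run away.

    import numpy as np

    def simulate(tau, T=100.0, dt=1e-3, x=1.0, y0=1.0):
        """Hebbian rule dw/dt = x*y*(y - theta) with a sliding threshold
        theta -> <y^2>/y0 tracked on timescale tau. Returns the final
        weight, or inf if the dynamics diverged."""
        w, th = 0.5, 0.1
        for _ in range(int(T / dt)):
            y = w * x
            w += dt * x * y * (y - th)
            th += dt / tau * (y * y / y0 - th)
            if w > 1e6:
                return np.inf
        return w

    print(simulate(tau=0.1), simulate(tau=50.0))  # fast homeostasis settles; slow one diverges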


Subjects
Homeostasis; Learning/physiology; Neuronal Plasticity/physiology; Neurons/physiology; Synapses; Time Factors
14.
Article in English | MEDLINE | ID: mdl-28093557

ABSTRACT

We review a body of theoretical and experimental research on Hebbian and homeostatic plasticity, starting from a puzzling observation: while homeostasis of synapses found in experiments is a slow compensatory process, most mathematical models of synaptic plasticity use rapid compensatory processes (RCPs). Even worse, with the slow homeostatic plasticity reported in experiments, simulations of existing plasticity models cannot maintain network stability unless further control mechanisms are implemented. To solve this paradox, we suggest that in addition to slow forms of homeostatic plasticity there are RCPs which stabilize synaptic plasticity on short timescales. These rapid processes may include heterosynaptic depression triggered by episodes of high postsynaptic firing rate. While slower forms of homeostatic plasticity are not sufficient to stabilize Hebbian plasticity, they are important for fine-tuning neural circuits. Taken together we suggest that learning and memory rely on an intricate interplay of diverse plasticity mechanisms on different timescales which jointly ensure stability and plasticity of neural circuits. This article is part of the themed issue 'Integrating Hebbian and homeostatic plasticity'.


Subjects
Homeostasis; Learning; Memory; Neuronal Plasticity; Animals; Models, Neurological
15.
Proc Mach Learn Res ; 70: 3987-3995, 2017.
Article in English | MEDLINE | ID: mdl-31909397

ABSTRACT

While deep learning has led to remarkable advances across diverse applications, it struggles in domains where the data distribution changes over the course of learning. In stark contrast, biological neural networks continually adapt to changing domains, possibly by leveraging complex molecular machinery to solve many tasks simultaneously. In this study, we introduce intelligent synapses that bring some of this biological complexity into artificial neural networks. Each synapse accumulates task relevant information over time, and exploits this information to rapidly store new memories without forgetting old ones. We evaluate our approach on continual learning of classification tasks, and show that it dramatically reduces forgetting while maintaining computational efficiency.
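The mechanism can be stated compactly in code. The PyTorch sketch below follows the paper's recipe as we understand it: a path-integral importance omega is accumulated online, converted into a per-parameter stiffness Omega at task boundaries, and a quadratic penalty anchors important parameters; the hyperparameters c and xi are placeholders.

    import torch

    class SynapticIntelligence:
        """Per-parameter importance tracking, simplified from the paper."""
        def __init__(self, params, c=0.1, xi=1e-3):
            self.params = list(params)
            self.c, self.xi = c, xi
            self.omega = [torch.zeros_like(p) for p in self.params]
            self.Omega = [torch.zeros_like(p) for p in self.params]
            self.anchor = [p.detach().clone() for p in self.params]
            self.prev = [p.detach().clone() for p in self.params]

        def accumulate(self):
            # call after each optimizer step: path integral of -grad * parameter change
            for p, om, pr in zip(self.params, self.omega, self.prev):
                if p.grad is not None:
                    om -= p.grad * (p.detach() - pr)
                pr.copy_(p.detach())

        def consolidate(self):
            # call at a task boundary: fold omega into the protective stiffness Omega
            for p, om, Om, an in zip(self.params, self.omega, self.Omega, self.anchor):
                Om += om / ((p.detach() - an) ** 2 + self.xi)
                an.copy_(p.detach())
                om.zero_()

        def penalty(self):
            # quadratic surrogate loss protecting parameters important for old tasks
            return self.c * sum((Om * (p - an) ** 2).sum()
                                for p, Om, an in zip(self.params, self.Omega, self.anchor))

In use, one would call si.accumulate() after each optimizer step, add si.penalty() to the task loss, and call si.consolidate() when switching tasks.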

17.
Nat Commun ; 6: 6922, 2015 Apr 21.
Article in English | MEDLINE | ID: mdl-25897632

ABSTRACT

Synaptic plasticity, the putative basis of learning and memory formation, manifests in various forms and across different timescales. Here we show that the interaction of Hebbian homosynaptic plasticity with rapid non-Hebbian heterosynaptic plasticity is, when complemented with slower homeostatic changes and consolidation, sufficient for assembly formation and memory recall in a spiking recurrent network model of excitatory and inhibitory neurons. In the model, assemblies were formed during repeated sensory stimulation and characterized by strong recurrent excitatory connections. Even days after formation, and despite ongoing network activity and synaptic plasticity, memories could be recalled through selective delay activity following the brief stimulation of a subset of assembly neurons. Blocking any component of plasticity prevented stable functioning as a memory network. Our modelling results suggest that the diversity of plasticity phenomena in the brain is orchestrated towards achieving common functional goals.


Subjects
Models, Biological; Nerve Net/physiology; Neuronal Plasticity; Animals; Computer Simulation
18.
J Neurosci ; 35(3): 1319-34, 2015 Jan 21.
Article in English | MEDLINE | ID: mdl-25609644

ABSTRACT

Synaptic plasticity, a key process for memory formation, manifests itself across different time scales ranging from a few seconds for plasticity induction up to hours or even years for consolidation and memory retention. We developed a three-layered model of synaptic consolidation that accounts for data across a large range of experimental conditions. Consolidation occurs in the model through the interaction of the synaptic efficacy with a scaffolding variable by a read-write process mediated by a tagging-related variable. Plasticity-inducing stimuli modify the efficacy, but the state of tag and scaffold can only change if a write protection mechanism is overcome. Our model makes a link from depotentiation protocols in vitro to behavioral results regarding the influence of novelty on inhibitory avoidance memory in rats.
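A deliberately crude Python rendering of the three-variable chain (efficacy, tag, scaffold), with write protection reduced to a simple magnitude gate. The published model uses bistable dynamics for each variable, so treat this only as a reading aid; all constants are invented.

    def consolidation_step(w, z, w_tilde, dt=1.0, tau_tag=60.0, tau_sc=3600.0, gate=0.5):
        """One step of a toy efficacy (w) -> tag (z) -> scaffold (w_tilde) cascade.
        A change is written into the next, slower variable only if it is large
        enough to overcome that variable's write protection."""
        if abs(w - z) > gate:           # induction overcomes the tag's protection
            z += dt / tau_tag * (w - z)
        if abs(z - w_tilde) > gate:     # a sustained tag overcomes the scaffold's protection
            w_tilde += dt / tau_sc * (z - w_tilde)
        w += dt / tau_tag * (w_tilde - w) * 0.1  # the scaffold slowly stabilizes the efficacy
        return w, z, w_tilde

    state = (2.0, 0.0, 0.0)             # strong potentiation of w; tag and scaffold naive
    for _ in range(10000):
        state = consolidation_step(*state)
    print([round(v, 2) for v in state])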


Subjects
Action Potentials/physiology; Models, Neurological; Neuronal Plasticity/physiology; Neurons/physiology; Synapses/physiology; Animals; Computer Simulation
19.
Front Neuroinform ; 8: 76, 2014.
Article in English | MEDLINE | ID: mdl-25309418

ABSTRACT

To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators Brian, NEST, and NEURON, as well as our own simulator, Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.
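For a flavor of the kind of workload benchmarked, here is a minimal plastic (STDP) network in Brian 2, timed against the wall clock to estimate the real-time factor. The model and sizes are illustrative stand-ins, not the benchmark networks from the paper (which also predates Brian 2's current API).

    from brian2 import *
    import time

    N, F, gmax = 1000, 15*Hz, 0.01
    P = PoissonGroup(N, rates=F)
    G = NeuronGroup(100, 'dv/dt = -v / (10*ms) : 1',
                    threshold='v>1', reset='v=0', method='exact')
    S = Synapses(P, G,
                 '''w : 1
                    dapre/dt = -apre/(20*ms) : 1 (event-driven)
                    dapost/dt = -apost/(20*ms) : 1 (event-driven)''',
                 on_pre='''v_post += w
                           apre += 0.01*gmax
                           w = clip(w + apost, 0, gmax)''',
                 on_post='''apost += -0.0105*gmax
                            w = clip(w + apre, 0, gmax)''')
    S.connect(p=0.1)
    S.w = 'rand() * gmax'

    simulated = 10*second
    t0 = time.time()
    run(simulated)
    print('real-time factor:', (time.time() - t0) / float(simulated / second))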

20.
PLoS Comput Biol ; 9(11): e1003330, 2013.
Article in English | MEDLINE | ID: mdl-24244138

ABSTRACT

Hebbian changes of excitatory synapses are driven by and further enhance correlations between pre- and postsynaptic activities. Hence, Hebbian plasticity forms a positive feedback loop that can lead to instability in simulated neural networks. To keep activity at healthy, low levels, plasticity must therefore incorporate homeostatic control mechanisms. We find in numerical simulations of recurrent networks with a realistic triplet-based spike-timing-dependent plasticity rule (triplet STDP) that homeostasis has to detect rate changes on a timescale of seconds to minutes to keep the activity stable. We confirm this result in a generic mean-field formulation of network activity and homeostatic plasticity. Our results strongly suggest the existence of a homeostatic regulatory mechanism that reacts to firing rate changes on the order of seconds to minutes.
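To make the timescale claim concrete, here is a mean-field caricature in Python (our simplification: in the mean-field limit, triplet STDP behaves like a BCM rule with a sliding threshold; all constants are invented). Scanning the homeostatic time constant tau separates stable from run-away dynamics; linearization around the fixed point puts the critical value at 1/eta, here 10 s, consistent with a seconds-to-minutes requirement.

    import numpy as np

    def diverges(tau, T=300.0, dt=1e-3, r_pre=1.0, r0=1.0, eta=0.1):
        """BCM-like mean field: dw/dt = eta * r_pre * r * (r - theta), with the
        homeostatic threshold theta tracking <r^2>/r0 on timescale tau."""
        w, th = 1.5, 1.0
        for _ in range(int(T / dt)):
            r = w * r_pre
            w += dt * eta * r_pre * r * (r - th)
            th += dt / tau * (r * r / r0 - th)
            if w > 1e6:
                return True
        return False

    for tau in (1.0, 5.0, 60.0, 600.0):
        print(tau, diverges(tau))   # short tau: stable; long tau: run-away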


Subjects
Homeostasis/physiology; Models, Neurological; Neuronal Plasticity/physiology; Neurons/physiology; Synapses/physiology; Action Potentials; Computational Biology; Computer Simulation