Results 1 - 2 of 2
1.
Elife ; 10, 2021 07 26.
Article in English | MEDLINE | ID: mdl-34310281

ABSTRACT

For solving tasks such as recognizing a song, answering a question, or inverting a sequence of symbols, cortical microcircuits need to integrate and manipulate information that was dispersed over time during the preceding seconds. Creating biologically realistic models for the underlying computations, especially with spiking neurons and for behaviorally relevant integration time spans, is notoriously difficult. We examine the role of spike frequency adaptation in such computations and find that it has a surprisingly large impact. Including this well-known property of a substantial fraction of neocortical neurons, especially in higher areas of the human neocortex, raises the performance of spiking neural network models on temporally dispersed network inputs from a fairly low level up to the performance level of the human brain.
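A minimal sketch of the mechanism this abstract refers to: spike frequency adaptation modeled as an adaptive firing threshold on a leaky integrate-and-fire neuron, so that recent spiking leaves a slowly decaying trace in the neuron's excitability. This is an illustrative Python sketch under assumed parameter values (tau_m, tau_adapt, beta, the soft reset), not code or settings from the cited paper.

import numpy as np

def simulate_adaptive_lif(input_current, dt=1.0, tau_m=20.0, v_th0=1.0,
                          tau_adapt=200.0, beta=0.5):
    """Leaky integrate-and-fire neuron with an adaptive threshold (SFA)."""
    v = 0.0                                 # membrane potential
    a = 0.0                                 # adaptation variable
    spikes = np.zeros_like(input_current)
    alpha = np.exp(-dt / tau_m)             # fast membrane decay per step
    rho = np.exp(-dt / tau_adapt)           # slow adaptation decay per step
    for t, i_t in enumerate(input_current):
        v = alpha * v + i_t                 # leaky integration of the input
        if v >= v_th0 + beta * a:           # effective threshold rises with a
            spikes[t] = 1.0
            v -= v_th0                      # soft reset
            a = rho * a + 1.0               # threshold jumps after each spike
        else:
            a = rho * a                     # and relaxes back over ~tau_adapt
    return spikes

# A constant drive produces progressively sparser spiking as the threshold
# adapts; the slow variable `a` carries information about past activity.
print(int(simulate_adaptive_lif(np.full(1000, 0.12)).sum()))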


Subject(s)
Action Potentials/physiology; Models, Neurological; Neocortex/physiology; Nerve Net/physiology; Neurons/physiology; Adaptation, Physiological; Computers, Molecular; Humans; Neural Networks, Computer
2.
Nat Commun ; 11(1): 3625, 2020 07 17.
Article in English | MEDLINE | ID: mdl-32681001

ABSTRACT

Recurrently connected networks of spiking neurons underlie the astounding information processing capabilities of the brain. Yet in spite of extensive research, how they can learn through synaptic plasticity to carry out complex network computations remains unclear. We argue that two pieces of this puzzle were provided by experimental data from neuroscience. A mathematical result tells us how these pieces need to be combined to enable biologically plausible online network learning through gradient descent, in particular deep reinforcement learning. This learning method, called e-prop, approaches the performance of backpropagation through time (BPTT), the best-known method for training recurrent neural networks in machine learning. In addition, it suggests a method for powerful on-chip learning in energy-efficient spike-based hardware for artificial intelligence.
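As a rough illustration of the learning scheme this abstract describes (the exact equations are in the paper), the sketch below combines a purely local eligibility trace per synapse with a top-down learning signal, instead of backpropagating errors through time. The trace and signal definitions here are simplifying assumptions for illustration only.

import numpy as np

def eprop_style_update(pre_spikes, post_pseudo_deriv, learning_signals,
                       lr=1e-3, trace_decay=0.9):
    """Accumulate a weight change for one synapse over T time steps.

    pre_spikes:        (T,) presynaptic spike train (0/1)
    post_pseudo_deriv: (T,) surrogate derivative of the postsynaptic neuron
    learning_signals:  (T,) top-down error signal for the postsynaptic neuron
    """
    trace, dw = 0.0, 0.0
    for t in range(len(pre_spikes)):
        trace = trace_decay * trace + pre_spikes[t]   # local presynaptic trace
        eligibility = post_pseudo_deriv[t] * trace    # uses only local quantities
        dw += learning_signals[t] * eligibility       # gated by the learning signal
    return lr * dw

# Example call with random activity, only to show the interface.
rng = np.random.default_rng(0)
T = 100
print(eprop_style_update(rng.integers(0, 2, T).astype(float),
                         rng.random(T), rng.standard_normal(T)))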


Subject(s)
Brain/physiology; Models, Neurological; Nerve Net/physiology; Neurons/physiology; Reward; Action Potentials/physiology; Animals; Brain/cytology; Deep Learning; Humans; Mice; Neuronal Plasticity/physiology