Neural Comput ; 25(6): 1371-407, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23517096

ABSTRACT

The brain faces the problem of inferring reliable hidden causes from large populations of noisy neurons, for example, the direction of a moving object from spikes in area MT. It is known that a theoretically optimal likelihood decoding could be carried out by simple linear readout neurons if weights of synaptic connections were set to certain values that depend on the tuning functions of sensory neurons. We show here that such theoretically optimal readout weights emerge autonomously through spike-timing-dependent plasticity (STDP) in conjunction with lateral inhibition between readout neurons. In particular, we identify a class of optimal STDP learning rules with homeostatic plasticity, for which the autonomous emergence of optimal readouts can be explained on the basis of a rigorous learning theory. This theory shows that the network motif we consider approximates expectation-maximization (EM) for creating internal generative models for hidden causes of high-dimensional spike inputs. Notably, we find that this optimal functionality can be well approximated by a variety of STDP rules beyond those predicted by theory. Furthermore, we show that this learning process is very stable and automatically adjusts weights to changes in the number of readout neurons, the tuning functions of sensory neurons, and the statistics of external stimuli.
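The network motif described above (tuned sensory neurons, competing readout neurons, a local learning rule with homeostasis) can be illustrated with a deliberately simplified sketch. The code below is a rate-based caricature, not the paper's spiking STDP rule: Gaussian-tuned sensory neurons respond to a hidden direction, a hard winner-take-all stands in for lateral inhibition, a running win-count penalty stands in for homeostatic plasticity, and a Hebbian update moves the winner's weights toward the normalized input pattern. All names, parameters, and the specific update rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not from the paper): K hidden causes
# (e.g. motion directions) and N sensory neurons with cosine tuning.
K, N = 4, 40
prefs = np.linspace(0, 2 * np.pi, N, endpoint=False)   # preferred directions
causes = np.linspace(0, 2 * np.pi, K, endpoint=False)  # true hidden causes

def population_response(theta):
    """Noisy spike counts from cosine tuning curves (Poisson noise)."""
    rates = 5.0 * np.exp(2.0 * np.cos(prefs - theta))
    return rng.poisson(rates).astype(float)

# Readout weights, adapted by a winner-take-all Hebbian rule with a
# homeostatic bias -- a rate-based stand-in for STDP + lateral inhibition.
W = rng.normal(0.0, 0.01, size=(K, N))
wins = np.zeros(K)          # running win counts (homeostatic bookkeeping)
eta, lam = 0.05, 5.0
T = 4000
for t in range(T):
    theta = causes[rng.integers(K)]
    x = population_response(theta)
    xhat = x / x.sum()
    # Homeostatic penalty pushes over-active readouts below threshold,
    # so every readout gets recruited for some cause.
    score = W @ x - lam * (wins / (t + 1) - 1.0 / K)
    k = int(np.argmax(score))          # lateral inhibition: single winner
    wins[k] += 1
    W[k] += eta * (xhat - W[k])        # Hebbian move toward input pattern

# Evaluate: map each readout to its modal cause, then decode fresh samples.
trials = 500
true = rng.integers(K, size=trials)
winners = np.empty(trials, dtype=int)
for i, c in enumerate(true):
    x = population_response(causes[c])
    winners[i] = int(np.argmax(W @ x))   # no homeostatic bias at test time
mapping = np.full(K, -1)
for k in range(K):
    mask = winners == k
    if mask.any():
        mapping[k] = int(np.bincount(true[mask], minlength=K).argmax())
accuracy = float(np.mean(mapping[winners] == true))
print(f"decoding accuracy: {accuracy:.2f}")
```

After training, each readout specializes on one hidden cause and the population can be decoded by a plain argmax over readout activations, loosely mirroring the self-organizing behavior the abstract reports; the paper's actual analysis concerns spiking neurons and a rigorous EM interpretation that this sketch does not attempt to reproduce.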


Subject(s)
Action Potentials/physiology; Models, Neurological; Neuronal Plasticity/physiology; Neurons/physiology; Animals; Brain/cytology; Inhibition, Psychological; Learning; Models, Theoretical; Nerve Net/physiology; Neural Networks, Computer; Stochastic Processes; Synapses/physiology; Synaptic Transmission; Time Factors