Results 1 - 5 of 5
1.
J Neurosci; 36(49): 12338-12350, 2016 Dec 7.
Article in English | MEDLINE | ID: mdl-27927954

ABSTRACT

A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have been primarily used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features, despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas the linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by "explaining away," a divisive competition between alternative interpretations of the auditory scene.

SIGNIFICANCE STATEMENT: Neural responses in the auditory cortex are dynamic, nonlinear, and hard to predict. Traditionally, encoding models have been used to describe neural responses as a function of the stimulus. However, in addition to external stimulation, neural activity is strongly modulated by the responses of other neurons in the network. We hypothesized that auditory neurons aim to collectively decode their stimulus. In particular, a stimulus feature that is decoded (or explained away) by one neuron is not explained by another. We demonstrated that this novel Bayesian decoding model is better at capturing the dynamic responses of cortical neurons in ferrets. Whereas the linear encoding model poorly reflects the selectivity of neurons, the decoding model can account for the strong nonlinearities observed in neural data.
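As a rough intuition for the "explaining away" competition described in this abstract, the sketch below runs a generic non-negative decoding loop in which each unit is driven only by the stimulus residual left unexplained by the other units. The dictionary D, the step size, and the iteration count are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a dictionary of spectro-temporal features, one
# per model neuron, and a noisy stimulus built from those features.
n_features, stim_dim = 8, 64
D = rng.standard_normal((stim_dim, n_features))
D /= np.linalg.norm(D, axis=0)                 # unit-norm decoding filters
true_coeffs = rng.exponential(1.0, n_features)
stimulus = D @ true_coeffs + 0.1 * rng.standard_normal(stim_dim)

# Iterative inference with "explaining away": each unit is driven only
# by the part of the stimulus not yet explained by the other units, so
# a feature decoded by one neuron is removed from the evidence for the rest.
responses = np.zeros(n_features)
for _ in range(200):
    residual = stimulus - D @ responses        # unexplained stimulus
    responses = np.maximum(responses + 0.1 * (D.T @ residual), 0.0)

print(np.round(responses, 2))
```

Because every unit subtracts the others' reconstructions before updating, two units with overlapping filters end up dividing the evidence between them rather than both responding at full strength, which is the divisive competition the abstract refers to.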


Subject(s)
Auditory Perception/physiology; Ferrets/physiology; Sensory Receptor Cells/physiology; Acoustic Stimulation; Algorithms; Animals; Auditory Cortex/physiology; Bayes Theorem; Female; Male; Models, Neurological; Nerve Net/cytology; Nerve Net/physiology; Noise; Phonetics
2.
Curr Biol; 25(4): 530-5, 2015 Feb 16.
Article in English | MEDLINE | ID: mdl-25660537

ABSTRACT

At present, it is largely unclear how the human brain optimally learns foreign languages. We investigated teaching strategies that utilize complementary information ("enrichment"), such as pictures or gestures, to optimize vocabulary learning outcomes. We found that learning while performing gestures was more efficient than the common practice of learning with pictures, and that both enrichment strategies were better than learning without enrichment ("verbal learning"). We tested the prediction of an influential cognitive neuroscience theory that provides explanations for the beneficial behavioral effects of enrichment: the "multisensory learning theory" attributes the benefits of enrichment to recruitment of brain areas specialized in processing the enrichment. To test this prediction, we asked participants to translate auditorily presented foreign words during fMRI. Multivariate pattern classification allowed us to decode from the brain activity under which enrichment condition the vocabulary had been learned. The visual-object-sensitive lateral occipital complex (LOC) represented auditory words that had been learned with pictures. The biological motion superior temporal sulcus (bmSTS) and motor areas represented auditory words that had been learned with gestures. Importantly, brain activity in these specialized visual and motor brain areas correlated with behavioral performance. The cortical activation pattern found in the present study strongly supports the multisensory learning theory in contrast to alternative explanations. In addition, the results highlight the importance of learning foreign language vocabulary with enrichment, particularly with self-performed gestures.
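The multivariate pattern classification step can be sketched generically with scikit-learn. The data below are synthetic stand-ins for the study's fMRI patterns, and the region and condition labels are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in data: one multi-voxel pattern per trial from a region of
# interest such as LOC, labelled by enrichment condition
# (0 = picture-enriched, 1 = gesture-enriched).
n_trials, n_voxels = 120, 200
X = rng.standard_normal((n_trials, n_voxels))
y = rng.integers(0, 2, n_trials)
X[y == 1] += 0.3                  # inject a decodable condition signal

# Cross-validated pattern classification: accuracy above chance (0.5)
# means the region's activity carries information about how the
# vocabulary was learned.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f}")
```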


Subject(s)
Language; Learning; Motor Cortex/physiology; Visual Cortex/physiology; Adult; Female; Germany; Gestures; Humans; Magnetic Resonance Imaging; Male; Photic Stimulation; Verbal Learning; Young Adult
3.
PLoS Comput Biol; 9(9): e1003219, 2013.
Article in English | MEDLINE | ID: mdl-24068902

ABSTRACT

Our knowledge about the computational mechanisms underlying human learning and recognition of sound sequences, especially speech, is still very limited. One difficulty in deciphering the exact means by which humans recognize speech is that experimental findings at the neuronal, microscopic level are scarce. Here, we show that our neuronal-computational understanding of speech learning and recognition may be vastly improved by looking at an animal model, i.e., the songbird, which faces the same challenge as humans: to learn and decode complex auditory input in an online fashion. Motivated by striking similarities between the human and songbird neural recognition systems at the macroscopic level, we assumed that the human brain uses the same computational principles at a microscopic level and translated a birdsong model into a novel human sound learning and recognition model with an emphasis on speech. We show that the resulting Bayesian model with a hierarchy of nonlinear dynamical systems can learn speech samples such as words rapidly and recognize them robustly, even in adverse conditions. In addition, we show that recognition can be performed even when words are spoken by different speakers and with different accents, an everyday situation in which current state-of-the-art speech recognition models often fail. The model can also be used to qualitatively explain behavioral data on human speech learning and to derive predictions for future experiments.
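A minimal caricature of "recognition by inverting a generative model of sound sequences" is sketched below. The two toy words, the first-order dynamics, and the error-based model comparison are illustrative simplifications of the paper's hierarchy of nonlinear dynamical systems.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "words", each a slow sequence of acoustic targets that drives
# fast first-order dynamics toward each target in turn.
words = {"aba": [0.0, 1.0, 0.0], "abb": [0.0, 1.0, 1.0]}

def generate(targets, steps_per_target=30, tau=5.0, noise=0.0):
    x, trace = 0.0, []
    for target in targets:                 # slow level: which sound is next
        for _ in range(steps_per_target):  # fast level: relax toward it
            x += (target - x) / tau + noise * rng.standard_normal()
            trace.append(x)
    return np.array(trace)

# Recognition as model comparison: run each word's generative model
# alongside the noisy input and accumulate squared prediction error;
# the word whose model best explains the input wins.
observed = generate(words["abb"], noise=0.05)
errors = {w: np.sum((observed - generate(t)) ** 2) for w, t in words.items()}
print(min(errors, key=errors.get))         # -> abb
```

Because the comparison runs sample by sample, the same scheme works online: the error gap between candidate words grows as evidence accumulates, which is the flavor of robustness the abstract claims for adverse listening conditions.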


Subject(s)
Animal Communication; Birds/physiology; Speech; Animals; Bayes Theorem; Humans
4.
Neural Netw; 35: 1-9, 2012 Nov.
Article in English | MEDLINE | ID: mdl-22885243

ABSTRACT

An echo state network (ESN) consists of a large, randomly connected neural network, the reservoir, which is driven by an input signal and projects to output units. During training, only the connections from the reservoir to these output units are learned. A key requisite for output-only training is the echo state property (ESP), which means that the effect of initial conditions should vanish as time passes. In this paper, we use analytical examples to show that a widely used criterion for the ESP, the spectral radius of the weight matrix being smaller than unity, is not sufficient to guarantee the ESP. We obtain these examples by investigating local bifurcation properties of standard ESNs. Moreover, we provide new sufficient conditions for the echo state property of standard sigmoid and leaky-integrator ESNs. We furthermore suggest an improved technical definition of the echo state property and discuss what practitioners should (and should not) observe when they optimize their reservoirs for specific tasks.
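A minimal standard ESN with output-only training looks roughly as follows. The sketch uses the widely cited spectral-radius scaling that this paper scrutinizes; reservoir size, scaling factor, washout length, and the toy next-step prediction task are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random reservoir scaled to spectral radius 0.9, i.e. the common
# rule of thumb the paper shows is not sufficient on its own for the ESP.
n_in, n_res, n_steps = 1, 100, 500
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

u = np.sin(0.1 * np.arange(n_steps))[:, None]    # toy input signal
target = np.roll(u[:, 0], -1)                    # task: predict next sample

# Drive the reservoir; only the readout is trained afterwards.
x = np.zeros(n_res)
states = np.empty((n_steps, n_res))
for t in range(n_steps):
    x = np.tanh(W @ x + W_in @ u[t])
    states[t] = x

# Discard the initial transient (the part the ESP says must be
# forgotten), then fit the readout by ridge regression.
washout = 100
S, y = states[washout:], target[washout:]
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ y)
print("train MSE:", np.mean((S @ W_out - y) ** 2))
```

The washout step only makes sense if the ESP actually holds; with a reservoir that violates it, two different initial states can lead to permanently different state trajectories for the same input, which is exactly the failure mode the paper's counterexamples exhibit.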


Subject(s)
Neural Networks, Computer; Neurons/physiology; Nonlinear Dynamics; Algorithms; Computer Simulation; Learning; Models, Neurological; Time Factors
5.
PLoS Comput Biol; 7(12): e1002303, 2011 Dec.
Article in English | MEDLINE | ID: mdl-22194676

ABSTRACT

The neuronal system underlying learning, generation, and recognition of song in birds is one of the best-studied systems in the neurosciences. Here, we use these experimental findings to derive a neurobiologically plausible, dynamic, hierarchical model of birdsong generation and transform it into a functional model of birdsong recognition. The generation model consists of neuronal rate models and includes critical anatomical components such as the premotor song-control nucleus HVC (proper name), the premotor nucleus RA (robust nucleus of the arcopallium), and a model of the syringeal and respiratory organs. We use Bayesian inference on this dynamical system to derive a possible mechanism for how birds can efficiently and robustly recognize the songs of their conspecifics in an online fashion. Our results indicate that the specific way birdsong is generated enables a listening bird to robustly and rapidly perceive embedded information at multiple timescales of a song. The resulting mechanism can be useful for investigating the functional roles of auditory recognition areas and for providing predictions for future birdsong experiments.
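The generate-then-invert idea can be caricatured as follows. The feedforward chain standing in for HVC, the random linear RA stage, and the error-driven state update are crude illustrative stand-ins, not the paper's rate models or its actual Bayesian inversion.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy generative hierarchy: an HVC-like feedforward chain produces a
# wave of activity, and a random linear stage stands in for RA driving
# the song output.
n_hvc, n_ra, T, dt = 10, 5, 400, 0.05
W_chain = 2.0 * np.diag(np.ones(n_hvc - 1), -1)   # unit i-1 excites unit i
W_ra = 0.5 * rng.standard_normal((n_ra, n_hvc))

h = np.zeros(n_hvc)
h[0] = 1.0
song = np.empty((T, n_ra))
for t in range(T):
    h += dt * (-h + np.tanh(W_chain @ h))         # sequential HVC activity
    song[t] = np.tanh(W_ra @ h)                   # RA-driven song output

# Recognition sketch: run the same generative model against a noisy
# song and nudge its hidden state with the prediction error, a crude
# stand-in for online Bayesian inversion of the generative model.
obs = song + 0.1 * rng.standard_normal(song.shape)
h_est = np.zeros(n_hvc)
h_est[0] = 1.0
total_err = 0.0
for t in range(T):
    e = obs[t] - np.tanh(W_ra @ h_est)            # prediction error
    h_est += dt * (-h_est + np.tanh(W_chain @ h_est)) + 0.1 * (W_ra.T @ e)
    total_err += np.sum(e ** 2)
print("accumulated prediction error:", round(total_err, 2))
```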


Subject(s)
Models, Neurological; Neurons/physiology; Songbirds/physiology; Vocalization, Animal/physiology; Animals; Association Learning/physiology; Bayes Theorem; Computer Simulation; Neural Pathways/physiology