Results 1 - 5 of 5
1.
J Cogn Neurosci ; 34(3): 411-424, 2022 02 01.
Article in English | MEDLINE | ID: mdl-35015867

ABSTRACT

Speech and music are spectrotemporally complex acoustic signals that are highly relevant for humans. Both contain a temporal fine structure that is encoded in the neural responses of subcortical and cortical processing centers. The subcortical response to the temporal fine structure of speech has recently been shown to be modulated by selective attention to one of two competing voices. Music similarly often consists of several simultaneous melodic lines, and a listener can selectively attend to a particular one at a time. However, the neural mechanisms that enable such selective attention remain largely enigmatic, not least since most investigations to date have focused on short and simplified musical stimuli. Here, we studied the neural encoding of classical musical pieces in human volunteers, using scalp EEG recordings. We presented volunteers with continuous musical pieces composed of one or two instruments. In the latter case, the participants were asked to selectively attend to one of the two competing instruments and to perform a vibrato identification task. We used linear encoding and decoding models to relate the recorded EEG activity to the stimulus waveform. We show that we can measure neural responses to the temporal fine structure of melodic lines played by a single instrument, at the population level as well as for most individual participants. The neural response peaks at a latency of 7.6 msec and is not measurable past 15 msec. When analyzing the neural responses to the temporal fine structure elicited by competing instruments, we found no evidence of attentional modulation. We observed, however, that low-frequency neural activity exhibited a modulation consistent with the behavioral task at latencies from 100 to 160 msec, in a similar manner to the attentional modulation observed in continuous speech (N100). Our results show that, much like speech, the temporal fine structure of music is tracked by neural activity.
In contrast to speech, however, this response appears unaffected by selective attention in the context of our experiment.
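The linear encoding models mentioned above can be illustrated with a minimal sketch: recovering a temporal response function (TRF) that maps a stimulus waveform to EEG by ridge regression over time lags. All data, names, and parameter values below are synthetic and illustrative, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_lags, lam = 5000, 20, 1e-2

stimulus = rng.standard_normal(n)          # stand-in for the music waveform
true_trf = np.exp(-np.arange(n_lags) / 5)  # assumed ground-truth response kernel

# Design matrix of lagged copies of the stimulus (lags 0 .. n_lags-1).
X = np.zeros((n, n_lags))
for lag in range(n_lags):
    X[lag:, lag] = stimulus[:n - lag]

# Synthetic "EEG": the stimulus convolved with the kernel, plus noise.
eeg = X @ true_trf + 0.5 * rng.standard_normal(n)

# Ridge regression estimate of the TRF: (X'X + lam*I)^-1 X'y.
trf_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)
```

With white-noise input, the estimated kernel closely matches the true one; a decoding (backward) model would simply swap the roles of stimulus and EEG.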


Subject(s)
Music , Speech Perception , Acoustic Stimulation/methods , Auditory Perception/physiology , Electroencephalography/methods , Humans , Speech , Speech Perception/physiology
2.
Neuroimage ; 200: 1-11, 2019 10 15.
Article in English | MEDLINE | ID: mdl-31212098

ABSTRACT

Humans are highly skilled at analysing complex acoustic scenes. The segregation of different acoustic streams and the formation of corresponding neural representations is mostly attributed to the auditory cortex. Decoding of selective attention from neuroimaging has therefore focussed on cortical responses to sound. However, the auditory brainstem response to speech is modulated by selective attention as well, as recently shown through measuring the brainstem's response to running speech. Although the response of the auditory brainstem has a smaller magnitude than that of the auditory cortex, it occurs at much higher frequencies and therefore has a higher information rate. Here we develop statistical models for extracting the brainstem response from multi-channel scalp recordings and for analysing the attentional modulation according to the focus of attention. We demonstrate that the attentional modulation of the brainstem response to speech can be employed to decode the attentional focus of a listener from short measurements of 10 s or less in duration. The decoding remains accurate when obtained from three EEG channels only. We further show that out-of-the-box decoding employing subject-independent models, as well as decoding that is independent of the specific attended speaker, achieves similar accuracy. These results open up new avenues for investigating the neural mechanisms for selective attention in the brainstem and for developing efficient auditory brain-computer interfaces.
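The short-segment attention decoding described above can be caricatured as a correlation classifier: the attended speaker is taken to be the one whose stimulus feature correlates more strongly with the recording. Everything below (white-noise stand-ins for the speakers' fundamental waveforms, the signal strength, the segment length) is a synthetic assumption, not the authors' statistical model.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, dur = 1000, 10                 # a 10 s segment at an illustrative rate
n = fs * dur

speaker_a = rng.standard_normal(n)  # stand-in feature for speaker A
speaker_b = rng.standard_normal(n)  # stand-in feature for speaker B

# Synthetic EEG: a weak response to the attended speaker (A) buried in noise.
eeg = 0.05 * speaker_a + rng.standard_normal(n)

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

# Decode attention from this single short segment.
attended = "A" if abs(corr(eeg, speaker_a)) > abs(corr(eeg, speaker_b)) else "B"
```

Even with a response fifty times weaker than the noise floor, ten seconds of data suffice here because the competing correlation hovers near zero.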


Subject(s)
Attention/physiology , Cerebral Cortex/physiology , Electroencephalography/methods , Evoked Potentials, Auditory, Brain Stem/physiology , Speech Perception/physiology , Adult , Female , Humans , Male , Young Adult
3.
J Neurosci ; 39(29): 5750-5759, 2019 07 17.
Article in English | MEDLINE | ID: mdl-31109963

ABSTRACT

Humans excel at understanding speech even in adverse conditions such as background noise. Speech processing may be aided by cortical activity in the delta and theta frequency bands, which have been found to track the speech envelope. However, the rhythm of non-speech sounds is tracked by cortical activity as well. It therefore remains unclear which aspects of neural speech tracking represent the processing of acoustic features, related to the clarity of speech, and which aspects reflect higher-level linguistic processing related to speech comprehension. Here we disambiguate the roles of cortical tracking for speech clarity and comprehension through recording EEG responses to native and foreign language in different levels of background noise, for which clarity and comprehension vary independently. We then use both a decoding and an encoding approach to relate clarity and comprehension to the neural responses. We find that cortical tracking in the theta frequency band is mainly correlated with clarity, whereas the delta band contributes most to speech comprehension. Moreover, we uncover an early neural component in the delta band that informs on comprehension and that may reflect a predictive mechanism for language processing. Our results disentangle the functional contributions of cortical speech tracking in the delta and theta bands to speech processing. They also show that both speech clarity and comprehension can be accurately decoded from relatively short segments of EEG recordings, which may have applications in future mind-controlled auditory prostheses.

SIGNIFICANCE STATEMENT: Speech is a highly complex signal whose processing requires analysis from lower-level acoustic features to higher-level linguistic information. Recent work has shown that neural activity in the delta and theta frequency bands tracks the rhythm of speech, but the role of this tracking for speech processing remains unclear. Here we disentangle the roles of cortical entrainment in different frequency bands and at different temporal lags for speech clarity, reflecting the acoustics of the signal, and speech comprehension, related to linguistic processing. We show that cortical speech tracking in the theta frequency band encodes mostly speech clarity, and thus acoustic aspects of the signal, whereas speech tracking in the delta band encodes higher-level speech comprehension.
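The band-specific analysis above rests on comparing envelope tracking separately within the delta (roughly 1-4 Hz) and theta (roughly 4-8 Hz) bands. A minimal sketch with synthetic signals, assuming standard Butterworth band-pass filtering rather than the authors' exact pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(2)
fs = 100
t = np.arange(0, 60, 1 / fs)

def bandpass(x, lo, hi, fs, order=4):
    # Zero-phase Butterworth band-pass filter.
    b, a = butter(order, [lo, hi], btype="band", fs=fs)
    return filtfilt(b, a, x)

# Synthetic speech envelope with both a slow (2 Hz) and a faster (6 Hz) rhythm.
envelope = np.sin(2 * np.pi * 2 * t) + np.sin(2 * np.pi * 6 * t)

# Synthetic EEG that tracks only the theta-band rhythm, buried in noise.
eeg = 0.3 * np.sin(2 * np.pi * 6 * t) + rng.standard_normal(t.size)

# Band-wise tracking: correlate EEG and envelope within each band.
r_delta = np.corrcoef(bandpass(eeg, 1, 4, fs), bandpass(envelope, 1, 4, fs))[0, 1]
r_theta = np.corrcoef(bandpass(eeg, 4, 8, fs), bandpass(envelope, 4, 8, fs))[0, 1]
```

Here theta-band tracking is strong and delta-band tracking is near zero by construction; in the study the two bands dissociate along clarity versus comprehension instead.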


Subject(s)
Acoustic Stimulation/methods , Auditory Cortex/physiology , Delta Rhythm/physiology , Noise , Speech Perception/physiology , Theta Rhythm/physiology , Adult , Electroencephalography/methods , Female , Humans , Male , Speech/physiology , Young Adult
4.
Elife ; 6, 2017 10 10.
Article in English | MEDLINE | ID: mdl-28992445

ABSTRACT

Humans excel at selectively listening to a target speaker in background noise such as competing voices. While the encoding of speech in the auditory cortex is modulated by selective attention, it remains debated whether such modulation occurs already in subcortical auditory structures. Investigating the contribution of the human brainstem to attention has, in particular, been hindered by the tiny amplitude of the brainstem response. Its measurement normally requires a large number of repetitions of the same short sound stimuli, which may lead to a loss of attention and to neural adaptation. Here we develop a mathematical method to measure the auditory brainstem response to running speech, an acoustic stimulus that does not repeat and that has a high ecological validity. We employ this method to assess the brainstem's activity when a subject listens to one of two competing speakers, and show that the brainstem response is consistently modulated by attention.
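The core of measuring a response to running (non-repeating) speech is cross-correlating the recording with a feature of the ongoing stimulus and reading off the latency of the correlation peak. A toy version with synthetic signals; the delay, response strength, and white-noise stand-in for the speech fundamental are all illustrative assumptions, not the paper's method in full.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n = 1000, 60000                     # 60 s at 1 kHz (illustrative)
fundamental = rng.standard_normal(n)    # stand-in for the speech fundamental waveform

# Synthetic EEG: a weak, delayed copy of the stimulus feature plus noise.
delay = 9                               # samples; ~9 ms, a brainstem-like latency
eeg = np.zeros(n)
eeg[delay:] = 0.1 * fundamental[:n - delay]
eeg += rng.standard_normal(n)

# Cross-correlate over a range of lags; the peak lag estimates the latency.
lags = np.arange(0, 30)
xcorr = np.array([np.dot(eeg[lag:], fundamental[:n - lag]) / (n - lag) for lag in lags])
latency_samples = int(lags[np.argmax(xcorr)])
```

Because the stimulus never repeats, the latency emerges from a single continuous recording, with no need for thousands of identical click or syllable presentations.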


Subject(s)
Attention , Evoked Potentials, Auditory, Brain Stem , Speech , Acoustic Stimulation , Adolescent , Adult , Female , Healthy Volunteers , Humans , Male , Models, Theoretical , Young Adult
5.
J Math Biol ; 70(3): 533-547, 2015 Feb.
Article in English | MEDLINE | ID: mdl-24623311

ABSTRACT

We consider a plant's local leaf area index as a spatially continuous variable, subject to particular reaction-diffusion dynamics of allocation, senescence and spatial propagation. The latter notably incorporates the plant's tendency to form new leaves in bright rather than shaded locations. Applying a generalized Beer-Lambert law allows us to link existing foliage to production dynamics. The approach allows for inter-individual variability and competition for light while maintaining robustness, a key weakness of comparable existing models. The analysis of the single-plant case leads to a significant simplification of the system's key equation, which transforms into the well-studied porous medium equation. Confronting the theoretical model with experimental data from sugar beet populations differing in configuration density demonstrates its accuracy.
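The Beer-Lambert law referenced above relates the light I reaching a point in the canopy to the leaf area index L above it, I(L) = I0 * exp(-k L). A tiny sketch with hypothetical parameter values (I0 and k below are not taken from the paper):

```python
import math

I0 = 1.0   # incident light above the canopy (hypothetical, normalized)
k = 0.6    # extinction coefficient (hypothetical)

def light(leaf_area_index):
    # Beer-Lambert attenuation through the foliage above a point.
    return I0 * math.exp(-k * leaf_area_index)
```

Denser foliage intercepts exponentially more light, so shaded locations receive little; this gradient is what drives the model's preference for forming new leaves in bright spots.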


Subject(s)
Models, Biological , Plants/radiation effects , Beta vulgaris/growth & development , Beta vulgaris/radiation effects , Light , Mathematical Concepts , Phototropism , Plant Leaves/growth & development , Plant Leaves/radiation effects