Results 1 - 6 of 6
1.
Curr Biol ; 25(19): 2457-65, 2015 Oct 05.
Article in English | MEDLINE | ID: mdl-26412129

ABSTRACT

The human ability to understand speech is underpinned by a hierarchical auditory system whose successive stages process increasingly complex attributes of the acoustic input. It has been suggested that to produce categorical speech perception, this system must elicit consistent neural responses to speech tokens (e.g., phonemes) despite variations in their acoustics. Here, using electroencephalography (EEG), we provide evidence for this categorical phoneme-level speech processing by showing that the relationship between continuous speech and neural activity is best described when that speech is represented using both low-level spectrotemporal information and categorical labeling of phonetic features. Furthermore, the mapping between phonemes and EEG becomes more discriminative for phonetic features at longer latencies, in line with what one might expect from a hierarchical system. Importantly, these effects are not seen for time-reversed speech. These findings may form the basis for future research on natural language processing in specific cohorts of interest and for broader insights into how brains transform acoustic input into meaning.
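
The core method here, multivariate regression from stimulus representations to EEG, can be illustrated with a short sketch: a ridge-regression forward model mapping time-lagged stimulus features (a toy spectrogram plus one categorical phonetic-feature channel) to a single EEG channel. All shapes, the lag window, and the regularization strength below are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def lag_matrix(features, n_lags):
    """Stack time-lagged copies of a (samples x features) matrix."""
    n_samples, n_feat = features.shape
    X = np.zeros((n_samples, n_feat * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_feat:(lag + 1) * n_feat] = features[:n_samples - lag]
    return X

def fit_trf(features, eeg, n_lags=50, ridge=1e3):
    """Ridge regression: w = (X'X + lambda I)^-1 X'y."""
    X = lag_matrix(features, n_lags)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ eeg)

# Toy data: 10 s at 128 Hz, a 16-band spectrogram plus one binary
# phonetic-feature channel, regressed onto a single EEG channel.
rng = np.random.default_rng(0)
spec = rng.standard_normal((1280, 16))
phon = rng.integers(0, 2, (1280, 1)).astype(float)
stim = np.hstack([spec, phon])
eeg = rng.standard_normal((1280, 1))
w = fit_trf(stim, eeg)   # weights over (17 features x 50 lags)
```

Comparing the cross-validated prediction accuracy of a spectrogram-only model against a spectrogram-plus-phonetic-features model is the kind of model comparison the abstract describes.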


Subject(s)
Auditory Cortex/physiology; Electroencephalography/methods; Speech/physiology; Humans; Male; Phonetics; Sound
2.
J Neurosci ; 35(18): 7256-63, 2015 May 06.
Article in English | MEDLINE | ID: mdl-25948273

ABSTRACT

The human brain has evolved to operate effectively in highly complex acoustic environments, segregating multiple sound sources into perceptually distinct auditory objects. A recent theory seeks to explain this ability by arguing that stream segregation occurs primarily due to the temporal coherence of the neural populations that encode the various features of an individual acoustic source. This theory has received support from both psychoacoustic and functional magnetic resonance imaging (fMRI) studies that use stimuli which model complex acoustic environments. Termed stochastic figure-ground (SFG) stimuli, they are composed of a "figure" and background that overlap in spectrotemporal space, such that the only way to segregate the figure is by computing the coherence of its frequency components over time. Here, we extend these psychoacoustic and fMRI findings by using the greater temporal resolution of electroencephalography to investigate the neural computation of temporal coherence. We present subjects with modified SFG stimuli wherein the temporal coherence of the figure is modulated stochastically over time, which allows us to use linear regression methods to extract a signature of the neural processing of this temporal coherence. We do this under both active and passive listening conditions. Our findings show an early effect of coherence during passive listening, lasting from ∼115 to 185 ms post-stimulus. When subjects are actively listening to the stimuli, these responses are larger and last longer, up to ∼265 ms. These findings provide evidence for early and preattentive neural computations of temporal coherence that are enhanced by active analysis of an auditory scene.
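
The linear-regression analysis described, extracting a neural response to a stochastically modulated coherence level, amounts to estimating an impulse response from a single continuous regressor. A hedged sketch follows, with a made-up coherence signal and an assumed 128 Hz sampling rate; the lag window and regularization are illustrative only.

```python
import numpy as np

fs = 128                                   # assumed EEG sampling rate (Hz)
n = fs * 60                                # one minute of data
rng = np.random.default_rng(1)
coherence = rng.uniform(0, 1, n)           # stand-in for the modulated figure coherence
# Synthetic "EEG": a smoothed version of the regressor plus noise.
eeg = np.convolve(coherence, np.hanning(20), mode="same") + rng.standard_normal(n)

lags = np.arange(int(0.3 * fs))            # 0-300 ms, spanning the reported ~115-265 ms effects
X = np.column_stack([np.roll(coherence, L) for L in lags])
for i, L in enumerate(lags):
    X[:L, i] = 0.0                         # zero out samples wrapped around by np.roll
w = np.linalg.solve(X.T @ X + 1e2 * np.eye(len(lags)), X.T @ eeg)
# w[i] estimates the EEG response to coherence at a latency of lags[i] / fs seconds.
```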


Subject(s)
Acoustic Stimulation/methods; Auditory Pathways/physiology; Auditory Perception/physiology; Brain Mapping/methods; Psychoacoustics; Adult; Electroencephalography/methods; Female; Humans; Magnetic Resonance Imaging/methods; Male; Time Factors; Young Adult
3.
Annu Int Conf IEEE Eng Med Biol Soc ; 2015: 5740-3, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26737596

ABSTRACT

Recently, it has been shown to be possible to ascertain which speaker a subject is attending to in a cocktail party environment from single-trial (~60 s) electroencephalography (EEG) data. The attentional selection of most subjects could be decoded with very high accuracy (>90%). However, the performance of many subjects fell below what would be required for a potential brain-computer interface (BCI). One potential reason for this is that activity related to the stimuli may have a lower signal-to-noise ratio on the scalp for some subjects than for others. Independent component analysis (ICA) is a commonly used method for denoising EEG data. However, its effective use often requires a subjective choice by the experimenter as to which independent components (ICs) to retain and which to reject. Algorithms do exist to automatically determine the reliability of ICs; however, they provide no information about the components' relevance to the task at hand. Here we introduce a novel method for automatically selecting ICs that are relevant for decoding attentional selection. In doing so, we show a significant increase in classification accuracy at all test data durations from 60 s to 10 s. These findings have implications for the future development of naturalistic and user-friendly BCIs, as well as for smart hearing aids.
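
One plausible reading of "selecting ICs relevant for decoding" is to score each independent component against the attended-speech stimulus and retain only the best scorers before back-projection. The sketch below uses an absolute-correlation criterion and a fixed component count; both are assumptions for illustration, not the paper's actual selection algorithm.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
eeg = rng.standard_normal((7680, 32))    # 60 s at 128 Hz, 32 channels (made up)
envelope = rng.standard_normal(7680)     # attended-speech envelope (stand-in)

ica = FastICA(n_components=32, random_state=0, max_iter=500)
sources = ica.fit_transform(eeg)         # (samples, components)

# Score each component by |correlation| with the stimulus envelope.
scores = np.abs([np.corrcoef(sources[:, k], envelope)[0, 1]
                 for k in range(sources.shape[1])])
keep = np.argsort(scores)[-8:]           # retain the 8 most stimulus-relevant ICs

cleaned = np.zeros_like(sources)
cleaned[:, keep] = sources[:, keep]
eeg_denoised = ica.inverse_transform(cleaned)   # project back to channel space
```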


Subject(s)
Attention; Algorithms; Brain; Brain-Computer Interfaces; Electroencephalography; Humans; Reproducibility of Results; Signal Processing, Computer-Assisted; Signal-To-Noise Ratio
4.
Cereb Cortex ; 25(7): 1697-706, 2015 Jul.
Article in English | MEDLINE | ID: mdl-24429136

ABSTRACT

How humans solve the cocktail party problem remains unknown. However, progress has been made recently thanks to the realization that cortical activity tracks the amplitude envelope of speech. This has led to the development of regression methods for studying the neurophysiology of continuous speech. One such method, known as stimulus reconstruction, has been successfully utilized with cortical surface recordings and magnetoencephalography (MEG). However, the former is invasive and gives a relatively restricted view of processing along the auditory hierarchy, whereas the latter is expensive and rare. Thus, it would be extremely useful for research in many populations if stimulus reconstruction were effective using electroencephalography (EEG), a widely available and inexpensive technology. Here we show that single-trial (≈60 s) unaveraged EEG data can be decoded to determine attentional selection in a naturalistic multispeaker environment. Furthermore, we show a significant correlation between our EEG-based measure of attention and performance on a high-level attention task. In addition, by attempting to decode attention at individual latencies, we identify neural processing at ∼200 ms as being critical for solving the cocktail party problem. These findings open up new avenues for studying the ongoing dynamics of cognition using EEG and for developing effective and natural brain-computer interfaces.
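
Stimulus reconstruction inverts the forward mapping: a linear decoder maps time-lagged multichannel EEG back to a speech envelope, and attention is classified by which speaker's envelope correlates best with the reconstruction. Below is a minimal sketch with synthetic data; shapes, lags, and regularization are assumptions, and a real analysis would train and test on separate trials rather than in-sample as done here.

```python
import numpy as np

def lagged(eeg, n_lags):
    """Stack time-lagged copies of a (samples x channels) EEG matrix."""
    n, c = eeg.shape
    X = np.zeros((n, c * n_lags))
    for L in range(n_lags):
        X[L:, L * c:(L + 1) * c] = eeg[:n - L]
    return X

rng = np.random.default_rng(3)
n, chans, n_lags = 7680, 32, 32          # 60 s at 128 Hz (made up)
env_attended = rng.standard_normal(n)
env_unattended = rng.standard_normal(n)
# Synthetic EEG that weakly carries the attended envelope on every channel.
eeg = env_attended[:, None] + 5 * rng.standard_normal((n, chans))

X = lagged(eeg, n_lags)
g = np.linalg.solve(X.T @ X + 1e4 * np.eye(X.shape[1]), X.T @ env_attended)

recon = X @ g                            # reconstructed envelope
r_att = np.corrcoef(recon, env_attended)[0, 1]
r_unatt = np.corrcoef(recon, env_unattended)[0, 1]
print("decoded:", "attended" if r_att > r_unatt else "unattended")
```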


Subject(s)
Attention/physiology; Brain/physiology; Electroencephalography/methods; Signal Processing, Computer-Assisted; Speech Perception/physiology; Acoustic Stimulation; Adult; Female; Humans; Male; Neuropsychological Tests; Time Factors
5.
Article in English | MEDLINE | ID: mdl-25570209

ABSTRACT

Recently, it has been shown to be possible to ascertain the target of a subject's attention in a cocktail party environment from single-trial (~60 s) electroencephalography (EEG) data. Specifically, this was shown in the context of a dichotic listening paradigm where subjects were cued to attend to a story in one ear while ignoring a different story in the other and were required to answer questions on both stories. This paradigm resulted in a high decoding accuracy that correlated with task performance across subjects. Here, we extend this finding by showing that the ability to accurately decode attentional selection in a dichotic speech paradigm is robust to the particular attention task at hand. Subjects attended to one of two dichotically presented stories under four task conditions, which required them to 1) answer questions on the content of both stories, 2) detect irregular frequency fluctuations in the voice of the attended speaker, 3) answer questions on both stories and detect frequency fluctuations in the attended story, and 4) detect target words in the attended story. All four tasks led to high decoding accuracy (~89%). These results offer new possibilities for creating user-friendly brain-computer interfaces (BCIs).
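
As a quick sanity check on figures like the ~89% reported here, one can ask how far such an accuracy sits above the 50% chance level of a two-speaker decision for a given number of trials. The trial count below (30 per condition) is an assumption for illustration only.

```python
from scipy.stats import binomtest

n_trials = 30                              # assumed trials per condition
n_correct = round(0.89 * n_trials)         # ~27 of 30 trials decoded correctly
result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"{n_correct}/{n_trials} correct, p = {result.pvalue:.2e}")
```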


Subject(s)
Attention/physiology; Auditory Perception/physiology; Electroencephalography/methods; Humans; Task Performance and Analysis
6.
Article in English | MEDLINE | ID: mdl-24110309

ABSTRACT

Traditionally, the use of electroencephalography (EEG) to study the neural processing of natural stimuli in humans has been hampered by the need to repeatedly present discrete stimuli. Progress has been made recently through the realization that cortical population activity tracks the amplitude envelope of speech stimuli. This has led to studies using linear regression methods that allow the presentation of continuous speech. One such method, known as stimulus reconstruction, has so far only been utilized in multi-electrode cortical surface recordings and magnetoencephalography (MEG). Here, in two studies, we show that such an approach is also possible with EEG, despite the poorer signal-to-noise ratio of the data. In the first study, we show that it is possible to decode attention in a naturalistic cocktail party scenario on a single-trial (≈60 s) basis. In the second, we show that the cortical representation of the auditory speech envelope is more robust when accompanied by visual speech. The sensitivity of this inexpensive, widely accessible technology for the online monitoring of natural stimuli has implications for the design of future studies of the cocktail party problem and for the implementation of EEG-based brain-computer interfaces.
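
The envelope tracking that this line of work relies on typically starts with a standard extraction step: take the magnitude of the analytic signal, low-pass filter it, and downsample to the EEG rate. A minimal sketch follows; the cutoff frequency and both sampling rates are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt, resample

fs_audio = 16000                               # assumed audio sampling rate (Hz)
speech = np.random.default_rng(5).standard_normal(fs_audio * 10)  # 10 s stand-in

envelope = np.abs(hilbert(speech))             # magnitude of the analytic signal
sos = butter(3, 8, fs=fs_audio, output="sos")  # low-pass at an assumed 8 Hz
envelope = sosfiltfilt(sos, envelope)
envelope_128 = resample(envelope, 128 * 10)    # match a 128 Hz EEG recording
```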


Subject(s)
Attention/physiology; Electroencephalography/methods; Speech/physiology; Visual Perception/physiology; Acoustic Stimulation; Adult; Behavior; Female; Humans; Male