Results 1 - 7 of 7
1.
J Neural Eng; 14(3): 036020, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28384124

ABSTRACT

OBJECTIVE: Conventional, multi-channel scalp electroencephalography (EEG) allows the identification of the attended speaker in concurrent-listening ('cocktail party') scenarios. This implies that EEG might provide valuable information for complementing hearing aids with a level of neuro-feedback. APPROACH: To investigate whether a listener's attentional focus can be detected from single-channel, hearing-aid-compatible EEG configurations, we recorded EEG from three electrodes inside the ear canal ('in-Ear-EEG') and additionally from 64 electrodes on the scalp. In two different concurrent listening tasks, participants (n = 7) were fitted with individualized in-Ear-EEG pieces and were asked to attend either to one of two dichotically-presented, concurrent tone streams or to one of two diotically-presented, concurrent audiobooks. A forward encoding model was trained to predict the EEG response at single EEG channels. MAIN RESULTS: Each individual participant's attentional focus could be detected from the single-channel EEG response recorded from a short-distance configuration consisting only of a single in-Ear-EEG electrode and an adjacent scalp-EEG electrode. The differences in neural responses to attended and ignored stimuli were consistent in morphology (i.e., polarity and latency of components) across subjects. SIGNIFICANCE: In sum, our findings show that the EEG response from a single-channel, hearing-aid-compatible configuration provides valuable information for identifying a listener's focus of attention.


Subject(s)
Attention/physiology; Electroencephalography/methods; Evoked Potentials, Auditory/physiology; Pattern Recognition, Physiological/physiology; Pitch Perception/physiology; Speech Perception/physiology; Speech Production Measurement/methods; Adult; Algorithms; Female; Humans; Male; Middle Aged; Reproducibility of Results; Sensitivity and Specificity
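
The attention-decoding approach in entry 1 lends itself to a compact illustration. Below is a minimal, hypothetical sketch of a forward encoding (temporal response function) model that predicts a single EEG channel from the lagged envelope of a speech stream and compares predictive accuracy between streams; the sampling rate, the 0-250 ms lag window, the ridge penalty, and all variable names are assumptions for illustration, not the paper's parameters.

```python
# Hedged sketch of a forward encoding model for auditory attention decoding.
# Random arrays stand in for real speech envelopes and in-ear EEG recordings.
import numpy as np
from sklearn.linear_model import Ridge

def lagged_features(envelope, n_lags):
    """Stack time-lagged copies of a stimulus envelope (samples x lags)."""
    X = np.zeros((len(envelope), n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = envelope[:len(envelope) - lag]
    return X

fs = 125                       # assumed sampling rate (Hz)
n_lags = int(0.25 * fs)        # lags spanning an assumed 0-250 ms window

rng = np.random.default_rng(0)
env_attended = rng.standard_normal(fs * 60)   # stand-in attended envelope
env_ignored = rng.standard_normal(fs * 60)    # stand-in ignored envelope
eeg_channel = rng.standard_normal(fs * 60)    # single short-distance channel

# Train the encoding model on the attended stream (in practice this would
# use cross-validation on held-out data rather than the training segment).
model = Ridge(alpha=1.0)
model.fit(lagged_features(env_attended, n_lags), eeg_channel)

# The stream whose encoding model predicts the EEG better is taken as the
# attended one.
for name, env in [("attended", env_attended), ("ignored", env_ignored)]:
    pred = model.predict(lagged_features(env, n_lags))
    r = np.corrcoef(pred, eeg_channel)[0, 1]
    print(f"{name}: r = {r:.3f}")
```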
2.
Netw Neurosci; 1(2): 166-191, 2017.
Article in English | MEDLINE | ID: mdl-29911668

ABSTRACT

Perceptual decisions vary in the speed at which we make them. Evidence suggests that translating sensory information into perceptual decisions relies on distributed, interacting neural populations, with decision speed hinging on power modulations of neural oscillations. Yet the dependence of perceptual decisions on the large-scale network organization of coupled neural oscillations has remained elusive. We measured magnetoencephalographic signals in human listeners who judged acoustic stimuli composed of carefully titrated clouds of tone sweeps. These stimuli were used in two task contexts, in which participants judged either the overall pitch or the direction of the tone sweeps. We traced the large-scale network dynamics of the source-projected neural oscillations on a trial-by-trial basis using power-envelope correlations and graph-theoretical network discovery. In both tasks, faster decisions were predicted by higher segregation and lower integration of coupled beta-band (∼16-28 Hz) oscillations. We also uncovered the brain network states that promoted faster decisions in either lower-order auditory or higher-order control brain areas. Specifically, decision speed in judging the tone-sweep direction critically relied on the nodal network configurations of anterior temporal, cingulate, and middle frontal cortices. Our findings suggest that global network communication during perceptual decision-making is implemented in the human brain by large-scale couplings between beta-band neural oscillations.
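
As a rough illustration of the pipeline in entry 2, the sketch below computes beta-band power-envelope correlations between mock source time series and derives simple graph-theoretical summaries. The band edges, the 90th-percentile edge threshold, and the use of networkx clustering and global efficiency as proxies for segregation and integration are assumptions, not the authors' exact methods.

```python
# Illustrative power-envelope coupling + graph metrics on mock source data.
import numpy as np
import networkx as nx
from scipy.signal import butter, filtfilt, hilbert

fs = 250
rng = np.random.default_rng(1)
sources = rng.standard_normal((40, fs * 10))   # 40 mock source time series

# Band-pass to the beta band (~16-28 Hz) and take the power envelopes.
b, a = butter(4, [16, 28], btype="bandpass", fs=fs)
beta = filtfilt(b, a, sources, axis=1)
envelopes = np.abs(hilbert(beta, axis=1))

# Envelope correlation matrix -> thresholded binary graph.
corr = np.corrcoef(envelopes)
np.fill_diagonal(corr, 0.0)
threshold = np.percentile(corr, 90)            # keep the strongest ~10% of edges
G = nx.from_numpy_array((corr > threshold).astype(int))

segregation = nx.average_clustering(G)         # proxy for network segregation
integration = nx.global_efficiency(G)          # proxy for network integration
print(f"segregation={segregation:.3f}, integration={integration:.3f}")
```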

3.
Front Psychol; 5: 1422, 2014.
Article in English | MEDLINE | ID: mdl-25566113

ABSTRACT

Various aspects of linguistic experience influence the way we segment, represent, and process speech signals. The Japanese phonetic and orthographic systems represent geminate (double) consonants (e.g., /ss/, /kk/) in a way that is unique among languages: a single abstract representation characterizes the first part of a geminate, despite its two acoustically distinct realizations (silence in the case of stop consonants and elongation in the case of fricatives). The current study tests whether this discrepancy between abstract representation and acoustic realization influences how native speakers of Japanese perceive geminate consonants. The experiments used pseudo-words containing either the geminate consonant /ss/ or a manipulated version in which the first part was replaced by silence (/_s/). The sound /_s/ is acoustically similar to /ss/, yet does not occur in everyday speech. Japanese listeners showed a bias to group these two types into the same category, while Italian and Dutch listeners distinguished them. The results thus confirm that distinguishing fricative geminates containing silence from those with sustained frication is not crucial for Japanese native listening. Based on this observation, we propose that native speakers of Japanese tend to segment geminate consonants into two parts and that the first portion of fricative geminates is perceptually similar to a silent interval. This representation is compatible with both Japanese orthography and phonology. Unlike previous studies, which were inconclusive as to how native speakers segment geminate consonants, our study demonstrates a relatively strong effect of Japanese-specific listening. The current experimental methods may thus open new lines of investigation into the relationship between the development of phonological representations, orthography, and speech perception.
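
The stimulus manipulation in entry 3 can be sketched in a few lines: replace the first portion of a fricative geminate /ss/ with silence to create the /_s/ version. The file name and splice points below are hypothetical; in practice they would be set from the actual recordings.

```python
# Hypothetical sketch of creating the /_s/ stimulus from a /ss/ recording.
import soundfile as sf

audio, fs = sf.read("pseudo_word_ss.wav")   # hypothetical pseudo-word recording

gem_start = int(0.20 * fs)   # assumed onset of the geminate (200 ms)
gem_mid = int(0.30 * fs)     # assumed end of the first /s/ portion (300 ms)

manipulated = audio.copy()
manipulated[gem_start:gem_mid] = 0.0        # silence the first portion of /ss/

sf.write("pseudo_word_silence_s.wav", manipulated, fs)
```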

4.
PLoS One; 8(7): e68261, 2013.
Article in English | MEDLINE | ID: mdl-23874567

ABSTRACT

Brain-computer interfaces (BCIs) are systems that use real-time analysis of neuroimaging data to determine the mental state of their user for purposes such as providing neurofeedback. Here, we investigate the feasibility of a BCI based on speech perception. Multivariate pattern classification methods were applied to single-trial EEG data collected during speech perception by native and non-native speakers. Two principal questions were asked: 1) Can differences in the perceived categories of pairs of phonemes be decoded at the single-trial level? 2) Can these same categorical differences be decoded across participants, within or between native-language groups? Results indicated that classification performance progressively increased with the categorical status (within, boundary, or across) of the stimulus contrast, and was also influenced by the native language of individual participants. Classifier performance showed strong relationships with traditional event-related potential measures and behavioral responses. The cross-participant analysis indicated an overall increase in average classifier performance when classifiers were trained on data from all participants (native and non-native). A second cross-participant classifier trained only on data from native speakers led to an overall improvement in performance for native speakers, but a reduction in performance for non-native speakers. We also found that the native language of a given participant could be decoded from EEG data with accuracy above 80%. These results indicate that electrophysiological responses underlying speech perception can be decoded at the single-trial level, and that decoding performance systematically reflects graded changes in the responses related to the phonological status of the stimuli. This approach could be used in extensions of the BCI paradigm to support perceptual learning during second-language acquisition.


Subject(s)
Brain/physiology; Electrophysiological Phenomena/physiology; Speech Perception/physiology; Acoustic Stimulation/methods; Behavior/physiology; Brain-Computer Interfaces; Electroencephalography/methods; Humans; Language; Learning/physiology; Male; Multilingualism; Phonetics
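
Entry 4's within- and cross-participant decoding can be outlined with a standard scikit-learn pipeline. In the sketch below, random arrays stand in for preprocessed single-trial EEG features, and the scaler-plus-logistic-regression model and cross-validation schemes are illustrative assumptions rather than the authors' exact classifiers.

```python
# Hedged sketch of within- and cross-participant single-trial EEG decoding.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n_trials, n_features, n_participants = 300, 64, 10
X = rng.standard_normal((n_trials, n_features))     # single-trial EEG features
y = rng.integers(0, 2, n_trials)                    # perceived phoneme category
groups = rng.integers(0, n_participants, n_trials)  # participant labels

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Within-participant decoding: ordinary k-fold cross-validation.
within = cross_val_score(clf, X, y, cv=5).mean()

# Cross-participant decoding: train on all-but-one participant and test on
# the held-out participant (leave-one-group-out).
across = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut()).mean()
print(f"within: {within:.2f}, across participants: {across:.2f}")
```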
5.
Front Neurosci; 7: 265, 2013.
Article in English | MEDLINE | ID: mdl-24415996

ABSTRACT

Multivariate pattern classification methods are increasingly applied to neuroimaging data in the context of both fundamental research and brain-computer interfacing. Such methods provide a framework for interpreting measurements made at the single-trial level with respect to a set of two or more distinct mental states. Here, we define an approach in which the output of a binary classifier trained on data from an auditory mismatch paradigm can be used for online tracking of perception and as a neurofeedback signal. The auditory mismatch paradigm is known to induce distinct perceptual states related to the presentation of high- and low-probability stimuli, which are reflected in event-related potential (ERP) components such as the mismatch negativity (MMN). The first part of this paper illustrates how pattern classification methods can be applied to data collected in an MMN paradigm, including discussion of the optimization of preprocessing steps, the interpretation of features, and how the performance of these methods generalizes across individual participants and measurement sessions. We then show that the output of these decoding methods can be used in online settings as a continuous index of single-trial brain activation underlying perceptual discrimination. We conclude by discussing several potential domains of application, including neurofeedback, cognitive monitoring, and passive brain-computer interfaces.
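
A minimal sketch of the online use described in entry 5: train a binary classifier on labeled standard/deviant epochs, then map each incoming single-trial epoch to a graded value (its distance from the decision boundary) that can serve as a continuous neurofeedback index. The LDA classifier and mock epoch dimensions are assumptions.

```python
# Hedged sketch: a classifier's graded output as an online mismatch index.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
X_train = rng.standard_normal((200, 128))  # mock epochs x features
y_train = rng.integers(0, 2, 200)          # 0 = standard, 1 = deviant

clf = LinearDiscriminantAnalysis()
clf.fit(X_train, y_train)

# Online phase: each incoming single-trial epoch is mapped to a continuous
# value (signed distance from the decision boundary) rather than a hard label,
# usable as a neurofeedback or monitoring signal.
for trial in range(5):
    epoch = rng.standard_normal((1, 128))
    index = clf.decision_function(epoch)[0]
    print(f"trial {trial}: discrimination index = {index:+.2f}")
```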

6.
Neuroreport; 23(11): 653-7, 2012 Aug 01.
Article in English | MEDLINE | ID: mdl-22692551

ABSTRACT

The present study used electrophysiological and behavioral measures to investigate the perception of an English stop consonant contrast by native English listeners and by native Dutch listeners who were highly proficient in English. A /ba/-/pa/ continuum was created from a naturally produced /pa/ token by removing successive periods of aspiration, thus reducing the voice onset time. Although aspiration is a relevant cue for distinguishing voiced and unvoiced labial stop consonants (/b/ and /p/) in English, prevoicing is the primary cue used to distinguish between these categories in Dutch. In the electrophysiological experiment, participants listened to oddball sequences containing the standard /pa/ stimulus and one of three deviant stimuli while the mismatch-negativity response was measured. Participants then completed an identification task on the same stimuli. The results showed that native English participants were more sensitive to reductions in aspiration than native Dutch participants, as indicated by shifts in the category boundary, by differing within-group patterns of mismatch-negativity responses, and by larger mean evoked potential amplitudes in the native English group for two of the three deviant stimuli. This between-group difference in the sensorineural processing of aspiration cues indicates that native language experience alters the way in which the acoustic features of speech are processed in the auditory brain, even following extensive second-language training.


Subject(s)
Auditory Perception; Evoked Potentials, Auditory; Multilingualism; Phonetics; Speech Perception/physiology; Speech; Adult; Cues; Electroencephalography; Female; Humans; Male
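
The continuum construction in entry 6 can be sketched as successive deletions of aspiration from a natural /pa/ token, shortening the voice onset time step by step. The file name, burst and voicing landmarks, and 10 ms step size below are illustrative assumptions.

```python
# Hypothetical sketch of a /ba/-/pa/ continuum via aspiration removal.
import numpy as np
import soundfile as sf

audio, fs = sf.read("pa_token.wav")   # hypothetical natural /pa/ token

burst_end = int(0.010 * fs)   # assumed end of the release burst
voicing_on = int(0.070 * fs)  # assumed voicing onset (VOT of ~60 ms)
step = int(0.010 * fs)        # remove aspiration in 10 ms steps

for i, cut in enumerate(range(0, voicing_on - burst_end, step)):
    # Delete `cut` samples of aspiration just before voicing onset, reducing
    # the voice onset time and shifting the percept from /pa/ toward /ba/.
    shortened = np.concatenate([audio[:voicing_on - cut], audio[voicing_on:]])
    sf.write(f"continuum_step_{i}.wav", shortened, fs)
```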
7.
Psychol Res; 75(2): 107-21, 2011 Mar.
Article in English | MEDLINE | ID: mdl-20574662

ABSTRACT

A study was conducted to test the effect of two different forms of real-time visual feedback on expressive percussion performance. Conservatory percussion students performed imitations of recorded teacher performances while receiving high-level feedback on the expressive style of their performances, low-level feedback on the timing and dynamics of the performed notes, or no feedback. The high-level feedback was based on a Bayesian analysis of the performances, while the low-level feedback was based on the raw participant timing and dynamics data. Results indicated that neither form of feedback led to significantly smaller timing and dynamics errors. However, high-level feedback did lead to higher proficiency in imitating the expressive style of the target performances, as indicated by a probabilistic measure of expressive style. We conclude that, while potentially disruptive to the timing processes involved in music performance due to extraneous cognitive load, high-level visual feedback can improve participants' imitations of expressive performance features.


Subject(s)
Feedback, Sensory/physiology; Learning/physiology; Music; Psychomotor Performance/physiology; Humans; Young Adult
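
Entry 7's probabilistic measure of expressive style might be approximated as follows: model the teacher's per-note timing and dynamics deviations as Gaussians and score an imitation by its average log-likelihood under that model. This Gaussian sketch is an assumption made in the spirit of, not a reproduction of, the paper's Bayesian analysis.

```python
# Hedged sketch of a probabilistic expressive-style match score (mock data).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
teacher_timing = rng.normal(0.0, 0.02, 50)   # mock per-note timing deviations (s)
teacher_dynamics = rng.normal(70, 8, 50)     # mock per-note MIDI velocities

student_timing = rng.normal(0.0, 0.03, 50)
student_dynamics = rng.normal(65, 10, 50)

def style_score(student, teacher):
    """Average log-likelihood of student notes under the teacher's Gaussian."""
    mu, sigma = teacher.mean(), teacher.std(ddof=1)
    return norm.logpdf(student, mu, sigma).mean()

score = (style_score(student_timing, teacher_timing)
         + style_score(student_dynamics, teacher_dynamics))
print(f"expressive-style match score: {score:.2f}")
```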