Results 1 - 6 of 6
1.
Int J Audiol ; 50(5): 321-33, 2011 May.
Article in English | MEDLINE | ID: mdl-21473667

ABSTRACT

OBJECTIVE: To evaluate the impact of multi-talker babble on cortical event-related potentials (ERPs), specifically the N400, in a spoken semantic priming paradigm.
DESIGN: Participants listened in quiet and with background babble to word triplets, evaluating whether the third word was related to the preceding words. A temporo-spatial principal component analysis was conducted on ERPs to the first and second words (S1 and S2), processed without an overt behavioral response. One factor corresponded to the N400 and revealed greater processing negativity for unrelated as compared to related S2s in quiet and in babble.
STUDY SAMPLE: Twelve young adults with normal hearing.
RESULTS: Background babble had no significant impact on the N400 in the posterior region but increased neural processing negativity at anterior and central regions during the same timeframe. This differential processing negativity in babble occurred in response to S2 but not S1. Furthermore, background babble impacted processing negativity for related S2s more than unrelated S2s.
CONCLUSIONS: Results suggest that speech processing in a modestly degraded listening environment alters neural activity associated with auditory working memory, attention, and semantic processing in anterior and central scalp regions.
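The N400 priming effect described above is conventionally quantified as a difference wave: the trial-averaged ERP to unrelated S2s minus the average to related S2s, with greater negativity indexing the effect. A minimal sketch with synthetic data (all sizes, the time window, and the condition labels here are illustrative assumptions, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_time = 40, 500  # illustrative trial count and time samples

# Synthetic single-trial ERPs for related vs. unrelated S2 words;
# the unrelated condition is shifted negative to mimic an N400 effect.
related = rng.standard_normal((n_trials, n_time))
unrelated = rng.standard_normal((n_trials, n_time)) - 0.5

# Average across trials, then form the difference wave
# (unrelated minus related).
diff_wave = unrelated.mean(axis=0) - related.mean(axis=0)

# Mean amplitude in a nominal N400 window (here, samples 300-500);
# a negative value indicates the expected priming effect.
n400_effect = diff_wave[300:500].mean()
```

The same subtraction per scalp region would yield the anterior/central vs. posterior comparison reported in the results.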


Subject(s)
Evoked Potentials, Auditory , Noise , Speech Perception , Analysis of Variance , Female , Humans , Principal Component Analysis
2.
J Am Acad Audiol ; 20(7): 453-8, 2009.
Article in English | MEDLINE | ID: mdl-19928398

ABSTRACT

BACKGROUND: When listening to one speaker while another conversation is occurring simultaneously, we separate the competing sounds by processing physical cues such as common onset time, intensity, frequency harmonicity, and spatial location of the sound sources. Spatial location is determined in large part by differences in arrival of a sound at one ear versus the other ear, otherwise known as interaural time difference (ITD) or interaural phase difference (IPD). There is ample anecdotal evidence that middle-aged adults experience greater difficulty listening to speech in noise, even when their audiological evaluation does not reveal abnormal results. Furthermore, it has been shown that the frequency range for IPD processing is reduced in middle-aged adults compared to young adults, even though morphological changes in the auditory evoked potential (AEP) response were only observed in older adults.
PURPOSE: The purpose of the current study was to examine early aging effects (< 60 years) on IPD processing in concurrent sound segregation.
RESEARCH DESIGN: We examined the change in the AEP evoked by detection of a mistuned and/or phase-shifted second harmonic during the last 1500 msec of a 3000 msec amplitude-modulated harmonic complex. A passive listening paradigm was used.
STUDY SAMPLE: Ten young (21-35 years) and 11 middle-aged (48-57 years) adults with normal hearing were included in the study.
DATA COLLECTION AND ANALYSIS: Scalp electroencephalographic activity was recorded from 63 electrodes. A temporospatial principal component analysis was conducted. Spatial factor scores of individual spatial factors were the dependent variable in separate mixed-design ANOVAs for each temporal factor of interest. Stimulus type was the within-subject independent variable, and age group was the between-subject independent variable.
RESULTS: Results indicated a delay in the upward P2 slope and the P2 peak latency to a sudden phase shift in the second harmonic of a harmonic complex in middle-aged adults compared to young adults. This AEP difference increased as mistuning (as a second grouping cue) decreased and remained evident when the IPD was the only grouping cue.
CONCLUSIONS: We conclude that our findings reflect neurophysiologic differences between young and middle-aged adults for IPD processing in concurrent sound segregation.
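A temporospatial PCA of the kind described here is typically run in two steps: a temporal PCA in which time points are the variables and subject-by-electrode waveforms are the observations, followed by a spatial PCA on each temporal factor's scores with electrodes as the variables. A minimal sketch on synthetic data (array sizes match the study's 21 subjects and 63 electrodes, but the data, factor counts, and use of scikit-learn's `PCA` are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_subj, n_elec, n_time = 21, 63, 300  # subjects x electrodes x time samples
erp = rng.standard_normal((n_subj, n_elec, n_time))  # synthetic ERPs

# Step 1: temporal PCA -- variables are time points, observations are
# the individual subject x electrode waveforms.
temporal = PCA(n_components=5)
tf_scores = temporal.fit_transform(erp.reshape(n_subj * n_elec, n_time))

# Step 2: spatial PCA on each temporal factor -- variables are electrodes,
# observations are subjects.
spatial_scores = []
for tf in range(5):
    per_subject = tf_scores[:, tf].reshape(n_subj, n_elec)
    spatial = PCA(n_components=3)
    spatial_scores.append(spatial.fit_transform(per_subject))

# spatial_scores[tf] is subjects x spatial factors: one score per subject
# per spatial factor, suitable as the dependent variable in a
# mixed-design ANOVA as in the abstract.
```

Published ERP work often uses Promax-rotated covariance-based PCA rather than plain PCA; the rotation step is omitted here for brevity.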


Subject(s)
Age Factors , Cues , Evoked Potentials, Auditory/physiology , Perceptual Masking/physiology , Sound Localization/physiology , Acoustic Stimulation , Adult , Aged , Audiometry, Evoked Response , Differential Threshold/physiology , Female , Humans , Male , Middle Aged , Reaction Time , Young Adult
3.
Ear Hear ; 28(3): 320-31, 2007 Jun.
Article in English | MEDLINE | ID: mdl-17485981

ABSTRACT

OBJECTIVE: The goal of the current study was to identify neurophysiological indices of binaural processing in children with no history of hearing loss or listening problems. The results will guide our efforts to diagnose children for whom impaired binaural processing may contribute to difficulty understanding speech in adverse listening environments. Our main aim was to document the cortical auditory evoked potentials (AEPs) in response to interaural intensity differences (IIDs) in children. It is well known, however, that the morphology of AEPs is substantially different in children and adults. Comparison of AEPs in response to IIDs between children and adults allowed us to evaluate age-related differences in the AEP waveform of binaural processing.
DESIGN: Nine children (ages 7 yr 0 mo to 9 yr 4 mo) and 11 adults (23 to 34 yr) with normal hearing and no known or suspected neurological or academic problems listened to click stimuli under earphones. Click trains consisted of broadband noise of 1-msec duration with a click rate of 100 Hz. In the experimental condition (IID-present), 50-msec intervals containing an interaural intensity difference of 20 dB were introduced periodically in the continuous stream of otherwise diotic click trains. The diotic trains alternated in intensity between 50 and 70 dB peSPL. In the baseline condition (IID-absent), the same continuous diotic click stream utilized in the IID-present condition was presented with no IID. Finally, for comparison with existing literature on AEPs in children and adults, we presented monaural click trains of 50-msec duration, like those used in the IID stimulus (but with no continuous stream), to the left ear at 70 dB peSPL, with an interstimulus interval of 750 msec. Stimuli were presented in separate blocks for each stimulus type, and AEPs were recorded in a passive listening condition.
RESULTS: A prominent AEP activation was present in both age groups for the IID-present condition; the IID-absent condition did not evoke substantial AEPs. Adult waveform characteristics of the AEPs to monaural clicks and IID-present around 100 and 200 msec were comparable to previous reports. The children demonstrated the expected AEP activation patterns in response to monaural clicks (i.e., positivity around 100 msec, followed by prominent negativity around 250 msec); however, their AEP waveforms to IIDs mainly comprised a prolonged positivity around 200 to 250 msec after stimulus onset. A two-step temporal-spatial Principal Component Analysis (PCA) was used to evaluate the temporal (time) and spatial (electrode location) composition of the AEP waveforms in children and adults in response to IID-present and IID-absent conditions. Separate repeated-measures ANOVAs with factor scores as the dependent variable were conducted for each temporal factor (TF) representing the waveform deflections around 100, 200, and 250 msec (i.e., TF110, TF200, and TF255) at the frontocentral spatial factor (SF1). Significantly greater negative activation was observed in adults than in children in response to IID-present for TF110. The IID-present condition evoked a significantly greater waveform inflection for TF200 in both age groups than IID-absent. A positive-going activation for TF255 was observed in the IID-present condition in children but not in adults.
CONCLUSIONS: This study compared obligatory AEPs in response to binaural processing of IIDs in children and adults with normal hearing. The morphology of the AEP waveform in children was different for monaural clicks and IID-present stimuli. The difference between AEPs for monaural clicks and IID-present did not occur in adults. It is likely that polarity reversal of the AEPs in response to the IID accounts for the observed AEP morphology in children.
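The stimulus construction in the design above (a diotic 100 Hz click train into which a 50-msec, 20 dB IID interval is inserted) can be sketched numerically. The sample rate, stimulus duration, and the choice to attenuate one ear (rather than boost the other) are illustrative assumptions; the click duration, click rate, IID magnitude, and IID interval length follow the abstract:

```python
import numpy as np

fs = 48_000                  # sample rate (Hz); assumed for illustration
click_rate = 100             # clicks per second, as in the abstract
click_dur = int(0.001 * fs)  # 1-msec broadband noise clicks

def click_train(duration_s, rng):
    """Continuous train of 1-msec noise bursts at the given click rate."""
    n = int(duration_s * fs)
    out = np.zeros(n)
    period = fs // click_rate
    for start in range(0, n - click_dur, period):
        out[start:start + click_dur] = rng.uniform(-1.0, 1.0, click_dur)
    return out

rng = np.random.default_rng(1)
left = click_train(1.0, rng)
right = left.copy()          # diotic baseline: identical signals at both ears

# Insert a 50-msec interval with a 20 dB interaural intensity difference
# by attenuating one ear within that window.
iid_db = 20
start = int(0.5 * fs)
stop = start + int(0.050 * fs)
right[start:stop] *= 10 ** (-iid_db / 20)   # 20 dB = amplitude factor 0.1
```

Outside the marked window the two channels remain identical, matching the IID-absent baseline condition.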


Subject(s)
Evoked Potentials, Auditory/physiology , Hearing/physiology , Signal Detection, Psychological/physiology , Speech Perception/physiology , Adult , Audiometry/instrumentation , Child , Electrodes , Female , Humans , Male , Time Factors
4.
J Am Acad Audiol ; 16(5): 312-26, 2005 May.
Article in English | MEDLINE | ID: mdl-16119258

ABSTRACT

Long-latency ERP components were examined for scalp activation differences in children with poor and good listening skills in response to auditory movement created by interaural intensity differences (IIDs). Eighteen children were grouped based on a parent questionnaire (CHAPS; Smoski et al, 1998) and clinical evaluation by a licensed audiologist. Obligatory cortical responses were recorded to an auditory movement task and an auditory control task. Results showed the greatest activation at fronto-central electrode sites. P1, N1, and P2 showed no significant effects. Significant differences in N2 amplitude and latency were present between groups at the lateral electrode sites (FC3, FC4) in the auditory movement task but not in the auditory control task. More specifically, good listeners exhibited predominance of activation over the right hemisphere for left-moving sounds, whereas the poor listeners exhibited symmetric activation. These results suggest that abnormal hemispheric activation may be one of the reasons behind the poor listening skills observed in some school-aged children.


Subject(s)
Attention , Auditory Perception/physiology , Evoked Potentials, Auditory , Language Development Disorders/physiopathology , Acoustic Stimulation , Adolescent , Analysis of Variance , Child , Dichotic Listening Tests , Electroencephalography , Electrooculography , Female , Functional Laterality/physiology , Humans , Male
5.
Brain Res Cogn Brain Res ; 20(3): 427-37, 2004 Aug.
Article in English | MEDLINE | ID: mdl-15268920

ABSTRACT

In the current study, event-related potentials (ERPs) were utilized to assess whether ERP correlates would distinguish between prosodic and lexical-semantic information processed during the comprehension of a spoken affective message. To this end, we employed a standard oddball paradigm with stimuli varying in lexical-semantic or prosodic characteristics. An N400 component was obtained in response to all stimuli and conditions (non-targets and targets). Greater N400 negativity was observed in response to semantic as compared to prosodic stimuli. An anterior positive component (P3a) was larger for prosodic than for semantic targets. We also investigated whether an N400 and/or P3a component would be present when a stimulus carried both affective semantic and affective prosodic information. ERPs to targets in this condition showed a reduced N400 amplitude and a distinct anterior P3a component, significantly greater than the P3a in response to prosodic or semantic targets alone. Finally, a P3b component was evoked in response to targets, regardless of communicative dimension.


Subject(s)
Emotions/physiology , Evoked Potentials, Auditory/physiology , Semantics , Speech Perception/physiology , Acoustic Stimulation , Adult , Analysis of Variance , Brain Mapping , Female , Humans
6.
Neuroreport ; 15(3): 555-9, 2004 Mar 01.
Article in English | MEDLINE | ID: mdl-15094522

ABSTRACT

The present study investigated whether event-related potentials (ERPs) reflect non-voluntary vs. voluntary processing of emotional prosody. ERPs were obtained while participants processed emotional information non-voluntarily (i.e., while evaluating semantic characteristics of a stimulus) and voluntarily (i.e., while evaluating emotional characteristics of a stimulus). Results suggest that emotional prosody is processed around 160 ms after stimulus onset under non-voluntary processing conditions (when attention is diverted from the emotional meaning of the tone of voice) and around 360 ms under voluntary processing conditions. The findings support the notion that emotional prosody is processed non-voluntarily in the comprehension of a spoken message.


Subject(s)
Cognition/physiology , Emotions/physiology , Nonverbal Communication , Acoustic Stimulation , Adult , Data Interpretation, Statistical , Electroencephalography , Evoked Potentials/physiology , Female , Humans , Male