Results 1 - 7 of 7
1.
Hear Res ; 284(1-2): 6-15, 2012 Feb.
Article in English | MEDLINE | ID: mdl-22234161

ABSTRACT

Electrical artifacts caused by the cochlear implant (CI) contaminate electroencephalographic (EEG) recordings from implanted individuals and corrupt auditory evoked potentials (AEPs). Independent component analysis (ICA) is efficient in attenuating the electrical CI artifact, and AEPs can be successfully reconstructed. However, the manual selection of CI-artifact-related independent components (ICs) obtained with ICA is unsatisfactory, since it relies on expert judgment and is time consuming. We developed a new procedure that evaluates temporal and topographical properties of ICs and semi-automatically selects those components representing the electrical CI artifact. The CI Artifact Correction (CIAC) algorithm was tested on EEG data from two different studies. The first consists of published datasets from 18 CI users listening to environmental sounds. Compared to the manual IC selection performed by an expert, the sensitivity of CIAC was 91.7% and the specificity 92.3%. After CIAC-based attenuation of CI artifacts, a high correlation between age and N1-P2 peak-to-peak amplitude was observed in the AEPs, replicating previously reported findings and further confirming the algorithm's validity. In the second study, AEPs in response to pure-tone and white-noise stimuli from 12 CI users who had also participated in the first study were evaluated. CI artifacts were attenuated based on the IC selection performed semi-automatically by CIAC and manually by one expert. Again, a correlation between N1 amplitude and age was found. Moreover, a high test-retest reliability for AEP N1 amplitudes and latencies suggested that CIAC-based attenuation reliably preserves plausible individual response characteristics. We conclude that CIAC enables the objective and efficient attenuation of the CI artifact in EEG recordings, as it provided a reasonable reconstruction of individual AEPs. The systematic pattern of individual differences in N1 amplitudes and latencies observed with different stimuli at different sessions strongly suggests that CIAC can overcome the electrical artifact problem. Thus, CIAC facilitates the use of cortical AEPs as an objective measure of auditory rehabilitation.
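The abstract does not spell out the CIAC selection criteria, so the following is only a minimal sketch of the general idea (ICA decomposition, a simple per-component artifact metric, removal of flagged components), not the published algorithm. The metric (RMS energy inside versus outside the stimulation window) and the threshold of 3 are illustrative assumptions.

```python
# Minimal sketch of ICA-based CI-artifact attenuation (NOT the published CIAC
# algorithm): decompose the EEG, flag components whose energy is concentrated
# in the stimulation window, and reconstruct the signal without them.
import numpy as np
from sklearn.decomposition import FastICA

def attenuate_ci_artifact(eeg, stim_mask, ratio_threshold=3.0):
    """eeg: (n_samples, n_channels) array; stim_mask: boolean (n_samples,)
    marking samples recorded during electrical stimulation."""
    ica = FastICA(n_components=eeg.shape[1], random_state=0)
    sources = ica.fit_transform(eeg)               # (n_samples, n_components)

    # Illustrative artifact metric: energy concentrated in the stimulation window.
    rms_stim = np.sqrt(np.mean(sources[stim_mask] ** 2, axis=0))
    rms_rest = np.sqrt(np.mean(sources[~stim_mask] ** 2, axis=0))
    artifact_ics = np.where(rms_stim / rms_rest > ratio_threshold)[0]

    # Zero the flagged components and project back to channel space.
    sources[:, artifact_ics] = 0.0
    return ica.inverse_transform(sources), artifact_ics
```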


Subject(s)
Cochlear Implants , Evoked Potentials, Auditory , Acoustic Stimulation , Aged , Algorithms , Artifacts , Auditory Cortex/physiopathology , Cochlear Implants/statistics & numerical data , Deafness/physiopathology , Deafness/therapy , Electroencephalography/statistics & numerical data , Female , Humans , Male , Middle Aged
2.
Int J Audiol ; 47(7): 439-44, 2008 Jul.
Article in English | MEDLINE | ID: mdl-18574782

ABSTRACT

Auditory evoked potential (AEP) recordings often require subjects to ignore the stimuli and stay awake. In the present experiment, early (ABR), middle (MLR), and late latency (LLR) AEPs were recorded to compare the effects of five distracting tasks: (1) doing nothing with eyes open, (2) reading, (3) watching a movie, (4) solving a three-digit sum, and (5) doing nothing with eyes closed (or counting the stimuli for the LLR). Results showed that neither the amplitudes nor the latencies of the ABR, MLR, or LLR were affected by task. However, the amount of pre-stimulus activity (noise) and the rate of amplitude-based trace rejection were significantly and differently affected by the distracting task. For the ABR, the math task was the noisiest, whereas for the MLR the amount of noise was greatest when watching a movie. For the LLR, reading and watching a movie yielded the lowest percentages of rejected traces. In conclusion, the distracting task should be chosen according to the AEP being measured, so as to improve the quality of the AEP traces and thus reduce recording time.
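The two quality measures mentioned here could, for example, be computed per task from epoched EEG as in the sketch below. The 100 µV rejection criterion, the pre-stimulus window length, and all names are assumptions, not the study's parameters.

```python
# Minimal sketch: pre-stimulus RMS "noise" and amplitude-rejection rate per
# distracting task, from epoched data shaped (n_epochs, n_samples) in µV.
import numpy as np

def epoch_quality(epochs_uv, prestim_samples, reject_uv=100.0):
    prestim = epochs_uv[:, :prestim_samples]
    noise_rms = np.sqrt(np.mean(prestim ** 2, axis=1)).mean()   # mean pre-stimulus RMS
    rejected = np.any(np.abs(epochs_uv) > reject_uv, axis=1)    # amplitude criterion
    return noise_rms, 100.0 * rejected.mean()

# Hypothetical usage: epochs_by_task maps task name -> epoch array.
# for task, epochs in epochs_by_task.items():
#     rms, pct = epoch_quality(epochs, prestim_samples=50)
#     print(f"{task}: noise {rms:.2f} µV, rejected {pct:.1f}%")
```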


Subject(s)
Attention , Audiometry, Evoked Response/methods , Evoked Potentials, Auditory , Adult , Female , Humans , Male , Task Performance and Analysis
3.
Clin Neurophysiol ; 119(3): 576-586, 2008 Mar.
Article in English | MEDLINE | ID: mdl-18164659

ABSTRACT

OBJECTIVE: To investigate the long-term cortical changes in auditory evoked potential (AEP) asymmetries associated with profound unilateral deafness. METHODS: Electroencephalographic (EEG) recordings from 68 channels were used to measure auditory cortex responses to monaural stimulation from 7 unilaterally deaf patients and 7 audiogram-matched controls. Source localization of the AEP N100 response was carried out and regional source waveform amplitude and latency asymmetries were analysed for activity in the N100 latency range and for the middle latency response (MLR) range. RESULTS: Asymmetry indices (contralateral-ipsilateral)/(contralateral+ipsilateral) showed that matched control subjects, like normally hearing participants, produced activity in the N100 latency range that was more contralaterally dominant for left compared to right ear stimulation. Contrary to expectation, source waveforms and asymmetry indices in the MLR and N100 latency range were similar for unilaterally deaf patients, their matched controls and a group of normally hearing participants. CONCLUSIONS: Regional source waveform analysis revealed no evidence of systematic cortical changes in hemispheric asymmetries associated with long-term unilateral deafness. It is possible that a reorganization of cortical asymmetries to a 'normal' pattern had taken place in the years between deafness and testing. SIGNIFICANCE: Electrophysiological measures of auditory hemispheric asymmetries do not suggest long-term cortical reorganisation as a result of profound unilateral deafness.
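The asymmetry index quoted above is a normalized contralateral-ipsilateral difference; a minimal sketch for clarity, with the example amplitudes being illustrative rather than values from the study:

```python
# Asymmetry index from the abstract: (contralateral - ipsilateral) /
# (contralateral + ipsilateral); +1 = fully contralateral activity,
# -1 = fully ipsilateral, 0 = symmetric.
import numpy as np

def asymmetry_index(contra, ipsi):
    contra, ipsi = np.asarray(contra, float), np.asarray(ipsi, float)
    return (contra - ipsi) / (contra + ipsi)

print(asymmetry_index(5.2, 3.1))  # e.g. N100 source amplitudes in µV -> ~0.25
```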


Subject(s)
Deafness/physiopathology , Evoked Potentials, Auditory/physiology , Functional Laterality/physiology , Acoustic Stimulation/methods , Auditory Threshold/physiology , Brain Mapping , Electroencephalography/methods , Female , Humans , Male , Middle Aged , Sound Localization/physiology
4.
Psychophysiology ; 45(1): 20-4, 2008 Jan.
Article in English | MEDLINE | ID: mdl-17910729

ABSTRACT

Little is known about how the auditory cortex adapts to the artificial input provided by a cochlear implant (CI). We report the case of a 71-year-old profoundly deaf man who has successfully used a unilateral CI for 4 years. Independent component analysis (ICA) of 61-channel EEG recordings could separate CI-related artifacts from auditory evoked potentials (AEPs), even though the artifact-producing CI stimulation was perfectly time-locked to the AEPs it elicited. AEP dipole source localization revealed contralaterally larger amplitudes in the P1-N1 range, similar to normal-hearing individuals. In contrast to normal-hearing individuals, the man with the CI showed a 20-ms shorter N1 latency ipsilaterally. We conclude that ICA allows the detailed study of AEPs in CI users.
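As a rough illustration of the kind of measurement reported here (N1 amplitude and latency from an averaged AEP), a minimal peak-picking sketch follows; the 80-150 ms search window, the sampling step, and the toy waveform are assumptions, not values from the case study.

```python
# Minimal sketch: N1 peak amplitude/latency from an averaged AEP, taken as the
# most negative sample in a fixed search window.
import numpy as np

def n1_peak(aep_uv, times_ms, window=(80.0, 150.0)):
    idx = np.where((times_ms >= window[0]) & (times_ms <= window[1]))[0]
    peak = idx[np.argmin(aep_uv[idx])]          # most negative sample in window
    return aep_uv[peak], times_ms[peak]

# Example with a synthetic epoch sampled every 2 ms from -100 to 400 ms:
times = np.arange(-100, 400, 2.0)
aep = -4.0 * np.exp(-((times - 110) / 20.0) ** 2)   # toy N1-like deflection
print(n1_peak(aep, times))                          # ~(-4.0 µV, 110 ms)
```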


Subject(s)
Cochlear Implantation/psychology , Evoked Potentials, Auditory/physiology , Sound Localization/physiology , Acoustic Stimulation , Aged , Artifacts , Data Interpretation, Statistical , Deafness/physiopathology , Deafness/psychology , Electroencephalography , Humans , Male , Models, Neurological
5.
Clin Neurophysiol ; 118(6): 1274-85, 2007 Jun.
Article in English | MEDLINE | ID: mdl-17462945

ABSTRACT

OBJECTIVE: To investigate brain asymmetries of the auditory evoked potential (AEP) N100, T-complex, and P200 in response to monaural stimulation. METHODS: Electroencephalographic (EEG) recordings from 68 channels were used to record auditory cortex responses to monaural stimulation from normal-hearing participants (N = 16). White-noise stimuli and 1000 Hz tones were repeatedly presented to either the left or right ear. Source localization of the AEP N100 response was carried out with two symmetric regional sources placed into left and right auditory cortex. Regional source waveform amplitude and latency asymmetries were analyzed for tangential and radial activity explaining the N100, T-complex and P200 AEP components. RESULTS: Regional source waveform analysis showed that early tangential activity in the N100 latency range exhibited larger contralateral amplitudes and shorter latencies for both tone and noise monaural stimuli. Lateralized activity was significantly greater when tones or noise was presented to the left compared to the right ear (p < .001). The ear difference in the degree of lateralization arose due to hemispheric asymmetry. Significantly more tangential activity in the N100 latency range was recorded in the right compared to the left hemisphere in response to stimulation by either tones or noise (p < .001). Neither the radial activity modelling the T-complex, nor activity modelling the P200, showed robust ear or hemisphere differences. CONCLUSIONS: Regional source waveform analysis revealed that the extent of auditory evoked potential asymmetries depends on the ear and hemisphere examined. These findings have implications for future studies utilizing AEP asymmetries to examine normal auditory function or experience-related changes in the auditory cortex. SIGNIFICANCE: The right compared to the left auditory cortex may be more involved in processing monaurally presented tone and noise stimuli.


Subject(s)
Auditory Perception/physiology , Brain Mapping , Evoked Potentials, Auditory/physiology , Functional Laterality/physiology , Reaction Time/physiology , Adolescent , Adult , Analysis of Variance , Auditory Cortex/physiology , Electroencephalography/methods , Female , Humans , Male , Models, Theoretical
6.
Hear Res ; 203(1-2): 122-33, 2005 May.
Article in English | MEDLINE | ID: mdl-15855037

ABSTRACT

Much research has shown that transient evoked otoacoustic emissions (TEOAEs) can successfully separate normally hearing and hearing impaired populations. However, this finding comes from TEOAEs recorded using conventional averaging at low stimulation rates. Presenting clicks according to maximum length sequences (MLSs) enables TEOAEs to be recorded at very high stimulation rates. This study compares conventional and MLS TEOAEs in normally hearing and hearing impaired adults. Stimulus presentation rates of 40 clicks/s (conventional) and 5000 clicks/s (MLS) were used. The 'linear' TEOAEs (i.e., the directly recorded waveforms), the 'level nonlinear' (LNL) TEOAEs (i.e., those derived from two linear waveforms separated by a known difference in stimulus level) and the 'rate nonlinear' (RNL) TEOAEs (i.e., obtained by subtracting the emission recorded at 5000 clicks/s from that at 40 clicks/s at a fixed stimulus level) were examined to compare how they separated the normally hearing and hearing impaired subjects. When compared to the results for both conventional and MLS linear or LNL TEOAEs, the present study found that the RNL results best reflected the patients' hearing loss, although the conventional linear and LNL responses performed nearly as well. Only two impaired ears (2%), both with a best threshold of 30 dB HL at 1000 Hz, produced RNL responses with amplitude within the range produced by 95% of the normal group.
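The derived responses described above are simple waveform subtractions; a minimal sketch follows. The abstract does not give the exact derivation, so the level-scaling step for the LNL response is an assumption (a perfectly linear emission would cancel after scaling by the known level difference), as are the function and variable names.

```python
# Minimal sketch of the two derived ("nonlinear") TEOAE responses described above.
import numpy as np

def level_nonlinear(teoae_high, teoae_low, level_diff_db):
    """LNL: high-level waveform minus the low-level waveform scaled up by the
    known level difference; a purely linear emission would give ~zero."""
    gain = 10.0 ** (level_diff_db / 20.0)
    return np.asarray(teoae_high) - gain * np.asarray(teoae_low)

def rate_nonlinear(teoae_40_per_s, teoae_5000_per_s):
    """RNL: emission recorded at 40 clicks/s minus the emission recorded at
    5000 clicks/s (MLS recording), both at the same stimulus level."""
    return np.asarray(teoae_40_per_s) - np.asarray(teoae_5000_per_s)
```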


Subject(s)
Acoustic Stimulation/methods , Hearing Loss, Sensorineural/diagnosis , Otoacoustic Emissions, Spontaneous , Adult , Aged , Auditory Threshold , Case-Control Studies , Female , Hearing Loss, Sensorineural/physiopathology , Humans , Male , Middle Aged , Time Factors
7.
Hear Res ; 165(1-2): 128-41, 2002 Mar.
Article in English | MEDLINE | ID: mdl-12031522

ABSTRACT

A series of detailed experiments is described that investigates how a transient evoked otoacoustic emission (TEOAE) recorded in response to a single click is affected by the presence of a variable number of preceding clicks presented over a range of interclick intervals (ICIs). Part of the rationale was to determine whether the resulting nonlinear temporal interactions could help explain the amplitude reduction seen when TEOAEs are recorded at very high click rates, as when using maximum length sequence stimulation. Amongst the findings was that a preceding train of clicks could either suppress or enhance emission amplitude, depending on the number of clicks in the train and the ICI. Results also indicated that the duration of the click train, rather than the ICI, was the important factor in yielding the most suppressed response, and that this seemed to depend on stimulus level. The results recorded at two levels also suggested that the cochlear temporal nonlinearity being monitored is in part related to the nonlinear process that determines the compressive input/output function for stimulus level. It is hypothesised that nonlinear temporal overlap of vibration patterns on the basilar membrane may underlie much of the pattern of results.
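A minimal sketch of how a conditioning click train plus probe click might be laid out in a stimulus buffer for this kind of paradigm; the sampling rate, click duration, and placement of the probe one ICI after the last conditioning click are assumptions, not the paper's parameters.

```python
# Minimal sketch: n conditioning clicks spaced ici_ms apart, followed by the
# probe click whose TEOAE would be analysed. Values are illustrative only.
import numpy as np

def click_train(n_preceding, ici_ms, fs=44100, click_ms=0.08):
    """Return a 0/1 stimulus buffer with n_preceding conditioning clicks and a
    probe click one ICI after the last conditioner."""
    ici_n = int(round(fs * ici_ms / 1000.0))
    click_n = max(1, int(round(fs * click_ms / 1000.0)))
    stim = np.zeros((n_preceding + 1) * ici_n + click_n)
    for k in range(n_preceding + 1):          # last iteration places the probe
        stim[k * ici_n : k * ici_n + click_n] = 1.0
    return stim

# e.g. 5 conditioning clicks at a 6 ms ICI, then the probe at 30 ms:
stim = click_train(5, 6)
```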


Subject(s)
Acoustic Stimulation/methods , Nonlinear Dynamics , Otoacoustic Emissions, Spontaneous/physiology , Adult , Female , Humans , Male , Reaction Time