Results 1 - 20 of 38
1.
Am J Audiol ; 28(3): 686-696, 2019 Sep 13.
Article in English | MEDLINE | ID: mdl-31430174

ABSTRACT

Purpose: There is a growing body of literature suggesting a linkage between impaired auditory function, increased listening effort, and fatigue in children and adults with hearing loss. Research suggests this linkage may be associated with hearing loss-related variations in diurnal cortisol levels. Here, we examine variations in cortisol profiles between young adults with and without severe sensorineural hearing loss and examine associations between cortisol and subjective measures of listening effort and fatigue. Method: This study used a repeated-measures, matched-pair design. Two groups (n = 8 per group) of adults enrolled in audiology programs participated: one group of adults with hearing loss (AHL) and one matched control group without hearing loss. Salivary cortisol samples were collected at 7 time points over a 2-week period and used to quantify physiological stress. Subjective measures of listening effort, stress, and fatigue were also collected to investigate relationships between cortisol levels, perceived stress, and fatigue. Results: Subjective ratings revealed that AHL required significantly more effort and concentration on typical auditory tasks than the control group. Likewise, complaints of listening-related fatigue were more frequent and more of a problem in everyday life for AHL than for the control group. There was a significant association between subjective ratings of listening effort and listening-related fatigue for the AHL group, but not for the control group. In contrast, there was no significant difference in cortisol measures between groups, nor were there significant associations between cortisol and any subjective measure. Conclusions: Young AHL experience more effortful listening than their normal-hearing peers, and this increased effort is associated with increased reports of listening-related fatigue. However, diurnal cortisol profiles did not differ significantly between groups, nor were they associated with these perceived differences.


Subject(s)
Circadian Rhythm , Deafness/rehabilitation , Fatigue/metabolism , Hydrocortisone/metabolism , Stress, Psychological/metabolism , Adult , Case-Control Studies , Cochlear Implants , Fatigue/psychology , Female , Humans , Male , Pilot Projects , Stress, Psychological/psychology , Young Adult
2.
J Am Acad Audiol ; 30(7): 579-589, 2019.
Article in English | MEDLINE | ID: mdl-30541657

ABSTRACT

PURPOSE: The aim of the study was to determine whether contralateral routing of signal (CROS) technology results in improved hearing outcomes in unilateral cochlear implant (CI) patients and provides gains in speech perception in noise similar to those of traditional monaural listeners (MLs). RESEARCH DESIGN: The study is a prospective, within-subject repeated-measures experiment. STUDY SAMPLE: Adult, English-speaking patients with bilateral severe-profound sensorineural hearing loss using an Advanced Bionics CI (n = 12) in one ear were enrolled. INTERVENTION: Hearing performance in the monaural listening condition (CI only) was compared with the CROS-aided (unilateral CI + CROS) condition. Participants were tested for speech-in-noise performance using the Bamford-Kowal-Bench Speech-in-Noise™ test materials in the speech front/noise front (0 degrees/0 degrees azimuth), speech front/noise back (0 degrees/180 degrees azimuth), speech deaf ear/noise monaural ear (90 degrees/270 degrees azimuth), and speech monaural ear/noise deaf ear (90 degrees/270 degrees azimuth) configurations. Localization error was assessed using three custom stimuli consisting of 1/3 octave narrowband noises centered at 500 and 4000 Hz and a broadband speech stimulus. Localization stimuli were presented at random in the front hemifield by 19 speakers spatially separated by 10 degrees. Outcomes were compared with a previously described group of traditional MLs in the CROS-aided condition (normal-hearing ear + CROS). DATA COLLECTION AND ANALYSIS: All participants were tested acutely with no adaptation to the CROS device. Statistical analyses were performed using Wilcoxon signed-rank tests for nonparametric data and paired-sample t tests for parametric data. Statistical significance was set to p < 0.00625 after Bonferroni adjustment for eight tests.
RESULTS: Significant benefit was observed from the unaided to the CI + CROS-aided condition for listening in noise across most listening conditions, with the greatest benefit observed in the speech deaf ear/noise monaural ear (90 degrees/270 degrees azimuth) condition (p < 0.0005). When compared with traditional MLs, no significant difference in decibel gain from the unaided to CROS-aided conditions was observed between participant groups. There was no improvement in localization ability in the CROS-aided condition for either participant group and no significant difference in performance between traditional MLs and unilateral CI listeners. CONCLUSIONS: These findings indicate that unilateral CI users can achieve gains in speech perception similar to those of traditional MLs with wireless CROS. The use of wireless CROS stimulation in unilateral CI recipients thus provides added benefit and an additional rehabilitative option when bilateral implantation is not possible. The results suggest that noninvasive CROS solutions can successfully rehabilitate certain monaural listening deficits, provide improved hearing outcomes, and expand the reach of treatment in this population.
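The Bonferroni-adjusted threshold used in the analysis (p < 0.00625 for eight tests) is simply the familywise alpha divided by the number of comparisons. A minimal sketch, with values taken from the abstract:

```python
# Bonferroni correction: divide the familywise alpha by the number of tests.
# The figures below mirror the abstract: alpha = 0.05 across eight tests.

def bonferroni_alpha(familywise_alpha: float, n_tests: int) -> float:
    """Per-test significance threshold under Bonferroni correction."""
    return familywise_alpha / n_tests

print(bonferroni_alpha(0.05, 8))  # 0.00625, the threshold used in the study
```
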


Subject(s)
Cochlear Implants , Hearing Loss, Bilateral/rehabilitation , Wireless Technology , Adolescent , Adult , Aged , Hearing , Hearing Loss, Bilateral/physiopathology , Humans , Middle Aged , Noise , Prospective Studies , Severity of Illness Index , Speech Perception , Treatment Outcome , Young Adult
3.
J Speech Lang Hear Res ; 60(8): 2360-2363, 2017 08 16.
Article in English | MEDLINE | ID: mdl-28768324

ABSTRACT

Purpose: The aim of this experiment was to compare, for patients with cochlear implants (CIs), the improvement for speech understanding in noise provided by a monaural adaptive beamformer and for two interventions that produced bilateral input (i.e., bilateral CIs and hearing preservation [HP] surgery). Method: Speech understanding scores for sentences were obtained for 10 listeners fit with a single CI. The listeners were tested with and without beamformer activated in a "cocktail party" environment with spatially separated target and maskers. Data for 10 listeners with bilateral CIs and 8 listeners with HP CIs were taken from Loiselle, Dorman, Yost, Cook, and Gifford (2016), who used the same test protocol. Results: The use of the beamformer resulted in a 31 percentage point improvement in performance; in bilateral CIs, an 18 percentage point improvement; and in HP CIs, a 20 percentage point improvement. Conclusion: A monaural adaptive beamformer can produce an improvement in speech understanding in a complex noise environment that is equal to, or greater than, the improvement produced by bilateral CIs and HP surgery.


Subject(s)
Cochlear Implants , Deafness/rehabilitation , Noise , Speech Perception , Aged , Female , Humans , Male , Middle Aged
4.
Ear Hear ; 38(2): 255-261, 2017.
Article in English | MEDLINE | ID: mdl-27941405

ABSTRACT

OBJECTIVE: The electrically evoked stapedial reflex threshold (eSRT) has proven useful in setting upper stimulation levels for cochlear implant recipients. However, the literature suggests that the reflex can be difficult to observe in a significant percentage of the population. The primary goal of this investigation was to assess the difference in eSRT levels obtained with alternative acoustic admittance probe tone frequencies. DESIGN: A repeated-measures design was used to examine the effect of 3 probe tone frequencies (226, 678, and 1000 Hz) on the eSRT in 23 adults with cochlear implants. RESULTS: The mean eSRT measured using the conventional 226 Hz probe tone was significantly higher than the mean eSRTs measured with the 678 and 1000 Hz probe tones. The mean eSRTs were 174, 167, and 165 charge units with the 226, 678, and 1000 Hz probe tones, respectively. There was no statistically significant difference between the average eSRTs for the 678 and 1000 Hz probe tones. Twenty of 23 participants had eSRTs at lower charge unit levels with either a 678 or 1000 Hz probe tone than with the 226 Hz probe tone. Two participants had eSRTs measured with 678 or 1000 Hz probe tones that were equal in level to the eSRT measured with a 226 Hz probe tone. Only 1 participant had an eSRT obtained at a lower charge unit level with a 226 Hz probe tone than with a 678 or 1000 Hz probe tone. CONCLUSIONS: The results of this investigation demonstrate that the standard 226 Hz probe tone is not ideal for measurement of the eSRT. Higher probe tone frequencies (i.e., 678 or 1000 Hz) resulted in lower eSRT levels than the 226 Hz probe tone. In addition, 4 of the 23 participants did not have a measurable eSRT with the 226 Hz probe tone, but all participants had measurable eSRTs with both the 678 and 1000 Hz probe tones. Additional work is required to understand the clinical implications of these differences in the context of cochlear implant programming.


Subject(s)
Cochlear Implantation , Cochlear Implants , Deafness/rehabilitation , Reflex, Acoustic/physiology , Action Potentials/physiology , Adult , Aged , Aged, 80 and over , Deafness/physiopathology , Electric Stimulation , Female , Humans , Male , Middle Aged , Young Adult
5.
Hear Res ; 322: 107-11, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25285624

ABSTRACT

Many studies have documented the benefits to speech understanding when cochlear implant (CI) patients can access low-frequency acoustic information from the ear opposite the implant. In this study we assessed the role of three factors in determining the magnitude of bimodal benefit: (i) the level of CI-only performance, (ii) the magnitude of the hearing loss in the ear with low-frequency acoustic hearing, and (iii) the type of test material. The patients had low-frequency PTAs (average of 125, 250, and 500 Hz) varying over a large range (<30 dB HL to >70 dB HL) in the ear contralateral to the implant. The patients were tested with (i) CNC words presented in quiet (n = 105), (ii) AzBio sentences presented in quiet (n = 102), (iii) AzBio sentences in noise at +10 dB signal-to-noise ratio (SNR) (n = 69), and (iv) AzBio sentences at +5 dB SNR (n = 64). We find maximum bimodal benefit when (i) CI scores are less than 60 percent correct, (ii) low-frequency hearing loss is less than 60 dB HL, and (iii) the test material is sentences presented against a noise background. When these criteria are met, some bimodal patients can gain 40-60 percentage points in performance relative to performance with a CI alone. This article is part of a Special Issue.
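The criteria for maximum bimodal benefit can be expressed as a simple screen. A minimal sketch, assuming hypothetical function and field names (not from the study):

```python
# Screen for likely maximum bimodal benefit, per the criteria above:
# CI-only score below 60% correct and low-frequency hearing loss below
# 60 dB HL (the third criterion, sentence material in noise, is a property
# of the test, not the patient). Names and values are illustrative only.

def low_frequency_pta(thresholds_db_hl):
    """Average of the 125, 250, and 500 Hz thresholds, as in the study."""
    return sum(thresholds_db_hl[f] for f in (125, 250, 500)) / 3

def likely_max_bimodal_benefit(ci_only_percent, pta_db_hl):
    return ci_only_percent < 60 and pta_db_hl < 60

pta = low_frequency_pta({125: 40, 250: 50, 500: 60})
print(pta)                                  # 50.0
print(likely_max_bimodal_benefit(45, pta))  # True
```
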


Subject(s)
Cochlear Implantation/instrumentation , Cochlear Implants , Comprehension , Hearing Loss/rehabilitation , Persons With Hearing Impairments/rehabilitation , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adult , Aged , Aged, 80 and over , Audiometry, Speech , Auditory Threshold , Electric Stimulation , Hearing Loss/diagnosis , Hearing Loss/physiopathology , Hearing Loss/psychology , Humans , Middle Aged , Noise/adverse effects , Perceptual Masking , Persons With Hearing Impairments/psychology , Prosthesis Design , Severity of Illness Index , Signal Processing, Computer-Assisted , Young Adult
6.
Ear Hear ; 35(6): 633-40, 2014.
Article in English | MEDLINE | ID: mdl-25127322

ABSTRACT

OBJECTIVES: The aims of this study were (i) to determine the magnitude of the interaural level differences (ILDs) that remain after cochlear implant (CI) signal processing and (ii) to relate the ILDs to the pattern of errors for sound source localization on the horizontal plane. DESIGN: The listeners were 16 bilateral CI patients fitted with MED-EL CIs and 34 normal-hearing listeners. The stimuli were wideband, high-pass, and low-pass noise signals. ILDs were calculated by passing signals filtered by head-related transfer functions (HRTFs) to a MATLAB simulation of MED-EL signal processing. RESULTS: For the wideband and high-pass signals, maximum ILDs of 15 to 17 dB in the input signal were reduced to 3 to 4 dB after CI signal processing. For the low-pass signal, ILDs were reduced to 1 to 2 dB. For wideband and high-pass signals, the largest ILDs were between 0.4 and 0.7 dB for the ±15 degree speaker locations; between 0.9 and 1.3 dB for the ±30 degree locations; between 2.4 and 2.9 dB for the ±45 degree locations; between 3.2 and 4.1 dB for the ±60 degree locations; and between 2.7 and 3.4 dB for the ±75 degree locations. All of the CI patients in all stimulus conditions showed poorer localization than the normal-hearing listeners. Localization accuracy for the CI patients was best for the wideband and high-pass signals and poorest for the low-pass signal. CONCLUSIONS: Localization accuracy was related to the magnitude of the ILD cues available to the normal-hearing listeners and CI patients. The pattern of localization errors for the CI patients was related to the magnitude of the ILD differences among loudspeaker locations.
The error patterns for the wideband and high-pass signals suggest that, for the conditions of this experiment, patients on average sorted signals on the horizontal plane into four sectors: on each side of the midline, one sector including the 0, 15, and possibly 30 degree speaker locations, and a second sector spanning the 45 to 75 degree speaker locations. Resolution within a sector was relatively poor.
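The ILD values discussed above are level ratios between the two ears expressed in dB. A sketch of that computation on raw waveforms, purely illustrative (this is not the MED-EL processing simulation used in the study):

```python
import math

# Interaural level difference (ILD) in dB, computed from the ratio of RMS
# levels at the two ears. This only illustrates the quantity being reported
# above; the study derived ILDs after HRTF filtering and CI processing.

def rms(signal):
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def ild_db(left, right):
    """Positive values mean the left-ear signal is more intense."""
    return 20 * math.log10(rms(left) / rms(right))

# A signal at twice the amplitude in the left ear gives an ILD of ~6 dB.
left = [0.5 * math.sin(2 * math.pi * 1000 * t / 16000) for t in range(160)]
right = [0.25 * math.sin(2 * math.pi * 1000 * t / 16000) for t in range(160)]
print(round(ild_db(left, right), 1))  # 6.0
```
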


Subject(s)
Cochlear Implantation/methods , Deaf-Blind Disorders/rehabilitation , Signal Processing, Computer-Assisted , Sound Localization , Adult , Aged , Case-Control Studies , Cochlear Implants , Deaf-Blind Disorders/physiopathology , Female , Humans , Male , Speech Perception , Young Adult
7.
Audiol Neurootol ; 19(4): 234-8, 2014.
Article in English | MEDLINE | ID: mdl-24992987

ABSTRACT

The aim of this project was to determine for bimodal cochlear implant (CI) patients, i.e. patients with low-frequency hearing in the ear contralateral to the implant, how speech understanding varies as a function of the difference in level between the CI signal and the acoustic signal. The data suggest that (1) acoustic signals perceived as significantly softer than a CI signal can contribute to speech understanding in the bimodal condition, (2) acoustic signals that are slightly softer than, or balanced with, a CI signal provide the largest benefit to speech understanding, and (3) acoustic signals presented at maximum comfortable loudness levels provide nearly as much benefit as signals that have been balanced with a CI signal.


Subject(s)
Cochlear Implants , Deafness/rehabilitation , Signal Detection, Psychological , Speech Perception , Acoustic Stimulation , Aged , Cochlear Implantation , Humans , Middle Aged , Noise
8.
Ear Hear ; 35(4): 418-22, 2014.
Article in English | MEDLINE | ID: mdl-24658601

ABSTRACT

OBJECTIVES: The goal of this study was to create and validate a new set of sentence lists that could be used to evaluate the speech-perception abilities of listeners with hearing loss in cases where adult materials are inappropriate due to difficulty level or content. The authors aimed to generate a large number of sentence lists with an equivalent level of difficulty for the evaluation of performance over time and across conditions. DESIGN: The original Pediatric AzBio sentence corpus included 450 sentences recorded from one female talker. All sentences included in the corpus were successfully repeated by kindergarten and first-grade students with normal hearing. The mean intelligibility of each sentence was estimated by processing each sentence through a cochlear implant simulation and calculating the mean percent correct score achieved by 15 normal-hearing listeners. After sorting sentences by mean percent correct scores, 320 sentences were assigned to 16 lists of equivalent difficulty. List equivalency was then validated by presenting all sentence lists, in a novel random order, to adults and children with hearing loss. A final validation stage examined single-list comparisons from adult and pediatric listeners tested in research or clinical settings. RESULTS: The results of the simulation study allowed for the creation of 16 lists of 20 sentences each. The average intelligibility of each list ranged from 78.4 to 78.7%. List equivalency was validated when the results of 16 adult cochlear implant users and 9 pediatric hearing aid and cochlear implant users revealed no significant differences across lists. The binomial distribution model was used to account for the inherent variability observed in the lists. This model was also used to generate 95% confidence intervals for one- and two-list comparisons.
A retrospective analysis of 361 instances from 78 adult cochlear implant users and 48 instances from 36 pediatric cochlear implant users revealed that the 95% confidence intervals derived from the model captured 94% of all responses (385 of 409). CONCLUSIONS: The cochlear implant simulation was shown to be an effective method for estimating the intelligibility of individual sentences for use in the evaluation of cochlear implant users. Furthermore, the method used for constructing equivalent sentence lists and estimating the inherent variability of the materials has also been validated. Thus, the AzBio Pediatric Sentence Lists are equivalent and appropriate for the assessment of speech-understanding abilities of children with hearing loss as well as adults for whom performance on AzBio sentences is near the floor.
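The binomial confidence intervals referred to above can be approximated from the number of scored items per list. A simplified sketch using the normal approximation to the binomial; the published model is more detailed, and the numbers here are illustrative only:

```python
import math

# Normal approximation to a binomial 95% confidence interval on a list
# score: p +/- 1.96 * sqrt(p * (1 - p) / n) for n scored items. This is a
# simplification of the published binomial model, for illustration only.

def binomial_ci_95(p_correct, n_items):
    half_width = 1.96 * math.sqrt(p_correct * (1 - p_correct) / n_items)
    return (max(0.0, p_correct - half_width), min(1.0, p_correct + half_width))

# A 78% score on a 20-sentence list leaves a wide interval; pooling two
# lists (40 items) narrows it, which is why two-list comparisons help.
print(binomial_ci_95(0.78, 20))
print(binomial_ci_95(0.78, 40))
```
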


Subject(s)
Cochlear Implantation , Hearing Aids , Hearing Loss, Sensorineural/surgery , Speech Discrimination Tests/methods , Speech Perception , Adult , Child , Child, Preschool , Female , Hearing Loss, Sensorineural/rehabilitation , Humans , Male , Reproducibility of Results , Speech Intelligibility
9.
Int J Audiol ; 53(3): 159-64, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24456178

ABSTRACT

OBJECTIVE: Spectral modulation detection (SMD) provides a psychoacoustic estimate of spectral resolution. The SMD threshold for an implanted ear is highly correlated with speech understanding and is thus a non-linguistic, psychoacoustic index of speech understanding. This measure, however, is time and equipment intensive and thus not practical for clinical use. The purpose of the current study was therefore to investigate the efficacy of a quick SMD task, with three aims: (1) to investigate the correlation between the long psychoacoustic and quick SMD tasks, (2) to determine the test-retest variability of the quick SMD task, and (3) to evaluate the relationship between the quick SMD task and speech understanding. DESIGN: This study used a within-subjects, repeated-measures design. STUDY SAMPLE: Seventy-six adult cochlear implant recipients participated. RESULTS: (1) There was a significant correlation between the long psychoacoustic and quick SMD tasks, (2) the correlation between test and retest of the quick SMD task was highly significant, and (3) there was a significant positive correlation between the quick SMD task and monosyllabic word recognition. CONCLUSIONS: The results of this study represent the direct clinical translation of a research-proven SMD task into a quick, clinically feasible format.


Subject(s)
Cochlear Implantation/instrumentation , Cochlear Implants , Correction of Hearing Impairment/instrumentation , Hearing Loss/rehabilitation , Persons With Hearing Impairments/rehabilitation , Speech Perception , Acoustic Stimulation , Adult , Aged , Aged, 80 and over , Audiometry , Comprehension , Hearing Loss/diagnosis , Hearing Loss/psychology , Humans , Middle Aged , Persons With Hearing Impairments/psychology , Predictive Value of Tests , Prosthesis Design , Psychoacoustics , Recognition, Psychology , Reproducibility of Results , Sound Spectrography , Time Factors , Young Adult
10.
Ear Hear ; 34(2): 133-41, 2013.
Article in English | MEDLINE | ID: mdl-23075632

ABSTRACT

OBJECTIVES: Patients with a cochlear implant (CI) in one ear and a hearing aid in the other ear commonly achieve the highest speech-understanding scores when they have access to both electrically and acoustically stimulated information. At issue in this study was whether a measure of auditory function in the aided ear would predict the benefit to speech understanding when the information from the aided ear was added to the information from the CI. DESIGN: The subjects were 22 bimodal listeners with a CI in one ear and low-frequency acoustic hearing in the nonimplanted ear. The subjects were divided into two groups: one with mild-to-moderate low-frequency loss and one with severe-to-profound loss. Measures of auditory function included (1) audiometric thresholds at 750 Hz and below, (2) speech-understanding scores (words in quiet and sentences in noise), and (3) spectral-modulation detection (SMD) thresholds. In the SMD task, one stimulus was a flat-spectrum noise and the other was a noise with sinusoidal spectral modulations at 1.0 peak/octave. RESULTS: Significant correlations were found among all three measures of auditory function and the benefit to speech understanding when acoustic and electric stimulation were combined. Benefit was significantly correlated with audiometric thresholds (r = -0.814), acoustic speech understanding (r = 0.635), and SMD thresholds (r = -0.895) in the aided ear. However, only the SMD threshold was significantly correlated with benefit within the group with mild-to-moderate loss (r = -0.828) and within the group with severe-to-profound loss (r = -0.896). CONCLUSIONS: The SMD threshold at 1 cycle/octave has the potential to provide clinicians with information relevant to whether an ear with low-frequency hearing is likely to add to the intelligibility of speech provided by a CI.
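The r values quoted above are Pearson correlations. A minimal sketch of the computation, with invented data (a negative r means lower SMD thresholds accompany larger bimodal benefit, as in the study):

```python
import math

# Pearson correlation coefficient, the statistic behind the r values above.
# The data points are invented, purely to show the computation.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical SMD thresholds (dB) vs. bimodal benefit (percentage points):
smd = [8, 10, 12, 14, 16]
benefit = [30, 24, 20, 12, 6]
print(round(pearson_r(smd, benefit), 3))  # strongly negative, close to -1
```
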


Subject(s)
Auditory Threshold/physiology , Cochlear Implants , Hearing Aids , Hearing Loss, Sensorineural/therapy , Speech Perception/physiology , Acoustic Stimulation , Aged , Audiometry, Pure-Tone , Cochlear Implantation , Combined Modality Therapy , Humans , Middle Aged
11.
Ear Hear ; 34(2): 245-8, 2013.
Article in English | MEDLINE | ID: mdl-23183045

ABSTRACT

OBJECTIVES: The authors describe the localization and speech-understanding abilities of a patient fit with bilateral cochlear implants (CIs) for whom acoustic low-frequency hearing was preserved in both cochleae. DESIGN: Three signals were used in the localization experiments: low-pass, high-pass, and wideband noise. Speech understanding was assessed with the AzBio sentences presented in noise. RESULTS: Localization accuracy was best in the aided, bilateral acoustic hearing condition, and was poorer in both the bilateral CI condition and when the bilateral CIs were used in addition to bilateral low-frequency hearing. Speech understanding was best when low-frequency acoustic hearing was combined with at least one CI. CONCLUSIONS: The authors found that (1) for sound source localization in patients with bilateral CIs and bilateral hearing preservation, interaural level difference cues may dominate interaural time difference cues and (2) hearing-preservation surgery can be of benefit to patients fit with bilateral CIs.


Subject(s)
Hearing Loss, Sensorineural/surgery , Sound Localization/physiology , Speech Perception/physiology , Adult , Aged , Case-Control Studies , Cochlear Implantation , Hearing Loss, Sensorineural/physiopathology , Humans , Male , Middle Aged , Treatment Outcome , Young Adult
12.
J Am Acad Audiol ; 23(6): 385-95, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22668760

ABSTRACT

In this article we review, and discuss the clinical implications of, five projects currently underway in the Cochlear Implant Laboratory at Arizona State University. The projects are (1) norming the AzBio sentence test, (2) comparing the performance of bilateral and bimodal cochlear implant (CI) patients in realistic listening environments, (3) accounting for the benefit provided to bimodal patients by low-frequency acoustic stimulation, (4) assessing localization by bilateral hearing aid patients and the implications of that work for hearing preservation patients, and (5) studying heart rate variability as a possible measure for quantifying the stress of listening via an implant. The long-term goals of the laboratory are to improve the performance of patients fit with cochlear implants and to understand the mechanisms, physiological or electronic, that underlie changes in performance. We began our work with cochlear implant patients in the mid-1980s and received our first grant from the National Institutes of Health (NIH) for work with implanted patients in 1989. Since that date our work with cochlear implant patients has been funded continuously by the NIH. In this report we describe some of the research currently being conducted in our laboratory.


Subject(s)
Audiology , Biomedical Technology , Cochlear Implantation , Cochlear Implants , Hearing Loss/therapy , Adult , Aged , Arizona , Auditory Perception , Biomedical Research , Female , Hearing Loss/etiology , Hearing Loss/physiopathology , Humans , Male , Middle Aged , Universities , Young Adult
13.
Ear Hear ; 33(6): e70-9, 2012.
Article in English | MEDLINE | ID: mdl-22622705

ABSTRACT

OBJECTIVES: It was hypothesized that auditory training would allow bimodal patients to better combine the low-frequency acoustic information provided by a hearing aid with the electric information provided by a cochlear implant, thus maximizing the benefit of combined acoustic (A) and electric (E) stimulation (EAS). DESIGN: Performance in quiet or in the presence of multitalker babble at a +5 dB signal-to-noise ratio was evaluated in seven bimodal patients before and after auditory training. The performance measures comprised identification of vowels, consonants, consonant-nucleus-consonant words, sentences, voice gender, and emotion. Baseline performance was evaluated in the A-alone, E-alone, and combined EAS conditions once per week for 3 weeks. A phonetic-contrast training protocol was used to facilitate speech perceptual learning. Patients trained at home 1 hour a day, 5 days a week, for 4 weeks with both their cochlear implant and hearing aid on. Performance was remeasured after the 4 weeks of training and 1 month after training stopped. RESULTS: After training, there was significant improvement in vowel, consonant, and consonant-nucleus-consonant word identification in the E and EAS conditions. The magnitude of improvement in the E condition was equivalent to that in the EAS condition. The improved performance was largely retained 1 month after training stopped. CONCLUSION: Auditory training, in the form administered in this study, can improve bimodal patients' overall speech understanding by improving E-alone performance.


Subject(s)
Acoustic Stimulation/methods , Cochlear Implantation/rehabilitation , Cochlear Implants , Deafness/rehabilitation , Hearing Aids , Speech Reception Threshold Test , Aged , Combined Modality Therapy , Female , Humans , Male , Middle Aged , Perceptual Masking , Pitch Discrimination , Sound Spectrography , Speech Acoustics , Speech Discrimination Tests
14.
Ear Hear ; 33(1): 112-7, 2012.
Article in English | MEDLINE | ID: mdl-21829134

ABSTRACT

OBJECTIVES: The goal of this study was to create and validate a new set of sentence lists that could be used to evaluate the speech perception abilities of hearing-impaired listeners and cochlear implant (CI) users. Our intention was to generate a large number of sentence lists with an equivalent level of difficulty for the evaluation of performance over time and across conditions. DESIGN: The AzBio sentence corpus includes 1000 sentences recorded from two female and two male talkers. The mean intelligibility of each sentence was estimated by processing each sentence through a five-channel CI simulation and calculating the mean percent correct score achieved by 15 normal-hearing listeners. Sentences from each talker were sorted by percent correct score, and 165 sentences were selected from each talker and were then sequentially assigned to 33 lists, each containing 20 sentences (5 sentences from each talker). List equivalency was validated by presenting all lists, in random order, to 15 CI users. RESULTS: Using sentence scores from the CI simulation study produced 33 lists of sentences with a mean score of 85% correct. The results of the validation study with CI users revealed no significant differences in percent correct scores for 29 of the 33 sentence lists. However, individual listeners demonstrated considerable variability in performance on the 29 lists. The binomial distribution model was used to account for the inherent variability observed in the lists. This model was also used to generate 95% confidence intervals for one and two list comparisons. A retrospective analysis of 172 instances where research subjects had been tested on two lists within a single condition revealed that 94% of results were accurately contained within these confidence intervals. 
CONCLUSIONS: The use of a five-channel CI simulation to estimate the intelligibility of individual sentences allowed for the creation of a large number of sentence lists with an equivalent level of difficulty. The validation procedure with CI users found that 29 of the 33 lists yielded scores that were not statistically different. However, individual listeners demonstrated considerable variability in performance across lists. This variability was accurately described by the binomial distribution model and was used to estimate the magnitude of change required to achieve statistical significance when comparing scores from one or two lists per condition. Fifteen sentence lists have been included in the AzBio Sentence Test for use in the clinical evaluation of hearing-impaired listeners and CI users. An additional eight sentence lists have been included in the Minimum Speech Test Battery distributed by the CI manufacturers for the evaluation of CI candidates.
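The sort-then-distribute construction described above can be sketched as follows. The assignment here uses a serpentine deal rather than the exact published procedure, and the sentence scores are invented for illustration:

```python
# Sketch of building lists of equivalent difficulty: sort sentences by
# estimated intelligibility, then deal them out so every list samples the
# full difficulty range. A serpentine deal (forward, then backward) keeps
# list means nearly identical; this is illustrative, not the published
# assignment procedure, and the scores below are invented.

def assign_to_lists(sentences_with_scores, n_lists):
    """sentences_with_scores: iterable of (sentence_id, percent_correct)."""
    ranked = sorted(sentences_with_scores, key=lambda s: s[1], reverse=True)
    lists = [[] for _ in range(n_lists)]
    for i, sentence in enumerate(ranked):
        block, offset = divmod(i, n_lists)
        idx = offset if block % 2 == 0 else n_lists - 1 - offset
        lists[idx].append(sentence)
    return lists

sentences = [(f"s{i}", 60 + (i * 7) % 40) for i in range(12)]
lists = assign_to_lists(sentences, n_lists=3)
means = [sum(score for _, score in lst) / len(lst) for lst in lists]
print(means)  # [78.5, 78.5, 78.5]
```
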


Subject(s)
Cochlear Implantation , Hearing Loss/diagnosis , Hearing Loss/rehabilitation , Speech Discrimination Tests/methods , Speech Discrimination Tests/standards , Speech Perception , Adult , Female , Humans , Linguistics , Male , Reproducibility of Results , Speech Intelligibility , Tape Recording
15.
J Hear Sci ; 2(4): 9-17, 2012 Dec.
Article in English | MEDLINE | ID: mdl-24319590

ABSTRACT

In a previous paper we reported the frequency selectivity, temporal resolution, nonlinear cochlear processing, and speech recognition in quiet and in noise for 5 listeners with normal hearing (mean age 24.2 years) and 17 older listeners (mean age 68.5 years) with bilateral, mild sloping to profound sensory hearing loss (Gifford et al., 2007). Since that report, 2 additional participants with hearing loss completed experimentation for a total of 19 listeners. Of the 19 with hearing loss, 16 ultimately received a cochlear implant. The purpose of the current study was to provide information on the pre-operative psychophysical characteristics of low-frequency hearing and speech recognition abilities, and on the resultant postoperative speech recognition and associated benefit from cochlear implantation. The current preoperative data for the 16 listeners receiving cochlear implants demonstrate: 1) reduced or absent nonlinear cochlear processing at 500 Hz, 2) impaired frequency selectivity at 500 Hz, 3) normal temporal resolution at low modulation rates for a 500-Hz carrier, 4) poor speech recognition in a modulated background, and 5) highly variable speech recognition (from 0 to over 60% correct) for monosyllables in the bilaterally aided condition. As reported previously, measures of auditory function were not significantly correlated with pre- or post-operative speech recognition - with the exception of nonlinear cochlear processing and preoperative sentence recognition in quiet (p=0.008) and at +10 dB SNR (p=0.007). These correlations, however, were driven by the data obtained from two listeners who had the highest degree of nonlinearity and preoperative sentence recognition. All estimates of postoperative speech recognition performance were significantly higher than preoperative estimates for both the ear that was implanted (p<0.001) as well as for the best-aided condition (p<0.001). 
It can be concluded that older individuals with mild sloping to profound sensory hearing loss have very little to no residual nonlinear cochlear function, resulting in impaired frequency selectivity as well as poor speech recognition in modulated noise. These same individuals exhibit highly significant improvement in speech recognition in both quiet and noise following cochlear implantation. For older individuals with mild to profound sensorineural hearing loss who have difficulty with speech recognition despite appropriately fitted hearing aids, there is little to lose in terms of psychoacoustic processing in the low-frequency region and much to gain with respect to speech recognition and overall communication benefit. These data further support the need to consider factors beyond the audiogram in determining cochlear implant candidacy, as older individuals with relatively good low-frequency hearing may exhibit vastly different speech perception abilities, illustrating that signal audibility is not a reliable predictor of performance on suprathreshold tasks such as speech recognition.

16.
J Hear Sci ; 2(4): EA37-EA39, 2012.
Article in English | MEDLINE | ID: mdl-25414796

ABSTRACT

BACKGROUND: Both bilateral cochlear implants (CIs) and bimodal (electric plus contralateral acoustic) stimulation can provide better speech intelligibility than a single CI. In both cases patients need to combine information from two ears into a single percept. In this paper we ask whether the physiological and psychological processes associated with aging alter the ability of bilateral and bimodal CI patients to combine information across two ears in the service of speech understanding. MATERIALS: The subjects were 61 adult, bilateral CI patients and 94 adult, bimodal patients. The test battery was composed of monosyllabic words presented in quiet and the AzBio sentences presented in quiet, at +10 and at +5 dB signal-to-noise ratio (SNR). METHODS: The subjects were tested in standard audiometric sound booths. Speech and noise were always presented from a single speaker directly in front of the listener. RESULTS: Age and bilateral or bimodal benefit were not significantly correlated for any test measure. CONCLUSIONS: Other factors being equal, both bilateral CIs and bimodal CIs can be recommended for elderly patients.
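The fixed-SNR presentations in this test battery amount to a simple mixing step: scale the noise so that the speech-to-noise power ratio hits the target before summing. A minimal sketch, assuming RMS-power-based SNR (the signals below are toy placeholders, not the AzBio materials):

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`, then mix."""
    noise = noise[: len(speech)]                       # match lengths
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Gain that brings the noise to the power required for the target SNR.
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise

# Example: a 1 kHz tone standing in for speech, white noise as the masker,
# mixed at the study's +10 and +5 dB SNR conditions.
rng = np.random.default_rng(0)
fs = 16_000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 1000 * t)
noise = rng.standard_normal(fs)
mixtures = {snr: mix_at_snr(speech, noise, snr) for snr in (10, 5)}
```

Because speech and noise came from a single frontal loudspeaker in this study, a single mixed channel like this captures the acoustic scene; spatial separation would require per-ear signals.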

18.
Ear Hear ; 31(1): 63-9, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20050394

ABSTRACT

OBJECTIVES: The aim of this study was to determine the minimum amount of low-frequency acoustic information that is required to achieve speech perception benefit in listeners with a cochlear implant in one ear and low-frequency hearing in the other ear. DESIGN: The recognition of monosyllabic words in quiet and sentences in noise was evaluated in three listening conditions: electric stimulation alone, acoustic stimulation alone, and combined electric and acoustic stimulation. The acoustic stimuli presented to the nonimplanted ear were either low-pass-filtered at 125, 250, 500, or 750 Hz, or unfiltered (wideband). RESULTS: Adding low-frequency acoustic information to electrically stimulated information led to a significant improvement in word recognition in quiet and sentence recognition in noise. Improvement was observed in the electric and acoustic stimulation condition even when the acoustic information was limited to the 125-Hz-low-passed signal. Further improvement for the sentences in noise was observed when the acoustic signal was increased to wideband. CONCLUSIONS: Information from the voice fundamental frequency (F0) region accounts for the majority of the speech perception benefit when acoustic stimulation is added to electric stimulation. We propose that, in quiet, low-frequency acoustic information leads to an improved representation of voicing, which in turn leads to a reduction in word candidates in the lexicon. In noise, the robust representation of voicing allows access to low-frequency acoustic landmarks that mark syllable structure and word boundaries. These landmarks can bootstrap word and sentence recognition.
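The low-pass conditions above amount to filtering the acoustic signal before presentation, with the 125-Hz condition retaining little beyond the voice F0 region. A minimal sketch using SciPy; the cutoff frequencies come from the study, but the Butterworth design and zero-phase filtering are illustrative assumptions, not the study's reported implementation:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass(signal: np.ndarray, fs: float, cutoff_hz: float, order: int = 4) -> np.ndarray:
    """Zero-phase low-pass filter, e.g. to keep only the F0 region of speech."""
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

# The study's low-pass conditions: 125, 250, 500, or 750 Hz (plus unfiltered).
fs = 16_000
t = np.arange(fs) / fs
# Toy "speech": an F0 component at 120 Hz plus a formant-region tone at 1 kHz.
sig = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
lp125 = lowpass(sig, fs, 125.0)   # retains the F0, strongly attenuates 1 kHz
```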


Subject(s)
Audiometry, Speech , Cochlear Implants , Deafness/rehabilitation , Hearing Aids , Sound Spectrography , Voice , Adult , Combined Modality Therapy , Female , Humans , Male , Perceptual Masking , Prosthesis Design , Signal Processing, Computer-Assisted , Software
19.
Ear Hear ; 31(2): 195-201, 2010 Apr.
Article in English | MEDLINE | ID: mdl-19915474

ABSTRACT

OBJECTIVES: Our aim was to assess, for patients with a cochlear implant in one ear and low-frequency acoustic hearing in the contralateral ear, whether reducing the overlap between the frequencies conveyed in the acoustic signal and those analyzed by the cochlear implant speech processor would improve speech recognition. DESIGN: The recognition of monosyllabic words in quiet and sentences in noise was evaluated in three listening configurations: electric stimulation alone, acoustic stimulation alone, and combined electric and acoustic stimulation. The acoustic stimuli were either unfiltered or low-pass (LP) filtered at 250, 500, or 750 Hz; the electric stimuli were either unfiltered or high-pass (HP) filtered at 250, 500, or 750 Hz. In the combined condition, each LP acoustic signal was paired with the HP electric signal sharing the same cutoff frequency, and the unfiltered acoustic signal was paired with the unfiltered electric signal. RESULTS: For both acoustic and electric signals, performance increased as bandwidth increased. The highest level of performance in the combined condition was observed with the unfiltered acoustic plus unfiltered electric signals. CONCLUSIONS: Reducing the overlap in frequency representation between acoustic and electric stimulation does not increase speech understanding scores for patients who have residual hearing in the ear contralateral to the implant. Acoustic information below 250 Hz significantly improves performance for patients who combine electric and acoustic stimulation and accounts for the majority of the speech-perception benefit when acoustic stimulation is combined with electric stimulation.
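The paired conditions can be sketched as a complementary split at a shared crossover: the LP branch goes to the acoustic ear and the HP branch to the CI processor. A sketch assuming Butterworth filters and zero-phase filtering (the study does not report its filter implementation; only the crossover frequencies are from the source):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def split_at_crossover(signal, fs, crossover_hz, order=4):
    """Split `signal` at `crossover_hz`: the low-pass part models the acoustic
    ear's input, the high-pass part the CI processor's input."""
    lo = butter(order, crossover_hz, btype="low", fs=fs, output="sos")
    hi = butter(order, crossover_hz, btype="high", fs=fs, output="sos")
    return sosfiltfilt(lo, signal), sosfiltfilt(hi, signal)

fs = 16_000
t = np.arange(fs) / fs
# Toy signal: low-frequency component at 100 Hz, high-frequency at 2 kHz.
sig = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 2000 * t)
for crossover in (250, 500, 750):            # the study's crossover frequencies
    acoustic, electric = split_at_crossover(sig, fs, crossover)
```

Note that with non-brick-wall filters the two branches still overlap slightly around the crossover; the study's question was whether shrinking that overlap helps, and the answer was no.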


Subject(s)
Cochlear Implants , Deafness/physiopathology , Deafness/therapy , Pitch Perception , Speech Perception , Acoustic Stimulation , Aged , Aged, 80 and over , Electric Stimulation , Female , Hearing , Humans , Male , Middle Aged , Phonetics
20.
J Acoust Soc Am ; 126(3): 955-8, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19739707

ABSTRACT

Speech understanding by cochlear implant listeners may be limited by their ability to perceive complex spectral envelopes. Here, spectral envelope perception was characterized by spectral modulation transfer functions, in which modulation detection thresholds become poorer with increasing spectral modulation frequency (SMF). Thresholds at low SMFs, which are less likely to be influenced by spectral resolution, were correlated with vowel and consonant identification for the same listeners [Litvak, L. M. et al. (2008). J. Acoust. Soc. Am. 122, 982-991], whereas thresholds at higher SMFs, which are more likely to be affected by spectral resolution, were not. These results indicate that the perception of broadly spaced spectral features is important for speech perception.
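Spectral modulation detection is typically measured with noise whose spectral envelope is sinusoidally rippled on a log-frequency axis; the SMF is the ripple rate in cycles per octave and the threshold is the smallest peak-to-valley depth (in dB) the listener can detect. A sketch of such a stimulus, for illustration only; the bandwidth, duration, and generation method are assumptions, not the exact parameters of the cited studies:

```python
import numpy as np

def spectral_ripple(fs=16_000, dur=0.5, f_lo=100.0, f_hi=5000.0,
                    smf_cyc_per_oct=1.0, depth_db=10.0, phase=0.0, seed=0):
    """Band-limited noise with a sinusoidal spectral envelope on a
    log-frequency axis. `smf_cyc_per_oct` is the spectral modulation
    frequency; `depth_db` is the peak-to-valley modulation depth."""
    n = int(fs * dur)
    rng = np.random.default_rng(seed)
    # Random complex spectrum, restricted to the passband.
    spec = rng.standard_normal(n // 2 + 1) + 1j * rng.standard_normal(n // 2 + 1)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    spec[~band] = 0
    # Sinusoidal envelope in dB across octaves re: the lower band edge.
    octaves = np.log2(np.maximum(freqs, 1.0) / f_lo)
    env_db = (depth_db / 2) * np.sin(2 * np.pi * smf_cyc_per_oct * octaves + phase)
    spec *= 10 ** (env_db / 20)
    x = np.fft.irfft(spec, n)
    return x / np.max(np.abs(x))      # normalize to unit peak

ripple = spectral_ripple(smf_cyc_per_oct=0.5, depth_db=10.0)
flat = spectral_ripple(smf_cyc_per_oct=0.5, depth_db=0.0)   # reference stimulus
```

In a detection task, the listener would discriminate the rippled stimulus from the flat-spectrum reference while depth is varied adaptively; randomizing `phase` across trials prevents listeners from using a single fixed spectral peak.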


Subject(s)
Auditory Perception , Cochlear Implants , Phonetics , Speech Perception , Acoustic Stimulation , Adult , Aged , Auditory Threshold , Humans , Loudness Perception , Middle Aged , Pattern Recognition, Physiological , Psychoacoustics , Speech