Results 1 - 20 of 22
1.
J Clin Med ; 9(6)2020 Jun 05.
Article in English | MEDLINE | ID: mdl-32517138

ABSTRACT

In recent years there has been an increasing percentage of cochlear implant (CI) users who have usable residual hearing in the contralateral, nonimplanted ear, typically aided by acoustic amplification. This raises the issue of the extent to which the signal presented through the cochlear implant may influence how listeners process information in the acoustically stimulated ear. This multicenter retrospective study examined pre- to postoperative changes in speech perception in the nonimplanted ear, the implanted ear, and both together. Results in the latter two conditions showed the expected increases, but speech perception in the nonimplanted ear showed a modest yet meaningful decrease that could not be completely explained by changes in unaided thresholds, hearing aid malfunction, or several other demographic variables. Decreases in speech perception in the nonimplanted ear were more likely in individuals who had better levels of speech perception in the implanted ear, and in those who had better speech perception in the implanted than in the nonimplanted ear. This raises the possibility that, in some cases, bimodal listeners may rely on the higher quality signal provided by the implant and may disregard or even neglect the input provided by the nonimplanted ear.

2.
Ear Hear ; 40(3): 621-635, 2019.
Article in English | MEDLINE | ID: mdl-30067559

ABSTRACT

OBJECTIVES: (1) To determine the effect of hearing aid (HA) bandwidth on bimodal speech perception in a group of unilateral cochlear implant (CI) patients with diverse degrees and configurations of hearing loss in the nonimplanted ear, (2) to determine whether there are demographic and audiometric characteristics that would help to determine the appropriate HA bandwidth for a bimodal patient. DESIGN: Participants were 33 experienced bimodal device users with postlingual hearing loss. Twenty-three of them had better speech perception with the CI than the HA (CI>HA group) and 10 had better speech perception with the HA than the CI (HA>CI group). Word recognition in sentences (AzBio sentences at +10 dB signal-to-noise ratio presented at 0° azimuth) and in isolation [CNC (consonant-nucleus-consonant) words] was measured in unimodal conditions [CI alone or HAWB, which indicates HA alone in the wideband (WB) condition] and in bimodal conditions (BMWB, BM2k, BM1k, and BM500) as the bandwidth of an actual HA was reduced from WB to 2 kHz, 1 kHz, and 500 Hz. Linear mixed-effects modeling was used to quantify the relationship between speech recognition and listening condition and to assess how audiometric or demographic covariates might influence this relationship in each group. RESULTS: For the CI>HA group, AzBio scores were significantly higher (on average) in all bimodal conditions than in the best unimodal condition (CI alone) and were highest at the BMWB condition. For CNC scores, on the other hand, there was no significant improvement over the CI-alone condition in any of the bimodal conditions. The opposite pattern was observed in the HA>CI group. CNC word scores were significantly higher in the BM2k and BMWB conditions than in the best unimodal condition (HAWB), but none of the bimodal conditions were significantly better than the best unimodal condition for AzBio sentences (and some of the restricted bandwidth conditions were actually worse).
Demographic covariates did not interact significantly with bimodal outcomes, but some of the audiometric variables did. For CI>HA participants with a flatter audiometric configuration and better mid-frequency hearing, bimodal AzBio scores were significantly higher than the CI-alone score with the WB setting (BMWB) but not with other bandwidths. In contrast, CI>HA participants with more steeply sloping hearing loss and poorer mid-frequency thresholds (≥82.5 dB) had significantly higher bimodal AzBio scores in all bimodal conditions, and the BMWB did not differ significantly from the restricted bandwidth conditions. HA>CI participants with mild low-frequency hearing loss showed the highest levels of bimodal improvement over the best unimodal condition on CNC words. They were also less affected by HA bandwidth reduction compared with HA>CI participants with poorer low-frequency thresholds. CONCLUSIONS: The pattern of bimodal performance as a function of the HA bandwidth was found to be consistent with the degree and configuration of hearing loss for both patients with CI>HA performance and for those with HA>CI performance. Our results support fitting the HA for all bimodal patients with the widest bandwidth consistent with effective audibility.


Subject(s)
Cochlear Implantation , Hearing Aids , Hearing Loss, Bilateral/rehabilitation , Speech Perception , Aged , Cochlear Implants , Female , Humans , Male , Middle Aged
3.
J Am Acad Audiol ; 28(5): 404-414, 2017 May.
Article in English | MEDLINE | ID: mdl-28534731

ABSTRACT

BACKGROUND: Limited attention has been given to the effects of classroom acoustics at the college level. Many studies have reported that nonnative speakers of English are more likely to be affected by poor room acoustics than native speakers. An important question is how classroom acoustics affect speech perception of nonnative college students. PURPOSE: The combined effect of noise and reverberation on the speech recognition performance of college students who differ in age of English acquisition was evaluated under conditions simulating classrooms with reverberation times (RTs) close to ANSI recommended RTs. RESEARCH DESIGN: A mixed design was used in this study. STUDY SAMPLE: Thirty-six native and nonnative English-speaking college students with normal hearing, ages 18-28 yr, participated. INTERVENTION: Two groups of nine native participants (native monolingual [NM] and native bilingual) and two groups of nine nonnative participants (nonnative early and nonnative late) were evaluated in noise under three reverberant conditions (0.3, 0.6, and 0.8 sec). DATA COLLECTION AND ANALYSIS: A virtual test paradigm was used, which represented a signal reaching a student at the back of a classroom. Speech recognition in noise was measured using the Bamford-Kowal-Bench Speech-in-Noise (BKB-SIN) test, and the signal-to-noise ratio required for correct repetition of 50% of the key words in the stimulus sentences (SNR-50) was obtained for each group in each reverberant condition. A mixed-design analysis of variance was used to determine statistical significance as a function of listener group and RT. RESULTS: SNR-50 was significantly higher for nonnative listeners as compared to native listeners, and a more favorable SNR-50 was needed as RT increased. The most dramatic effect on SNR-50 was found in the group with later acquisition of English, whereas the impact of early introduction of a second language was subtler.
At the ANSI standard's maximum recommended RT (0.6 sec), all groups except the NM group exhibited a mild signal-to-noise ratio (SNR) loss. At the 0.8 sec RT, all groups exhibited a mild SNR loss. CONCLUSION: Acoustics in the classroom are an important consideration for nonnative speakers who are proficient in English and enrolled in college. To address the need for a clearer speech signal by nonnative students (and for all students), universities should follow ANSI recommendations, as well as minimize background noise in occupied classrooms. Behavioral/instructional strategies should be considered to address factors that cannot be compensated for through acoustic design.
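As a rough illustration of the SNR-50 metric used above (the signal-to-noise ratio at which 50% of key words are repeated correctly), the sketch below estimates it by linear interpolation between the two measured points that bracket 50% correct. This is a simplified stand-in, not the BKB-SIN scoring procedure itself; the function name and inputs are hypothetical.

```python
def snr50(snrs, pct_correct):
    """Estimate the SNR yielding 50% correct by linear interpolation
    between the two measured (SNR, percent-correct) points that
    bracket the 50% level."""
    pts = sorted(zip(snrs, pct_correct))  # order by SNR
    for (s0, p0), (s1, p1) in zip(pts, pts[1:]):
        # the pair brackets 50% when the signs of (p - 50) differ
        if (p0 - 50.0) * (p1 - 50.0) <= 0.0 and p0 != p1:
            return s0 + (50.0 - p0) * (s1 - s0) / (p1 - p0)
    raise ValueError("50% point not bracketed by the measured SNRs")

# e.g. 20% correct at -5 dB, 50% at 0 dB, 90% at +5 dB -> SNR-50 = 0 dB
print(snr50([-5, 0, 5], [20, 50, 90]))
```

A higher SNR-50 means the listener needs a more favorable signal-to-noise ratio to reach the same 50% criterion, which is the direction of the group differences reported above.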


Subject(s)
Language , Speech Perception/physiology , Acoustic Stimulation , Acoustics , Adolescent , Adult , Female , Humans , Male , Multilingualism , Schools , Signal-To-Noise Ratio , Speech Acoustics , Virtual Reality , Young Adult
4.
Trends Hear ; 21: 2331216517699530, 2017 01.
Article in English | MEDLINE | ID: mdl-28351216

ABSTRACT

Ninety-four unilateral CI patients with bimodal listening experience (CI plus HA in contralateral ear) completed a questionnaire that focused on attitudes toward hearing aid use postimplantation, patterns of usage, and perceived bimodal benefits in daily life. Eighty participants continued HA use and 14 discontinued HA use at the time of the questionnaire. Participant responses provided useful information for counseling patients both before and after implantation. The majority of continuing bimodal (CI plus HA) participants reported adapting to using both devices within 3 months and also reported that they heard better bimodally in quiet, noisy, and reverberant conditions. They also perceived benefits including improved sound quality, better music enjoyment, and sometimes a perceived sense of acoustic balance. Those who discontinued HA use found either that using the HA did not provide additional benefit over the CI alone or that using the HA degraded the signal from the CI. Because there was considerable overlap in the audiograms and in speech recognition performance in the unimplanted ear between the two groups, we recommend that unilateral CI recipients be counseled to continue to use the HA in the contralateral ear postimplantation in order to determine whether or not they receive functional or perceived benefit from using both devices together.


Subject(s)
Cochlear Implantation/instrumentation , Cochlear Implants , Hearing Aids , Hearing Disorders/therapy , Persons With Hearing Impairments/rehabilitation , Self Report , Speech Perception , Acoustic Stimulation , Aged , Aged, 80 and over , Audiometry, Pure-Tone , Audiometry, Speech , Auditory Threshold , Female , Hearing , Hearing Disorders/diagnosis , Hearing Disorders/physiopathology , Hearing Disorders/psychology , Humans , Male , Middle Aged , Noise/adverse effects , Patient Compliance , Patient Satisfaction , Perceptual Masking , Persons With Hearing Impairments/psychology , Recognition, Psychology , Recovery of Function , Time Factors , Treatment Outcome
5.
Ear Hear ; 34(5): 553-61, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23632973

ABSTRACT

OBJECTIVES: The purpose of this study was to determine how the bandwidth of the hearing aid (HA) fitting affects bimodal speech recognition of listeners with a cochlear implant (CI) in one ear and severe-to-profound hearing loss in the unimplanted ear (but with residual hearing sufficient for wideband amplification using National Acoustic Laboratories Revised, Profound [NAL-RP] prescriptive guidelines; unaided thresholds no poorer than 95 dB HL through 2000 Hz). DESIGN: Recognition of sentence material in quiet and in noise was measured with the CI alone and with CI plus HA as the amplification provided by the HA in the high and mid-frequency regions was systematically reduced from the wideband condition (NAL-RP prescription). Modified bandwidths included upper frequency cutoffs of 2000, 1000, or 500 Hz. RESULTS: On average, significant bimodal benefit was obtained when the HA provided amplification at all frequencies with aidable residual hearing. Limiting the HA bandwidth to only low-frequency amplification (below 1000 Hz) did not yield significant improvements in performance over listening with the CI alone. CONCLUSIONS: These data suggest the importance of providing amplification across as wide a frequency region as permitted by audiometric thresholds in the HA used by bimodal users.


Subject(s)
Cochlear Implantation , Cochlear Implants , Hearing Aids , Hearing Loss, Sensorineural/rehabilitation , Speech Perception , Aged , Aged, 80 and over , Audiometry, Speech , Auditory Threshold , Combined Modality Therapy , Hearing , Humans , Middle Aged , Noise , Sound Localization , Treatment Outcome
6.
Article in English | MEDLINE | ID: mdl-25435816

ABSTRACT

Acoustic models have been used in numerous studies over the past thirty years to simulate the percepts elicited by auditory neural prostheses. In these acoustic models, incoming signals are processed the same way as in a cochlear implant speech processor. The percepts that would be caused by electrical stimulation in a real cochlear implant are simulated by modulating the amplitude of either noise bands or sinusoids. Despite their practical usefulness, these acoustic models have never been convincingly validated. This study presents a tool to conduct such validation using subjects who have a cochlear implant in one ear and near-perfect hearing in the other ear, allowing for the first time a direct perceptual comparison of the output of acoustic models to the stimulation provided by a cochlear implant.
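The modulated-noise scheme described above can be sketched as a minimal noise-band vocoder: split the signal into a few frequency bands, extract each band's amplitude envelope, and use it to modulate band-limited noise. The channel count, band edges, and smoothing window below are illustrative assumptions; real CI processors use dedicated filterbanks, compression, and channel selection that this sketch omits.

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=4, fmin=100.0, fmax=4000.0):
    """Crude noise-band vocoder: per-band envelope-modulated noise."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    edges = np.geomspace(fmin, fmax, n_channels + 1)  # log-spaced band edges
    spec = np.fft.rfft(signal)
    noise_spec = np.fft.rfft(np.random.default_rng(0).standard_normal(n))
    win = max(1, int(0.01 * fs))  # ~10 ms envelope smoothing window
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spec * band_mask, n)           # band-limit signal
        env = np.convolve(np.abs(band), np.ones(win) / win, mode="same")
        carrier = np.fft.irfft(noise_spec * band_mask, n)  # band-limit noise
        out += env * carrier                               # modulate carrier
    return out
```

Replacing the noise carriers with sinusoids at each band's center frequency gives the sine-vocoder variant the abstract also mentions.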

7.
J Speech Lang Hear Res ; 55(2): 532-40, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22215040

ABSTRACT

PURPOSE: To determine the feasibility of using a virtual auditory test material to evaluate reverberation and noise effects on speech recognition of pediatric cochlear implant (CI) users and to compare their performance with that of children with normal hearing. METHOD: Virtual test materials representing nonreverberant and reverberant environments were used to measure speech recognition of 7 children with CIs in quiet and in noise, and of 18 children with normal hearing in the quiet condition. Performance of CI users in noise (signal-to-noise ratio resulting in 50% performance) was compared to normative data from a previous study (Neuman, Wroblewski, Hajicek, & Rubinstein, 2010). For CI users, stimuli were sent directly to the CI speech processor via auxiliary input, whereas children with normal hearing were tested using insert phones. RESULTS: The speech recognition of children with CIs decreased significantly in the reverberant condition. There were individual differences in susceptibility to reverberation. Children with CIs also required higher signal-to-noise ratios than children with normal hearing in the reverberant condition. CONCLUSION: Direct connect testing with reverberant test materials allows assessment of speech recognition under conditions typical of classrooms and could be useful in identifying children with CIs whose performance decreases significantly in the presence of reverberation and noise.


Subject(s)
Cochlear Implantation/rehabilitation , Correction of Hearing Impairment/instrumentation , Hearing Loss, Bilateral/diagnosis , Hearing Loss, Bilateral/rehabilitation , Speech Perception/physiology , Child , Correction of Hearing Impairment/methods , Deafness/diagnosis , Deafness/physiopathology , Deafness/rehabilitation , Diagnosis, Computer-Assisted/instrumentation , Diagnosis, Computer-Assisted/methods , Environment Design , Feasibility Studies , Female , Hearing/physiology , Hearing Loss, Bilateral/physiopathology , Humans , Mainstreaming, Education/methods , Male , Noise , User-Computer Interface
8.
Ear Hear ; 31(3): 336-44, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20215967

ABSTRACT

OBJECTIVES: The purpose of this study is to determine how combinations of noise levels and reverberation typical of ranges found in current classrooms will affect speech recognition performance of typically developing children with normal speech, language, and hearing and to compare their performance with that of adults with normal hearing. Speech recognition performance was measured using the Bamford-Kowal-Bench Speech in Noise test. A virtual test paradigm represented the signal reaching a student seated in the back of a classroom with a volume of 228 m³ and with varied reverberation time (0.3, 0.6, and 0.8 sec). The signal-to-noise ratios required for 50% performance (SNR-50) and for 95% performance were determined for groups of children aged 6 to 12 yrs and a group of young adults with normal hearing. DESIGN: This is a cross-sectional developmental study incorporating a repeated measures design. Experimental variables included age and reverberation time. A total of 63 children with normal hearing and typically developing speech and language and nine adults with normal hearing were tested. Nine children were included in each age group (6, 7, 8, 9, 10, 11, and 12 yrs). RESULTS: The SNR-50 increased significantly with increased reverberation and decreased significantly with increasing age. On average, children required positive SNRs for 50% performance, whereas thresholds for adults were close to 0 dB or <0 dB for the conditions tested. When reverberant SNR-50 was compared with adult SNR-50 without reverberation, adults did not exhibit an SNR loss, but children aged 6 to 8 yrs exhibited a moderate SNR loss and children aged 9 to 12 yrs exhibited a mild SNR loss.
To obtain average speech recognition scores of 95% at the back of the classroom, an SNR ≥ 10 dB is required for all children at the lowest reverberation time, ≥ 12 dB for children up to age 11 yrs at the 0.6-sec reverberant condition, and ≥ 15 dB for children aged 7 to 11 yrs at the 0.8-sec condition. The youngest children require even higher SNRs in the 0.8-sec condition. CONCLUSIONS: Results highlight changes in speech recognition performance with age in elementary school children listening to speech in noisy, reverberant classrooms. The more reverberant the environment, the better the SNR required. The younger the child, the better the SNR required. Results support the importance of attention to classroom acoustics and emphasize the need for maximizing SNR in classrooms, especially in classrooms designed for early childhood grades.


Subject(s)
Hearing/physiology , Language Development , Noise , Phonetics , Speech Perception/physiology , Adult , Age Factors , Auditory Threshold/physiology , Child , Child Development/physiology , Cross-Sectional Studies , Humans , Male , Reference Values , Speech Reception Threshold Test
9.
J Acoust Soc Am ; 123(4): 2264-75, 2008 Apr.
Article in English | MEDLINE | ID: mdl-18397031

ABSTRACT

As advanced signal processing algorithms have been proposed to enhance hearing protective device (HPD) performance, it is important to determine how directional microphones might affect the localization ability of users and whether they might cause safety hazards. The effect of in-the-ear microphone directivity was assessed by measuring sound source identification of speech in the horizontal plane. Recordings of speech in quiet and in noise were made with a Knowles Electronic Manikin for Acoustic Research wearing bilateral in-the-ear hearing aids with microphones having adjustable directivity (omnidirectional, cardioid, hypercardioid, supercardioid). Signals were generated from 16 locations in a circular array. Sound direction identification performance of eight normal-hearing listeners and eight hearing-impaired listeners revealed that directional microphones did not degrade localization performance and actually reduced the front-back and lateral localization errors made when listening through omnidirectional microphones. The summed rms speech level for the signals entering the two ears appears to serve as a cue for making front-back discriminations when using directional microphones in the experimental setting. The results of this study show that the use of matched directional microphones worn bilaterally does not have a negative effect on the ability to localize speech in the horizontal plane and may thus be useful in HPD design.


Subject(s)
Auditory Perception , Hearing Aids , Sound Localization , Amplifiers, Electronic , Hearing Disorders/therapy , Humans , Noise , Perceptual Masking
10.
Trends Amplif ; 12(3): 169, 2008.
Article in English | MEDLINE | ID: mdl-25425869
11.
Trends Amplif ; 12(4): 281-2, 2008.
Article in English | MEDLINE | ID: mdl-25425870
12.
Ear Hear ; 28(1): 73-82, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17204900

ABSTRACT

OBJECTIVE: The purpose of this study was to compare the accuracy of sound-direction identification in the horizontal plane by bilateral cochlear implant users when localization was measured with pink noise and with speech stimuli. DESIGN: Eight adults who were bilateral users of Nucleus 24 Contour devices participated in the study. All had received implants in both ears in a single surgery. Sound-direction identification was measured in a large classroom by using a nine-loudspeaker array. Localization was tested in three listening conditions (bilateral cochlear implants, left cochlear implant, and right cochlear implant), using two different stimuli (a speech stimulus and pink noise bursts) in a repeated-measures design. RESULTS: Sound-direction identification accuracy was significantly better when using two implants than when using a single implant. The mean root-mean-square error was 29 degrees for the bilateral condition, 54 degrees for the left cochlear implant, and 46.5 degrees for the right cochlear implant condition. Unilateral accuracy was similar for right cochlear implant and left cochlear implant performance. Sound-direction identification performance was similar for speech and pink noise stimuli. CONCLUSIONS: The data obtained in this study add to the growing body of evidence that sound-direction identification with bilateral cochlear implants is better than with a single implant. The similarity in localization performance obtained with the speech and pink noise supports the use of either stimulus for measuring sound-direction identification.
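The root-mean-square error reported above (29° bilaterally vs. 54° and 46.5° unilaterally) summarizes localization accuracy across trials. A minimal sketch of that metric, with a hypothetical function name and loudspeaker angles as inputs:

```python
import numpy as np

def rms_error_deg(target_deg, response_deg):
    """Root-mean-square sound-direction identification error in degrees:
    sqrt of the mean squared difference between the loudspeaker angle
    presented and the angle the listener identified, across trials."""
    err = np.asarray(response_deg, float) - np.asarray(target_deg, float)
    return float(np.sqrt(np.mean(err ** 2)))

# one trial off by 30 degrees, one exact -> sqrt((900 + 0) / 2) ≈ 21.2
print(rms_error_deg([0, 45], [30, 45]))
```

Lower values indicate responses clustered closer to the true source angle, so the bilateral condition's 29° reflects substantially better localization than either unilateral condition.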


Subject(s)
Cochlear Implants/standards , Hearing Disorders/physiopathology , Hearing Disorders/surgery , Sound Localization , Acoustic Stimulation/methods , Adult , Aged , Humans , Middle Aged , Noise , Speech
13.
Trends Amplif ; 11(1): 5-6, 2007 Mar.
Article in English | MEDLINE | ID: mdl-25425862
14.
Trends Amplif ; 11(2): 61-2, 2007 Jun.
Article in English | MEDLINE | ID: mdl-25425863
15.
Trends Amplif ; 11(3): 141-2, 2007 Sep.
Article in English | MEDLINE | ID: mdl-25425864
16.
Trends Amplif ; 10(1): v, 2006 Mar.
Article in English | MEDLINE | ID: mdl-25425858
17.
Trends Amplif ; 10(2): 65-6, 2006 Jun.
Article in English | MEDLINE | ID: mdl-25425859
18.
Trends Amplif ; 10(3): 117-8, 2006 Sep.
Article in English | MEDLINE | ID: mdl-25425860
19.
Trends Amplif ; 9(1): vi, 2005.
Article in English | MEDLINE | ID: mdl-25425929
20.
Trends Amplif ; 9(2): 1-2, 2005.
Article in English | MEDLINE | ID: mdl-25425930