Results 1 - 5 of 5
1.
Trends Hear ; 26: 23312165221127589, 2022.
Article in English | MEDLINE | ID: mdl-36172759

ABSTRACT

We tested whether sensitivity to acoustic spectrotemporal modulations can be observed from reaction times for normal-hearing and impaired-hearing conditions. In a manual reaction-time task, normal-hearing listeners had to detect the onset of a ripple (with a density between 0 and 8 cycles/octave and a fixed modulation depth of 50%) that moved up or down the log-frequency axis at a constant velocity between 0 and 64 Hz, embedded in otherwise unmodulated broadband white noise. Spectral and temporal modulations elicited band-pass-shaped sensitivity characteristics, with the fastest detection rates around 1 cycle/octave and 32 Hz for normal-hearing conditions. These results closely resemble data from other studies that typically used the modulation-depth threshold as a sensitivity criterion. To simulate hearing impairment, stimuli were processed with a 6-channel cochlear-implant vocoder and with a hearing-aid simulation that introduced separate spectral smearing and low-pass filtering. Reaction times were always much slower than under normal hearing, especially for the highest spectral densities. Binaural performance was predicted well by the benchmark race model of binaural independence, which models statistical facilitation of independent monaural channels. For the impaired-hearing simulations this implied a "best-of-both-worlds" principle in which the listeners relied on the hearing-aid ear to detect spectral modulations and on the cochlear-implant ear for temporal-modulation detection. Although singular-value decomposition indicated that the joint spectrotemporal sensitivity matrix could be largely reconstructed from independent temporal and spectral sensitivity functions, in line with time-spectrum separability, a substantial inseparable spectral-temporal interaction was present in all hearing conditions. These results suggest that the reaction-time task yields a valid and effective objective measure of acoustic spectrotemporal-modulation sensitivity.
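The two analyses named in this abstract, the race model of statistical facilitation and the SVD-based separability test, are standard enough to illustrate with a short sketch. The Python below is not the authors' code; the simulated reaction-time distributions, the toy sensitivity matrix, and all parameter values are illustrative assumptions. It shows how an independent-race prediction is built from two monaural reaction-time distributions, and how a separability index follows from the singular values of a spectrotemporal sensitivity matrix.

```python
# Illustrative sketch (not the authors' analysis): race-model prediction for
# independent monaural channels, plus an SVD separability index for a
# spectrotemporal sensitivity matrix. All data below are simulated assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Simulated monaural reaction times (seconds) for the left and right ears.
rt_left = rng.gamma(shape=8.0, scale=0.04, size=5000) + 0.15
rt_right = rng.gamma(shape=9.0, scale=0.04, size=5000) + 0.15

def empirical_cdf(samples, t):
    """Fraction of reaction times at or below each time point in t."""
    return np.searchsorted(np.sort(samples), t, side="right") / samples.size

t = np.linspace(0.1, 1.0, 200)
F_left = empirical_cdf(rt_left, t)
F_right = empirical_cdf(rt_right, t)

# Race model for statistically independent channels: the binaural response is
# triggered by whichever ear detects first, so
#   F_bin(t) = 1 - (1 - F_left(t)) * (1 - F_right(t)).
F_race = 1.0 - (1.0 - F_left) * (1.0 - F_right)

# SVD separability: if the joint spectrotemporal sensitivity matrix S
# (rows = spectral densities, columns = temporal velocities) were fully
# separable, it would be rank 1 (an outer product of a spectral and a temporal
# sensitivity function) and the first singular value would carry all variance.
S = np.array([[1.0, 0.9, 0.6],      # toy sensitivity matrix (assumed values)
              [0.8, 0.9, 0.5],
              [0.3, 0.4, 0.2]])
singular_values = np.linalg.svd(S, compute_uv=False)
separability_index = singular_values[0]**2 / np.sum(singular_values**2)

print(f"Median predicted binaural RT: {t[np.argmax(F_race >= 0.5)]:.3f} s")
print(f"Separability index: {separability_index:.3f}")
```

A separability index near 1 would indicate that the matrix is well described by independent spectral and temporal functions; values clearly below 1 point to the kind of inseparable spectral-temporal interaction the abstract reports.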


Subject(s)
Cochlear Implantation , Cochlear Implants , Speech Perception , Acoustic Stimulation , Auditory Threshold , Hearing , Humans , Reaction Time
2.
Cochlear Implants Int ; 18(5): 266-277, 2017 09.
Article in English | MEDLINE | ID: mdl-28726592

ABSTRACT

OBJECTIVES: This study aimed to improve access to high-frequency interaural level differences (ILDs) by applying extreme frequency compression (FC) in the hearing aid (HA) of 13 bimodal listeners, who used a cochlear implant (CI) and a conventional HA in opposite ears. DESIGN: An experimental signal-adaptive frequency-lowering algorithm was tested that compressed frequencies above 160 Hz into the individual audible range of residual hearing, but only for consonants (adaptive FC), thereby protecting vowel formants with the aim of preserving speech perception. In a cross-over design with at least 5 weeks of acclimatization between sessions, bimodal performance with and without adaptive FC was compared for horizontal sound localization, speech understanding in quiet and in noise, and vowel, consonant, and voice-pitch perception. RESULTS: On average, adaptive FC did not significantly affect any of the test results. However, two subjects who were fitted with a relatively weak frequency-compression ratio showed improved horizontal sound localization. After the study, four subjects preferred adaptive FC, four preferred standard frequency mapping, and four had no preference. Notably, the subjects who preferred adaptive FC were those with the best performance on all tasks, both with and without adaptive FC. CONCLUSION: At the group level, extreme adaptive FC did not change sound localization or speech understanding in bimodal listeners. Possible reasons are overly strong compression ratios, insufficient residual hearing, or adaptive switching that, although it preserved vowel perception, may have been ineffective in producing consistent ILD cues. Individual results suggested that two subjects were able to integrate the frequency-compressed HA input with that of the CI and benefited from enhanced binaural cues for horizontal sound localization.
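As a rough illustration of what frequency lowering does, the sketch below implements a static logarithmic frequency compression above a cutoff. It is a stand-in under stated assumptions, not the study's proprietary signal-adaptive algorithm: the cutoff follows the 160 Hz figure in the abstract, while the input range and the 1 kHz audible-range limit are invented example values.

```python
# Minimal sketch of static log-frequency compression (an assumption, not the
# study's adaptive algorithm): frequencies above `cutoff_hz` are mapped
# logarithmically into the range [cutoff_hz, max_audible_hz].
import numpy as np

def compress_frequency(f_hz, cutoff_hz=160.0, max_input_hz=8000.0,
                       max_audible_hz=1000.0):
    """Map an input frequency to an output frequency.

    Below the cutoff the mapping is the identity; above it, the octaves
    between cutoff_hz and max_input_hz are squeezed into the octaves between
    cutoff_hz and max_audible_hz (the listener's assumed residual hearing).
    """
    f_hz = np.asarray(f_hz, dtype=float)
    # Compression ratio in the log-frequency (octave) domain.
    ratio = np.log2(max_input_hz / cutoff_hz) / np.log2(max_audible_hz / cutoff_hz)
    octaves_above = np.log2(np.maximum(f_hz, cutoff_hz) / cutoff_hz)
    compressed = cutoff_hz * 2.0 ** (octaves_above / ratio)
    return np.where(f_hz <= cutoff_hz, f_hz, compressed)

# Example: a 4 kHz consonant cue lands well inside a 1 kHz residual range.
for f in (100.0, 500.0, 2000.0, 4000.0, 8000.0):
    print(f"{f:7.0f} Hz -> {float(compress_frequency(f)):7.1f} Hz")
```

The adaptive element of the study's algorithm (switching compression on for consonants only) would sit on top of a mapping like this and is not modeled here.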


Subject(s)
Cochlear Implants , Correction of Hearing Impairment/methods , Hearing Aids , Hearing Loss/rehabilitation , Sound Localization/physiology , Speech Perception/physiology , Aged , Aged, 80 and over , Cochlear Implantation/methods , Combined Modality Therapy , Cues , Female , Hearing Loss/physiopathology , Humans , Male , Middle Aged , Noise , Pitch Perception , Treatment Outcome
3.
Hear Res ; 336: 72-82, 2016 06.
Article in English | MEDLINE | ID: mdl-27178443

ABSTRACT

Interaural differences in sound arrival time (ITD) and in level (ILD) enable us to localize sounds in the horizontal plane, and can support source segregation and speech understanding in noisy environments. It is uncertain whether these cues are also available to hearing-impaired listeners who are bimodally fitted, i.e. with a cochlear implant (CI) and a contralateral hearing aid (HA). Here, we assessed sound localization behavior of fourteen bimodal listeners, all using the same Phonak HA and an Advanced Bionics CI processor, matched with respect to loudness growth. We aimed to determine the availability and contribution of binaural (ILDs, temporal fine structure and envelope ITDs) and monaural (loudness, spectral) cues to horizontal sound localization in bimodal listeners, by systematically varying the frequency band, level and envelope of the stimuli. The sound bandwidth had a strong effect on the localization bias of bimodal listeners, although localization performance was typically poor for all conditions. Responses could be systematically changed by adjusting the frequency range of the stimulus, or by simply switching the HA and CI on and off. Localization responses were largely biased to one side, typically the CI side for broadband and high-pass filtered sounds, and occasionally to the HA side for low-pass filtered sounds. HA-aided thresholds better than 45 dB HL in the frequency range of the stimulus appeared to be a prerequisite, but not a guarantee, for the ability to indicate sound source direction. We argue that bimodal sound localization is likely based on ILD cues, even at frequencies below 1500 Hz for which the natural ILDs are small. These cues are typically perturbed in bimodal listeners, leading to a biased localization percept of sounds. The high accuracy of some listeners could result from a combination of sufficient spectral overlap and loudness balance in bimodal hearing.
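The interaural level difference at the center of this argument is straightforward to compute from the two ear signals. The sketch below is purely illustrative and is not the study's analysis pipeline; the synthetic noise and the 6 dB head-shadow attenuation are assumed values.

```python
# Illustrative sketch: broadband ILD of a sound reaching the two ears, with an
# assumed frequency-independent head-shadow attenuation. Signal and attenuation
# values are assumptions, not measurements from the study.
import numpy as np

rng = np.random.default_rng(1)
fs = 16000                        # sample rate (Hz)
noise = rng.standard_normal(fs)   # 1 s of broadband noise at the source

head_shadow_db = 6.0              # assumed attenuation at the far ear
near_ear = noise                                  # ear facing the source
far_ear = noise * 10.0 ** (-head_shadow_db / 20.0)

def ild_db(left, right):
    """Interaural level difference: RMS level of right re: left, in dB."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return 20.0 * np.log10(rms(right) / rms(left))

print(f"ILD (near ear re: far ear): {ild_db(far_ear, near_ear):.1f} dB")
```

In reality the head shadow, and therefore the ILD, is much weaker below about 1500 Hz, which is why the authors' finding of ILD-based localization at low frequencies is notable.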


Subject(s)
Cochlear Implants , Deafness/therapy , Hearing , Sound Localization , Acoustic Stimulation , Aged , Aged, 80 and over , Auditory Perception , Calibration , Cochlear Implantation , Cues , Female , Hearing Tests , Humans , Male , Middle Aged , Sound , Speech Perception
4.
Acta Otolaryngol ; 136(8): 775-81, 2016 Aug.
Article in English | MEDLINE | ID: mdl-26986743

ABSTRACT

CONCLUSION: In users of a cochlear implant (CI) and a hearing aid (HA) in contralateral ears, frequency-dependent loudness balancing between devices did not, on average, lead to improved speech understanding compared with broadband balancing. However, nine out of 15 bimodal subjects showed significantly better speech understanding with one of the two fittings. OBJECTIVES: Sub-optimal fittings and mismatches in loudness are possible explanations for the large individual differences seen in listeners using bimodal stimulation. METHODS: HA gain was adjusted for soft and loud input sounds in three frequency bands (0-548 Hz, 548-1000 Hz, and >1000 Hz) to match loudness with the CI. This procedure was compared with a simple broadband balancing procedure reflecting current clinical practice. In a three-visit cross-over design with 4 weeks between sessions, speech understanding was tested in quiet and in noise, and questionnaires were administered to assess real-world benefit. RESULTS: Both procedures resulted in comparable HA gains. For speech in noise, a marginal bimodal benefit of 0.3 ± 4 dB was found, with large differences between subjects and spatial configurations. Speech understanding in quiet and in noise did not differ between the two loudness-balancing procedures.
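The frequency-dependent balancing procedure amounts to level-dependent gains applied in the three bands named in the abstract. The sketch below follows that idea under stated assumptions: the band edges come from the abstract, while the filter order, the gain tables, and the soft/loud anchor levels are invented example values (requires NumPy and SciPy).

```python
# Minimal sketch of frequency-dependent gain in the three bands named in the
# abstract (0-548 Hz, 548-1000 Hz, >1000 Hz). Gains and the input level are
# assumed example values, not fitted data from the study.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 16000
band_edges_hz = [(None, 548.0), (548.0, 1000.0), (1000.0, None)]

# Assumed HA gains (dB) per band, interpolated between a "soft" (50 dB SPL)
# and a "loud" (80 dB SPL) setting, as in a two-point loudness balance.
gains_soft_db = np.array([20.0, 25.0, 30.0])
gains_loud_db = np.array([5.0, 10.0, 12.0])

def band_filter(x, lo, hi):
    """Butterworth band selection; lowpass/highpass at the outer bands."""
    if lo is None:
        sos = butter(4, hi, btype="lowpass", fs=fs, output="sos")
    elif hi is None:
        sos = butter(4, lo, btype="highpass", fs=fs, output="sos")
    else:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, x)

def apply_banded_gain(x, input_level_db_spl):
    """Split into three bands, apply level-dependent gain, and recombine."""
    frac = np.clip((input_level_db_spl - 50.0) / 30.0, 0.0, 1.0)
    gains_db = (1.0 - frac) * gains_soft_db + frac * gains_loud_db
    bands = [band_filter(x, lo, hi) for lo, hi in band_edges_hz]
    return sum(10.0 ** (g / 20.0) * b for g, b in zip(gains_db, bands))

rng = np.random.default_rng(2)
speech_like = rng.standard_normal(fs)            # stand-in for a speech frame
aided = apply_banded_gain(speech_like, input_level_db_spl=65.0)
print("Overall gain:", 20 * np.log10(np.std(aided) / np.std(speech_like)), "dB")
```

The broadband comparison procedure would correspond to a single gain applied to the whole signal instead of the per-band table.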


Subject(s)
Cochlear Implants , Loudness Perception , Speech Perception , Aged , Female , Humans , Male , Middle Aged , Noise
5.
Ear Hear ; 37(3): 260-70, 2016.
Article in English | MEDLINE | ID: mdl-26656192

ABSTRACT

OBJECTIVES: The purpose of this study was to improve bimodal benefit in listeners using a cochlear implant (CI) and a hearing aid (HA) in contralateral ears, by matching the time constants and the number of compression channels of the automatic gain control (AGC) of the HA to those of the CI. Equivalent AGCs were hypothesized to support balanced loudness for dynamically changing signals such as speech, and to improve bimodal benefit for speech understanding in quiet and with noise presented from the side(s) at 90 degrees. DESIGN: Fifteen subjects participated in the study, all using the same Advanced Bionics Harmony CI processor and HA (Phonak Naida S IX UP). In a 3-visit crossover design with 4 weeks between sessions, performance was measured using a HA with a standard AGC (syllabic multichannel compression with 1 ms attack time and 50 ms release time) or an AGC that was adjusted to match that of the CI processor (dual AGC broadband compression, 3 and 240 ms attack time, 80 and 1500 ms release time). In all devices, the AGC was activated above a threshold of 63 dB SPL. The authors balanced loudness across the devices for soft and loud input sounds in 3 frequency bands (0 to 548, 548 to 1000, and >1000 Hz). Speech understanding was tested in the free field in quiet and in noise for three spatial speaker configurations, with the target speech always presented from the front. Single-talker noise was presented from either the CI side or the HA side, or uncorrelated stationary speech-weighted noise or single-talker noise was presented from both sides. Questionnaires were administered to assess differences in perception between the two bimodal fittings. RESULTS: Significant bimodal benefit over the CI alone was found only for the AGC-matched HA in the speech tests with single-talker noise. Compared with the standard HA, matched AGC characteristics significantly improved speech understanding in single-talker noise by 1.9 dB when the noise was presented from the HA side. AGC matching increased bimodal benefit nonsignificantly, by 0.6 dB when noise was presented from the CI-implanted side, and by 0.8 dB (single-talker noise) and 1.1 dB (stationary noise) in the more complex configurations with two simultaneous maskers from both sides. In questionnaires, subjects rated the AGC-matched HA higher than the standard HA for understanding one person in quiet and in noise, and for the quality of sounds. When listening to a slightly raised voice, subjects indicated increased listening comfort with matched AGCs. At the end of the study, 9 of 15 subjects preferred to take home the AGC-matched HA, 1 preferred the standard HA, and 5 had no preference. CONCLUSION: For bimodal listening, the AGC-matched HA outperformed the standard HA in speech understanding in noise tasks using a single competing talker, and it was favored in questionnaires and in a subjective preference test. When noise was presented from the HA side, AGC matching yielded an additional benefit of 1.9 dB SNR, even though the HA was on the side with the least favorable SNR in this speaker configuration. These results may indicate better binaural processing with matched AGCs.
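The manipulation in this study is entirely about AGC time constants, which a generic compressor sketch can make concrete. The Python below is not the Phonak or Advanced Bionics implementation; only the 63 dB SPL knee and the attack/release times are taken from the abstract, while the compression ratio and the level scaling are assumed example values.

```python
# Generic sketch of a broadband AGC with separate attack/release time constants
# and a 63 dB SPL knee (knee and time constants from the abstract; compression
# ratio and full-scale calibration are assumed example values).
import numpy as np

def agc(x, fs, attack_ms, release_ms, knee_db_spl=63.0, ratio=3.0,
        full_scale_db_spl=100.0):
    """Feed-forward compressor: gain reduction above the knee, smoothed with an
    attack constant when the level rises and a release constant when it falls."""
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env_db = -120.0                   # smoothed level estimate (dB SPL)
    out = np.empty_like(x)
    for n, sample in enumerate(x):
        inst_db = full_scale_db_spl + 20.0 * np.log10(abs(sample) + 1e-9)
        coeff = a_att if inst_db > env_db else a_rel
        env_db = coeff * env_db + (1.0 - coeff) * inst_db
        over = max(env_db - knee_db_spl, 0.0)
        gain_db = -over * (1.0 - 1.0 / ratio)    # compress only above the knee
        out[n] = sample * 10.0 ** (gain_db / 20.0)
    return out

# Example: a slow stage (240/1500 ms) reacts far more gradually to a level step
# than the fast syllabic stage (1/50 ms) used in the standard HA fitting.
fs = 16000
step = np.concatenate([0.05 * np.ones(fs // 2), 0.5 * np.ones(fs // 2)])
fast = agc(step, fs, attack_ms=1.0, release_ms=50.0)
slow = agc(step, fs, attack_ms=240.0, release_ms=1500.0)
print("Output level 10 ms after the step (fast vs. slow AGC):",
      round(20 * np.log10(fast[fs // 2 + 160]), 1), "dB FS vs.",
      round(20 * np.log10(slow[fs // 2 + 160]), 1), "dB FS")
```

Matching the HA to the CI in the study means giving both devices the slow dual-stage behavior, so that a common level change produces similar gain trajectories, and hence more stable interaural level cues, in the two ears.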


Subject(s)
Cochlear Implantation , Deafness/rehabilitation , Hearing Aids , Speech Perception , Aged , Cochlear Implants , Combined Modality Therapy , Female , Humans , Male , Middle Aged