1.
PeerJ ; 12: e17104, 2024.
Article in English | MEDLINE | ID: mdl-38680894

ABSTRACT

Advancements in cochlear implants (CIs) have led to a significant increase in bilateral CI users, especially among children. Yet, most bilateral CI users do not fully achieve the intended binaural benefit due to potential limitations in signal processing and/or surgical implant positioning. One crucial auditory cue that normal-hearing (NH) listeners can benefit from is the interaural time difference (ITD), i.e., the time difference between the arrival of a sound at the two ears. ITD sensitivity is thought to rely heavily on the effective utilization of temporal fine structure (very rapid oscillations in sound). Unfortunately, most current CIs do not transmit such true fine structure. Nevertheless, bilateral CI users have demonstrated sensitivity to ITD cues delivered through envelope or interaural pulse time differences, i.e., the time gap between the pulses delivered to the two implants. However, their ITD sensitivity is significantly poorer than that of NH individuals, and it degrades further at higher CI stimulation rates, especially when the rate exceeds 300 pulses per second. The overall purpose of this research thread is to improve spatial hearing abilities in bilateral CI users. This study aims to develop electroencephalography (EEG) paradigms that can be used in clinical settings to assess and optimize the delivery of ITD cues, which are crucial for spatial hearing in everyday life. The research objective of this article was to determine the effect of CI stimulation pulse rate on ITD sensitivity, and to characterize the rate-dependent degradation in ITD perception using EEG measures. To develop protocols for bilateral CI studies, EEG responses were obtained from NH listeners using sinusoidally amplitude-modulated (SAM) tones and filtered clicks with changes in either fine-structure ITD (ITDFS) or envelope ITD (ITDENV).
Multiple EEG responses were analyzed, including subcortical auditory steady-state responses (ASSRs) and cortical auditory evoked potentials (CAEPs) elicited by stimulus onset, offset, and changes. Results indicated that acoustic change complex (ACC) responses elicited by ITDENV changes were significantly smaller than, or absent compared to, those elicited by ITDFS changes. The ACC morphologies evoked by ITDFS changes were similar to onset and offset CAEPs, although peak latencies were longest for ACC responses and shortest for offset CAEPs. The high-frequency stimuli clearly elicited subcortical ASSRs, but these were smaller than those evoked by SAM tones with lower carrier frequencies. The 40-Hz ASSRs decreased with increasing carrier frequency. Filtered clicks elicited larger ASSRs than high-frequency SAM tones, with the order being 40 > 160 > 80 > 320 Hz for both stimulus types. Wavelet analysis revealed a clear interaction between detectable transient CAEPs and 40-Hz ASSRs in the time-frequency domain for SAM tones with a low carrier frequency.
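The SAM-tone stimuli with an envelope or fine-structure ITD described above can be sketched as follows. This is a minimal illustration, not the study's stimulus code; the carrier/modulator frequencies, sampling rate, and function name are illustrative choices.

```python
import math

def sam_tone(fc, fm, dur, fs, itd_env=0.0, itd_fs=0.0):
    """Two-channel SAM tone; an ITD (in seconds) can be imposed on the
    envelope (itd_env) and/or the fine structure (itd_fs) of the right ear."""
    n = int(dur * fs)
    left, right = [], []
    for i in range(n):
        t = i / fs
        # left ear serves as the reference channel
        left.append((1 + math.cos(2 * math.pi * fm * t))
                    * math.sin(2 * math.pi * fc * t))
        # right ear: envelope and/or carrier delayed by the respective ITD
        right.append((1 + math.cos(2 * math.pi * fm * (t - itd_env)))
                     * math.sin(2 * math.pi * fc * (t - itd_fs)))
    return left, right

# 4-kHz carrier, 40-Hz modulator, 500-microsecond envelope ITD
left, right = sam_tone(fc=4000, fm=40, dur=0.01, fs=16000, itd_env=500e-6)
```

Setting only `itd_env` delays the slow envelope while leaving the fine structure diotic, which is the ITDENV condition; `itd_fs` delays the carrier instead (ITDFS).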


Subject(s)
Cochlear Implants , Cues , Electroencephalography , Humans , Electroencephalography/methods , Acoustic Stimulation/methods , Sound Localization/physiology , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Time Factors
2.
J Exp Psychol Hum Percept Perform ; 49(11): 1377-1394, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37870818

ABSTRACT

Moment-to-moment variations in hearing and speech perception have long been observed. Depending on the researcher's theoretical position, the observed fluctuations have been attributed to measurement error or to internal, nonsensory factors such as fluctuations in attention. While cognitive performance has been shown to fluctuate from day to day over longer time periods, such fluctuations have not been quantified for speech perception, despite being well recognized by clinical audiologists and hearing-impaired patients. In three studies, we aimed to explore and quantify the magnitude of daily variability in speech perception and to investigate whether such variability goes beyond test unreliability. We also asked whether intraindividual variability depends on overall speech perception performance as observed in different groups of individuals. Older adults with objective hearing impairment, most of them using hearing aids (N1 = 45), older adults with subjective hearing problems but no hearing aids (N2 = 113), and younger adults without hearing problems (N3 = 20) participated in three ecological momentary assessment studies. They performed a digit-in-noise test two to three times a day for several weeks. Variance-heterogeneous linear mixed-effects models indicated reliable intraindividual variability in speech perception and substantial individual differences in daily variability. Higher average speech perception performance proved to be a protective factor against daily fluctuations. These studies show that day-to-day variations in speech perception cannot simply be attributed to test unreliability, and they pave the way for investigating how psychological states that vary not from moment to moment but from day to day predict variations in speech perception. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
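The distinction between day-to-day (intraindividual) variability and stable between-person differences can be illustrated with a toy variance split. The SRT values below are hypothetical, and this simple decomposition merely stands in for the variance-heterogeneous mixed-effects models actually used in the studies.

```python
from statistics import mean, pvariance

# Hypothetical daily digit-in-noise SRTs (dB SNR) for three listeners over five days
srts = {
    "P1": [-7.1, -6.8, -7.4, -6.5, -7.0],
    "P2": [-9.3, -9.1, -9.4, -9.2, -9.3],
    "P3": [-4.0, -5.2, -3.6, -4.8, -4.4],
}

person_means = {p: mean(v) for p, v in srts.items()}
# day-to-day (intraindividual) fluctuation, averaged over listeners
within_var = mean(pvariance(v) for v in srts.values())
# stable individual differences between listeners
between_var = pvariance(list(person_means.values()))
```

In this toy data set, the best performer (P2, around -9.3 dB SNR) also fluctuates the least from day to day, mirroring the reported protective effect of higher average speech perception.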


Subject(s)
Speech Perception , Humans , Aged , Hearing , Noise
3.
Int J Audiol ; : 1-13, 2023 Apr 20.
Article in English | MEDLINE | ID: mdl-37079087

ABSTRACT

OBJECTIVE: This study investigated the adjustment behaviour of hearing aid (HA) users participating in a semi-supervised self-adjustment fine-tuning procedure for HAs. The aim was to link behaviour with the reproducibility and duration of the adjustments. DESIGN: Participants used a two-dimensional user interface to identify their HA gain preferences while listening to realistic sound scenes presented in a laboratory environment. The interface allowed participants to adjust amplitude (vertical axis) and spectral slope (horizontal axis) simultaneously. Participants were clustered according to their interaction with the user interface, and their search directions were analysed. STUDY SAMPLE: Twenty older, experienced HA users participated in this study. RESULTS: We identified four archetypes of adjustment behaviour (curious, cautious, semi-browsing, and full-on browsing) by analysing the trace points of all measurements for each participant. Furthermore, participants used predominantly horizontal or vertical paths when searching for their preference. Neither the archetype, nor the search directions, nor the participants' technology commitment was predictive of the reproducibility or the adjustment duration. CONCLUSIONS: The findings suggest that enforcing a specific adjustment behaviour or search direction is not necessary to obtain fast, reliable self-adjustments. Furthermore, no strict requirements with respect to technology commitment are necessary.

4.
Int J Audiol ; 62(2): 159-171, 2023 02.
Article in English | MEDLINE | ID: mdl-35076330

ABSTRACT

OBJECTIVE: This study investigated the effects of different adjustment criteria and sound scenes on self-adjusted hearing-aid gain settings. Self-adjusted settings were evaluated for speech recognition in noise, perceived listening effort, and preference. DESIGN: This study evaluated a semi-supervised self-adjustment fine-tuning procedure that presents realistic everyday sound scenes in a laboratory environment, uses a two-dimensional user interface, and enables simultaneous changes in amplitude and spectral slope. While exploring the two-dimensional space of parameter settings, the hearing-aid users were instructed to optimise either listening comfort or speech understanding. STUDY SAMPLE: Twenty experienced hearing aid users (median age 69.5 years) participated in this study. RESULTS: The adjustment criterion and the sound scenes had a significant effect on the preferred gain settings. No differences in the signal-to-noise ratios required for 50% speech intelligibility or in the perceived listening effort were observed between the settings obtained with the two adjustment criteria. There was a preference for the self-adjusted settings over the prescriptive first fit. CONCLUSIONS: Listeners could reliably select their preferred gains according to the two adjustment criteria and for different speech stimuli.
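A two-dimensional interface of the kind described above (overall amplitude on one axis, spectral slope on the other) can be sketched as a mapping to per-band gains. This is a minimal illustration under assumed conventions; the pivot frequency, band layout, and function names are not taken from the study.

```python
import math

def band_gains(overall_db, slope_db_per_oct, freqs_hz, pivot_hz=1000.0):
    """Map a 2-D adjustment (overall level, spectral slope) to per-band gains.
    The slope tilts the gain in dB per octave around a pivot frequency."""
    return [overall_db + slope_db_per_oct * math.log2(f / pivot_hz)
            for f in freqs_hz]

freqs = [250, 500, 1000, 2000, 4000]
gains = band_gains(overall_db=6.0, slope_db_per_oct=3.0, freqs_hz=freqs)
```

Moving along the vertical axis changes `overall_db` for all bands equally, while the horizontal axis changes only the tilt, so the two interface dimensions stay perceptually distinct.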


Subject(s)
Hearing Aids , Hearing Loss, Sensorineural , Speech Perception , Humans , Aged , Hearing Loss, Sensorineural/rehabilitation , Auditory Perception , Noise/adverse effects
5.
Int J Audiol ; 62(6): 552-561, 2023 06.
Article in English | MEDLINE | ID: mdl-35722856

ABSTRACT

OBJECTIVE: The International Classification of Functioning, Disability and Health (ICF) is a classification of health and health-related domains created by the World Health Organization; it can be used as a standard to evaluate the health and disability of individuals. The ICF Core Set for Hearing Loss (CSHL) comprises the ICF categories found to be relevant to hearing loss (HL) and its consequences for daily life. This study aimed to map the content of a database gathered at Hörzentrum Oldenburg gGmbH, which included HL medical assessments and audiological data, to the ICF. DESIGN: ICF linking rules were applied to the assessment methods, including medical interviews, ear examinations, pure-tone audiometry, Adaptive Categorical Loudness Scaling, and a speech intelligibility test. STUDY SAMPLE: 1316 subjects. RESULTS: In total, 44% of the brief and 18% of the comprehensive CSHL categories were addressed. The hearing functions were broadly evaluated, whereas "Activities and Participation" and "Environmental Factors" were poorly examined (17% and 12% of the comprehensive CSHL categories, respectively). CONCLUSIONS: The correlation of HL with limitations in day-to-day activities, performance restrictions, and environmental conditions was poorly addressed. This study shows the importance of complementing these assessment methods with approaches that capture the daily-life challenges caused by HL in rehabilitation.


Subject(s)
Audiology , Deafness , Hearing Loss , Humans , International Classification of Functioning, Disability and Health , Hearing Loss/diagnosis , Hearing , Activities of Daily Living , Disability Evaluation
6.
Trends Hear ; 26: 23312165221143901, 2022.
Article in English | MEDLINE | ID: mdl-36537084

ABSTRACT

Speech recognition in rooms requires the temporal integration of reflections which arrive with a certain delay after the direct sound. It is commonly assumed that there is a certain temporal window of about 50-100 ms, during which reflections can be integrated with the direct sound, while later reflections are detrimental to speech intelligibility. This concept was challenged in a recent study by employing binaural room impulse responses (RIRs) with systematically varied interaural phase differences (IPDs) and amplitude of the direct sound and a variable number of reflections delayed by up to 200 ms. When amplitude or IPD favored late RIR components, normal-hearing (NH) listeners appeared to be capable of focusing on these components rather than on the precedent direct sound, which contrasted with the common concept of considering early RIR components as useful and late components as detrimental. The present study investigated speech intelligibility in the same conditions in hearing-impaired (HI) listeners. The data indicate that HI listeners were generally less able to "ignore" the direct sound than NH listeners, when the most useful information was confined to late RIR components. Some HI listeners showed a remarkable inability to integrate across multiple reflections and to optimally "shift" their temporal integration window, which was quite dissimilar to NH listeners. This effect was most pronounced in conditions requiring spatial and temporal integration and could provide new challenges for individual prediction models of binaural speech intelligibility.


Subject(s)
Hearing Loss , Speech Perception , Humans , Auditory Threshold/physiology , Speech Perception/physiology , Speech Intelligibility , Hearing/physiology
7.
Int J Audiol ; : 1-13, 2022 Nov 28.
Article in English | MEDLINE | ID: mdl-36441177

ABSTRACT

OBJECTIVE: To develop the Cantonese matrix (YUEmatrix) test according to the international standard procedure and to examine possible differences in outcomes for another tonal language. DESIGN: A 50-word Cantonese base matrix was established. Word-specific speech recognition functions, speech recognition thresholds (SRTs), and slopes were obtained. The speech material was homogenised in intelligibility by applying level corrections of up to ± 3 dB. Subsequently, the YUEmatrix test was evaluated in five respects: training effect, test-list equivalence, test-retest reliability, establishment of reference data for normal-hearing Cantonese speakers, and comparison with the Cantonese Hearing-In-Noise Test. STUDY SAMPLE: Overall, 64 normal-hearing native Cantonese-speaking listeners. RESULTS: SRT measurements with adaptive procedures resulted in a reference SRT of -9.7 ± 0.7 dB SNR for the open-set and -11.1 ± 1.2 dB SNR for the closed-set response format. Fixed-SNR measurements suggested a test-specific speech intelligibility function slope of 15.5 ± 0.7%/dB. Seventeen 10-sentence base test lists were confirmed to be equivalent with respect to speech intelligibility. No training effect was observed after two measurements with 20-sentence lists. CONCLUSIONS: The YUEmatrix yields results comparable to matrix tests in other languages, including Mandarin. Level adjustments to homogenise sentences appear to be less effective for tonal languages than for most other languages tested so far.

8.
Hear Res ; 426: 108609, 2022 12.
Article in English | MEDLINE | ID: mdl-36209657

ABSTRACT

Plomp introduced an empirical separation of the increased speech recognition thresholds (SRTs) in listeners with a sensorineural hearing loss into an attenuation (A) component (which can be compensated by amplification) and a non-compensable distortion (D) component. Our own previous research backed up this notion with speech recognition models that derive their SRT prediction from the individual audiogram, with or without a psychoacoustic measure of supra-threshold processing deficits. To determine the precision with which the A and D components can be separated for the individual listener using various individual measures and individualized models, SRTs were obtained from 40 listeners with varying degrees of hearing impairment in quiet, stationary noise, and fluctuating noise (ICRA 5-250 and babble). Both the clinical audiogram and an adaptive, precise sweep audiogram were obtained, as well as tone-in-noise detection thresholds at four frequencies, to characterize the individual hearing impairment. For predicting the SRT, the FADE model (which is based on machine learning) was used with either of the two audiogram procedures and, optionally, the individual tone-in-noise detection thresholds. The results indicate that the precisely measured swept-tone audiogram allows for a more precise prediction of the individual SRT than the clinical audiogram (RMS error of 4.3 dB vs. 6.4 dB, respectively). While estimates from the precise audiogram and FADE performed equally well in predicting the individual A and D components, the further refinement of including the tone-in-noise detection thresholds with FADE led to a slight improvement in prediction accuracy (RMS error of 3.3 dB, 4.6 dB, and 1.4 dB for the SRT, A component, and D component, respectively). Hence, applying FADE is advantageous for scientific purposes where a consistent modeling of different psychoacoustical effects in the same listener with a minimum of assumptions is desirable.
For clinical purposes, however, a precisely measured audiogram and an estimation of the expected D component using linear regression appear to be a satisfactory first step towards precision audiology.
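Plomp's A/D decomposition described above can be illustrated with a toy calculation (all SRT values below are hypothetical): the D component elevates the SRT in noise, where audibility no longer limits performance, while A and D together elevate the SRT in quiet.

```python
# Hypothetical SRTs; the normal-hearing reference values are illustrative
srt_quiet_nh, srt_noise_nh = 20.0, -7.0   # dB SPL in quiet, dB SNR in noise
srt_quiet_hi, srt_noise_hi = 55.0, -2.5   # measured for one hypothetical listener

# Plomp-style decomposition: D elevates the SRT in noise,
# while A + D together elevate the SRT in quiet.
d_component = srt_noise_hi - srt_noise_nh                   # non-compensable distortion
a_component = (srt_quiet_hi - srt_quiet_nh) - d_component   # compensable attenuation
```

For this listener the split would be D = 4.5 dB and A = 30.5 dB, i.e., amplification could compensate most, but not all, of the SRT elevation.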


Subject(s)
Hearing Loss , Speech Perception , Humans , Psychoacoustics , Auditory Threshold , Noise/adverse effects
9.
Front Neurol ; 13: 959582, 2022.
Article in English | MEDLINE | ID: mdl-36188360

ABSTRACT

For characterizing the complexity of hearing deficits, it is important to consider different aspects of auditory functioning in addition to the audiogram. For this purpose, extensive test batteries have been developed aiming to cover all relevant aspects as defined by experts or model assumptions. However, as the assessment time of physicians is limited, such test batteries are often not used in clinical practice. Instead, fewer measures are used, which vary across clinics. This study aimed at proposing a flexible data-driven approach for characterizing distinct patient groups (patient stratification into auditory profiles) based on one prototypical database (N = 595) containing audiogram data, loudness scaling, speech tests, and anamnesis questions. To further maintain the applicability of the auditory profiles in clinical routine, we built random forest classification models based on a reduced set of audiological measures which are often available in clinics. Different parameterizations regarding binarization strategy, cross-validation procedure, and evaluation metric were compared to determine the optimum classification model. Our data-driven approach, involving model-based clustering, resulted in a set of 13 patient groups, which serve as auditory profiles. The 13 auditory profiles separate patients within certain ranges across audiological measures and are audiologically plausible. Both a normal hearing profile and profiles with varying extents of hearing impairments are defined. Further, a random forest classification model with a combination of a one-vs.-all and one-vs.-one binarization strategy, 10-fold cross-validation, and the kappa evaluation metric was determined as the optimal model. With the selected model, patients can be classified into 12 of the 13 auditory profiles with adequate precision (mean across profiles = 0.9) and sensitivity (mean across profiles = 0.84). 
The proposed approach consequently allows the generation of audiologically plausible, interpretable, data-driven clinical auditory profiles, providing an efficient way of characterizing hearing deficits while maintaining clinical applicability. The method should, by design, be applicable to all audiological data sets from clinics or research, be flexible enough to summarize information across databases by means of profiles, and be extensible toward aided measurements, fitting parameters, and further information from databases.
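The kappa evaluation metric used above for selecting the classification model can be sketched as follows. The implementation of Cohen's kappa is standard; the toy profile labels are invented, and the random forest itself is not reproduced here.

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    labels = sorted(set(y_true) | set(y_pred))
    n = len(y_true)
    p_obs = sum(t == p for t, p in zip(y_true, y_pred)) / n
    p_chance = sum(
        (y_true.count(c) / n) * (y_pred.count(c) / n) for c in labels
    )
    return (p_obs - p_chance) / (1 - p_chance)

# toy example: true vs. predicted auditory-profile labels
kappa = cohens_kappa(["A", "A", "B", "B", "C"], ["A", "A", "B", "C", "C"])
```

Unlike raw accuracy, kappa stays informative when the 13 profiles are unevenly populated, which is why it is a common choice for imbalanced multi-class problems like this one.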

10.
Front Neurosci ; 16: 931748, 2022.
Article in English | MEDLINE | ID: mdl-36071716

ABSTRACT

Recent studies on loudness perception of binaural broadband signals in hearing-impaired listeners found large individual differences, suggesting the use of such signals in hearing aid fitting. Likewise, clinical cochlear implant (CI) fitting with narrowband/single-electrode signals might cause suboptimal loudness perception in bilateral and bimodal CI listeners. Here, spectral and binaural loudness summation was investigated in normal-hearing (NH) listeners, bilateral CI (biCI) users, and unilateral CI (uCI) users with normal hearing in the unaided ear to assess the relevance of binaural/bilateral fitting in CI users. To compare the three groups, categorical loudness scaling was performed for an equal categorical loudness noise (ECLN) consisting of the sum of six spectrally separated third-octave noises at equal loudness. The acoustical ECLN procedure was adapted to an equivalent procedure in the electrical domain using direct stimulation. To ensure the same broadband loudness in binaural measurements with simultaneous electrical and acoustical stimulation, a modified binaural ECLN was introduced and cross-validated with self-adjusted loudness in a loudness balancing experiment. Results showed a higher (spectral) loudness summation of the six equally loud narrowband signals in the ECLN in CI users compared to NH listeners. Binaural loudness summation was found for all three listener groups (NH, uCI, and biCI). No increased binaural loudness summation could be found for the current uCI and biCI listeners compared to the NH group. In uCI users, loudness balancing between narrowband signals and single electrodes did not automatically result in a balanced loudness perception across ears, emphasizing the importance of binaural/bilateral fitting.

11.
Front Neurol ; 13: 960012, 2022.
Article in English | MEDLINE | ID: mdl-36081868

ABSTRACT

For supporting clinical decision-making in audiology, Common Audiological Functional Parameters (CAFPAs) were suggested as an interpretable intermediate representation of audiological information taken from various diagnostic sources within a clinical decision-support system (CDSS). Ten different CAFPAs were proposed to represent specific functional aspects of the human auditory system, namely hearing threshold, supra-threshold deficits, binaural hearing, neural processing, cognitive abilities, and a socio-economic component. CAFPAs were established as a viable basis for deriving audiological findings and treatment recommendations, and it has been demonstrated that model-predicted CAFPAs, with machine learning models trained on expert-labeled patient cases, are sufficiently accurate to be included in a CDSS, although further validation by experts is required. The present study aimed to validate model-predicted CAFPAs based on previously unlabeled cases from the same data set. Here, we ask to what extent domain experts agree with the model-predicted CAFPAs and whether potential disagreement can be understood in terms of patient characteristics. To these aims, an expert survey was designed and administered to two highly experienced audiology specialists. They were asked to evaluate model-predicted CAFPAs and to estimate audiological findings from the audiological information about the patients, both of which were presented to them simultaneously. The results revealed strong relative agreement between the two experts and, importantly, between the experts and the predictions for all CAFPAs, except for those related to neural processing and binaural hearing. It turned out, however, that the experts tended to score CAFPAs over a larger value range but, on average across patients, with smaller scores than the machine learning models.
For the hearing-threshold-associated CAFPAs at frequencies below 0.75 kHz and for the cognitive CAFPA, not only the relative agreement but also the absolute agreement between the models and the experts was very high. For those CAFPAs with an average difference between the model- and expert-estimated values, patient characteristics were predictive of the disagreement. The findings are discussed in terms of how they can help to further improve model-predicted CAFPAs for incorporation in a CDSS for audiology.

12.
Hear Res ; 420: 108507, 2022 07.
Article in English | MEDLINE | ID: mdl-35484022

ABSTRACT

Spatial noise reduction algorithms ("beamformers") can considerably improve speech reception thresholds (SRTs) for bimodal cochlear implant (CI) users. The goal of this study was to model SRTs and the SRT benefit due to beamformers for bimodal CI users. Two existing model approaches, varying in computational complexity and binaural processing assumptions, were compared: (i) the framework of auditory discrimination experiments (FADE) and (ii) the binaural speech intelligibility model (BSIM), both with CI and aided hearing-impaired front-ends. The exact same acoustic scenarios and open-access beamformers as in the comparison clinical study Zedan et al. (2021) were used to quantify goodness of prediction. FADE was capable of modeling SRTs ab initio, i.e., no calibration of the model was necessary to achieve high correlations and low root-mean-square errors (RMSE) with respect to both measured SRTs (r = 0.85, RMSE = 2.8 dB) and measured SRT benefits (r = 0.96). BSIM achieved somewhat poorer predictions of both measured SRTs (r = 0.78, RMSE = 6.7 dB) and measured SRT benefits (r = 0.91) and needed to be calibrated to match the average SRT in one condition. The greatest prediction deviations for BSIM were observed in diffuse multi-talker babble noise and were not found with FADE. The SRT-benefit predictions of both models were similar to the instrumental signal-to-noise ratio (iSNR) improvements due to the beamformers. This indicates that FADE is preferable for modeling absolute SRTs. However, for predicting the SRT benefit due to spatial noise reduction algorithms in bimodal CI users, the average iSNR is a much simpler approach with similar performance.
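The iSNR benefit mentioned above can be sketched as the output SNR of a beamformer minus its input SNR, computed on separately available signal and noise paths. The toy "beamformer" below simply attenuates the noise path by a factor of two and is purely illustrative.

```python
import math

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB from sample sequences."""
    p_sig = sum(s * s for s in signal) / len(signal)
    p_noise = sum(n * n for n in noise) / len(noise)
    return 10 * math.log10(p_sig / p_noise)

def isnr_benefit(sig_in, noise_in, sig_out, noise_out):
    """Instrumental SNR improvement: output SNR minus input SNR."""
    return snr_db(sig_out, noise_out) - snr_db(sig_in, noise_in)

# toy beamformer: leaves the speech path intact, halves the noise amplitude
sig = [1.0, -1.0, 1.0, -1.0]
noise = [0.5, 0.5, -0.5, -0.5]
benefit = isnr_benefit(sig, noise, sig, [n / 2 for n in noise])
```

Halving the noise amplitude quarters its power, so the benefit here is 10·log10(4) ≈ 6 dB, independent of the absolute signal level.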


Subject(s)
Cochlear Implantation , Cochlear Implants , Speech Perception , Algorithms , Hearing , Speech Intelligibility
13.
J Acoust Soc Am ; 151(3): 1417, 2022 03.
Article in English | MEDLINE | ID: mdl-35364918

ABSTRACT

Automatic speech recognition (ASR) has made major progress based on deep machine learning, which has motivated the use of deep neural networks (DNNs) as perception models, specifically to predict human speech recognition (HSR). This study investigates whether a modeling approach based on a DNN that serves as a phoneme classifier [Spille, Ewert, Kollmeier, and Meyer (2018). Comput. Speech Lang. 48, 51-66] can predict HSR for subjects with different degrees of hearing loss listening to speech embedded in different complex noises. The eight noise signals range from simple stationary noise to a single competing talker and are added to matrix sentences, which are presented to 20 hearing-impaired (HI) listeners (categorized into three groups with different types of age-related hearing loss) to measure their speech recognition threshold (SRT), i.e., the signal-to-noise ratio with a 50% word recognition rate. These SRTs are compared to predictions obtained from the ASR-based model using degraded feature representations that take into account the individual hearing loss of the participants, captured by a pure-tone audiogram. Additionally, SRTs obtained from eight normal-hearing (NH) listeners are analyzed. For the NH subjects and the three groups of HI listeners, the average SRT prediction error is below 2 dB, which is lower than the errors of the baseline models.
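The SRT definition used above (the SNR at which 50% of words are recognized) can be illustrated with a logistic psychometric function. The SRT and slope values below are illustrative, not fitted to the study's data.

```python
import math

def word_recognition(snr_db, srt_db=-7.0, slope_per_db=0.15):
    """Logistic psychometric function: fraction of words recognized at a
    given SNR; slope_per_db is the slope at the 50% point (per dB)."""
    return 1.0 / (1.0 + math.exp(-4.0 * slope_per_db * (snr_db - srt_db)))

# by definition, recognition is exactly 50% at the SRT
p_at_srt = word_recognition(-7.0)
```

Adaptive SRT procedures such as the matrix test effectively search for the SNR where this function crosses 0.5; a model predicts the SRT well if its own psychometric function crosses 0.5 at nearly the same SNR.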


Subject(s)
Deep Learning , Presbycusis , Speech Perception , Hearing/physiology , Humans , Speech , Speech Perception/physiology
14.
Int J Audiol ; 61(11): 965-974, 2022 Nov.
Article in English | MEDLINE | ID: mdl-34612124

ABSTRACT

OBJECTIVE: This study investigated whether individual preferences with respect to the trade-off between a good signal-to-noise ratio and a distortion-free speech target were stable across different masking conditions and whether simple adjustment methods could be used to identify subjects as either "noise haters" or "distortion haters". DESIGN: In each masking condition, subjects could adjust the target speech level according to their preferences by employing (i) linear gain, or gain at the cost of (ii) clipping distortions or (iii) compression distortions. Comparing these processing conditions allowed investigating the preferred trade-off between distortions and noise disturbance. STUDY SAMPLE: Thirty subjects differing widely in hearing status (normal-hearing to moderately impaired) and age (23-85 years). RESULTS: High test-retest stability of individual preferences was found for all modification schemes. The preference adjustments suggested that subjects could be consistently categorised along a scale from "noise haters" to "distortion haters", and this preference trait remained stable across all maskers, spatial conditions, and types of distortions. CONCLUSIONS: Employing quick self-adjustment to collect listening preferences in complex listening conditions revealed a stable preference trait along the "noise vs. distortions" tolerance dimension. This could potentially help in fitting modern hearing aid algorithms to the individual user.
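The three gain schemes compared above can be sketched as simple sample-wise operations. This is a rough illustration of the trade-off, not the study's signal processing: the clipping limit and the instantaneous power-law compression (and its exponent) are assumed placeholders.

```python
import math

def linear_gain(x, g):
    """(i) More level without distortion."""
    return [g * s for s in x]

def clipped_gain(x, g, limit=1.0):
    """(ii) More level at the cost of clipping distortions."""
    return [max(-limit, min(limit, g * s)) for s in x]

def compressed_gain(x, g, exponent=0.5):
    """(iii) More level at the cost of compression distortions
    (instantaneous power-law compression; exponent is illustrative)."""
    return [math.copysign(abs(g * s) ** exponent, s) for s in x]

speech = [0.1, 0.5, -0.9]
boosted = clipped_gain(speech, 2.0)   # the -0.9 sample clips at the limit
```

Linear gain improves the SNR without artifacts but may exceed the output range; the other two schemes buy extra level at the cost of distortion, which is exactly the trade-off subjects adjusted.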


Subject(s)
Hearing Aids , Hearing Loss, Sensorineural , Speech Perception , Humans , Young Adult , Adult , Middle Aged , Aged , Aged, 80 and over , Hearing , Hearing Tests , Noise/adverse effects , Hearing Loss, Sensorineural/rehabilitation
15.
Int J Audiol ; 61(3): 205-219, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34081564

ABSTRACT

OBJECTIVE: A model-based determination of the average supra-threshold ("distortion") component of hearing impairment which limits the benefit of hearing aid amplification. DESIGN: Published speech recognition thresholds (SRTs) were predicted with the framework for auditory discrimination experiments (FADE), which simulates recognition processes, the speech intelligibility index (SII), which exploits frequency-dependent signal-to-noise ratios (SNR), and a modified SII with a hearing-loss-dependent band importance function (PAV). Their attenuation-component-based prediction errors were interpreted as estimates of the distortion component. STUDY SAMPLE: Unaided SRTs of 315 hearing-impaired ears measured with the German matrix sentence test in stationary noise. RESULTS: Overall, the models showed root-mean-square errors (RMSEs) of 7 dB, but for steeply sloping hearing loss FADE and PAV were more accurate (RMSE = 9 dB) than the SII (RMSE = 23 dB). Prediction errors of FADE and PAV increased linearly with the average hearing loss. The consideration of the distortion component estimate significantly improved the accuracy of FADE's and PAV's predictions. CONCLUSIONS: The supra-threshold distortion component-estimated by prediction errors of FADE and PAV-seems to increase with the average hearing loss. Accounting for a distortion component improves the model predictions and implies a need for effective compensation strategies for supra-threshold processing deficits with increasing audibility loss.
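The core idea of the SII used above, a band-importance-weighted average of audibility derived from band SNRs, can be sketched as follows. The band weights are hypothetical, and the standardized SII includes additional terms (level distortion, spread of masking) not shown here; the clipping of band SNRs to ±15 dB follows the standard convention.

```python
def sii_like(snr_db_by_band, band_importance):
    """Band-importance-weighted audibility: clip each band SNR to
    [-15, +15] dB, map it linearly to [0, 1], and average with
    importance weights that sum to 1."""
    index = 0.0
    for snr, w in zip(snr_db_by_band, band_importance):
        audibility = (max(-15.0, min(15.0, snr)) + 15.0) / 30.0
        index += w * audibility
    return index

snrs = [10.0, 0.0, -20.0]    # per-band SNR (dB)
weights = [0.3, 0.4, 0.3]    # hypothetical band-importance function
index = sii_like(snrs, weights)
```

A hearing-loss-dependent band importance function, as in the PAV variant above, would replace the fixed `weights` with values that depend on the individual audiogram.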


Subject(s)
Hearing Aids , Hearing Loss, Sensorineural , Hearing Loss , Speech Perception , Auditory Threshold , Hearing Loss/diagnosis , Humans , Speech Intelligibility
17.
Front Psychol ; 12: 634943, 2021.
Article in English | MEDLINE | ID: mdl-34239474

ABSTRACT

The individual loudness perception of a patient plays an important role in hearing aid satisfaction and use in daily life. Hearing aid fitting and development might benefit from individualized loudness models (ILMs), enabling better adaptation of the processing to individual needs. The central question is whether additional parameters are required for ILMs beyond non-linear cochlear gain loss and linear attenuation common to existing loudness models for the hearing impaired (HI). Here, loudness perception in eight normal hearing (NH) and eight HI listeners was measured in conditions ranging from monaural narrowband to binaural broadband, to systematically assess spectral and binaural loudness summation and their interdependence. A binaural summation stage was devised with empirical monaural loudness judgments serving as input. While NH showed binaural inhibition in line with the literature, binaural summation and its inter-subject variability were increased in HI, indicating the necessity for individualized binaural summation. Toward ILMs, a recent monaural loudness model was extended with the suggested binaural stage, and the number and type of additional parameters required to describe and to predict individual loudness were assessed. In addition to one parameter for the individual amount of binaural summation, a bandwidth-dependent monaural parameter was required to successfully account for individual spectral summation.

18.
Front Psychol ; 12: 623670, 2021.
Article in English | MEDLINE | ID: mdl-33841255

ABSTRACT

Generations of researchers have observed a mismatch between headphone and loudspeaker presentation: the sound pressure level at the eardrum generated by a headphone has to be about 6 dB higher than the level created by a loudspeaker that elicits the same loudness. While it has been shown that this effect vanishes if the same waveforms are generated at the eardrum in a blind comparison, the origin of the mismatch is still unclear. We present new data on the issue that systematically characterize this mismatch under variation of the stimulus frequency, the presentation room, and the binaural parameters of the headphone presentation. Subjects adjusted the playback level of a headphone presentation to the same loudness as a loudspeaker presentation, and the levels at the eardrum were determined through appropriate transfer function measurements. Identical experiments were conducted in Oldenburg and Aachen with 40 normal-hearing subjects, including 14 who participated at both sites. Our data verify a mismatch between loudspeaker and binaural headphone presentation, especially at low frequencies. This mismatch depends on the room acoustics and on the interaural coherence in both presentation modes. It vanishes for high frequencies and broadband signals if individual differences in the sound transfer to the eardrums are accounted for. Moreover, small acoustic and non-acoustic differences in an anechoic reference environment (Oldenburg vs. Aachen) exert a large effect on the recorded loudness mismatch, whereas no such large effect of the respective room was observed across the moderately reverberant rooms at both sites. Hence, the inconclusive findings from the literature appear to be related to the experienced disparity between headphone and loudspeaker presentation, where even small differences in (anechoic) room acoustics significantly change the response behavior of the subjects.
Moreover, individual factors like loudness summation appear to be only loosely connected to the observed mismatch, i.e., no direct prediction is possible from individual binaural loudness summation to the observed mismatch. These findings - even though not completely explainable by the yet limited amount of parameter variations performed in this study - have consequences for the comparability of experiments using loudspeakers with conditions employing headphones or other ear-level hearing devices.
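Once the transfer functions are measured, the matching procedure above reduces to simple level arithmetic: the level at the eardrum is the playback level plus the measured transfer-function gain, and the reported mismatch is the difference between the two eardrum levels at equal loudness. A sketch with purely hypothetical numbers (none of these values are taken from the study):

```python
def eardrum_level_db(playback_level_db, tf_gain_db):
    """Level at the eardrum (dB) = playback level plus the measured
    transfer-function gain from the transducer to the eardrum."""
    return playback_level_db + tf_gain_db


# Hypothetical example: headphone level adjusted to equal loudness
ls_level = 60.0  # loudspeaker playback level (dB)
hp_level = 58.0  # headphone playback level after loudness matching (dB)
tf_ls = 8.0      # loudspeaker-to-eardrum gain (dB), incl. room and HRTF
tf_hp = 16.0     # headphone-to-eardrum gain (dB)

mismatch = eardrum_level_db(hp_level, tf_hp) - eardrum_level_db(ls_level, tf_ls)
# positive mismatch: the headphone needs a higher eardrum level
# than the loudspeaker to be judged equally loud
```

With these example numbers the mismatch comes out at 6 dB, the order of magnitude classically reported; the actual per-subject values in the study vary with frequency, room, and interaural coherence.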

19.
Trends Hear ; 25: 23312165211005931, 2021.
Article in English | MEDLINE | ID: mdl-33926327

ABSTRACT

This study investigated the speech intelligibility benefit of using two different spatial noise reduction algorithms in cochlear implant (CI) users who use a hearing aid (HA) on the contralateral side (bimodal CI users). The study controlled for head movements by using head-related impulse responses to simulate a realistic cafeteria scenario, and controlled for HA and CI manufacturer differences by using the master hearing aid platform (MHA) to apply both the hearing loss compensation and the noise reduction algorithms (beamformers). Ten bimodal CI users with moderate to severe hearing loss contralateral to their CI participated in the study, and data from nine listeners were included in the data analysis. The beamformers evaluated were the adaptive differential microphones (ADM), implemented independently on each side of the listener, and the binaurally implemented minimum variance distortionless response (MVDR). For frontal speech and stationary noise from either the left or the right, improvements (reductions) of the speech reception threshold of 5.4 dB and 5.5 dB were observed using the ADM, and of 6.4 dB and 7.0 dB using the MVDR, respectively. As expected, no improvement was observed for either algorithm with colocated speech and noise. In a 20-talker babble noise scenario, the observed benefit was 3.5 dB for the ADM and 7.5 dB for the MVDR. The binaural MVDR algorithm outperformed the bilaterally applied monaural ADM. These results encourage the use of beamformer algorithms such as the ADM and MVDR by bimodal CI users in everyday life scenarios.
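The MVDR beamformer mentioned above has a standard closed form, w = R⁻¹d / (dᴴ R⁻¹ d), where R is the noise covariance matrix and d the steering vector toward the target; this is the textbook formulation, not the specific MHA implementation used in the study. A minimal two-microphone sketch:

```python
import numpy as np


def mvdr_weights(R, d):
    """MVDR weights: minimize output noise power subject to the
    distortionless constraint w^H d = 1 (target passes unchanged).
    R: noise covariance matrix, d: steering vector toward the target."""
    Ri_d = np.linalg.solve(R, d)          # R^{-1} d without explicit inversion
    return Ri_d / (np.conj(d) @ Ri_d)     # normalize by d^H R^{-1} d


# Toy example: two microphones, frontal target (equal path to both mics)
d = np.array([1.0, 1.0], dtype=complex)
R = np.array([[1.0, 0.5],
              [0.5, 1.0]], dtype=complex)  # spatially correlated noise
w = mvdr_weights(R, d)
# The constraint w^H d = 1 holds, so the frontal target is undistorted
# while correlated noise from other directions is attenuated.
```

In a binaural hearing-device context, d and R are estimated per frequency band across the microphones of both devices, which is what allows the binaural MVDR to outperform the two independent monaural ADMs.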


Subject(s)
Cochlear Implantation , Cochlear Implants , Speech Perception , Algorithms , Auditory Threshold , Humans , Speech Intelligibility
20.
Audiol Res ; 11(1): 73-88, 2021 Feb 25.
Article in English | MEDLINE | ID: mdl-33668761

ABSTRACT

This study evaluated a simplified Italian matrix test (SiIMax) for speech-recognition measurements in noise with adults and children. Speech-recognition measurements with adults and children were conducted to examine the training effect and to establish reference speech-recognition thresholds at 50% (SRT50) and 80% (SRT80) correct responses. Test-list equivalency was evaluated only with adults. Twenty adults and 96 children, aged between 5 and 10 years, participated. Evaluation measurements with the adults confirmed the equivalence of the test lists, with a mean SRT50 of -8.0 dB and a standard deviation of 0.2 dB across the test lists. The test-specific slope (the average of the list-specific slopes) was 11.3%/dB, with a standard deviation of 0.6%/dB. For both adults and children, only one test list of 14 phrases needs to be presented to account for the training effect. For the adults, adaptive measurements of the SRT50 and SRT80 showed mean values of -7.0 ± 0.6 and -4.5 ± 1.1 dB, respectively. For children, a slight influence of age on the SRT was observed. The mean SRT50s were -5.6 ± 1.2, -5.8 ± 1.2 and -6.6 ± 1.3 dB for the children aged 5-6, 7-8 and 9-10 years, respectively. The corresponding SRT80s were -1.5 ± 2.7, -3.0 ± 1.7 and -3.7 ± 1.4 dB. High test-retest reliabilities of 1.0 and 1.1 dB for the SRT80 were obtained for the adults and children, respectively. This makes the test suitable for accurate and reliable speech-recognition measurements.
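The SRT50 and SRT80 values above can be related through the psychometric function of the test. Assuming a logistic shape (a common choice for matrix tests, though not stated in the abstract) with the reported adult SRT50 of -7.0 dB and slope of 11.3%/dB, the SNR for any target intelligibility follows directly:

```python
import math


def logistic_intelligibility(snr_db, srt50, slope):
    """Logistic psychometric function: proportion of words correct at a
    given SNR. slope is the slope at the 50% point (proportion per dB)."""
    k = 4.0 * slope  # logistic steepness; slope at midpoint is k/4
    return 1.0 / (1.0 + math.exp(-k * (snr_db - srt50)))


def srt(target, srt50, slope):
    """SNR at which the logistic function reaches the target proportion."""
    k = 4.0 * slope
    return srt50 + math.log(target / (1.0 - target)) / k


# Reported adult values: SRT50 = -7.0 dB, slope = 11.3 %/dB = 0.113 per dB
srt80 = srt(0.80, -7.0, 0.113)
```

With these numbers the predicted SRT80 is about -3.9 dB, in the vicinity of the measured -4.5 ± 1.1 dB; some discrepancy is expected, since the slope was estimated from the fixed-level list evaluation rather than from the adaptive tracks.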
