Results 1 - 20 of 26
1.
J Acoust Soc Am ; 155(4): 2849-2859, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38682914

ABSTRACT

The context-based Extended Speech Transmission Index (cESTI) [van Schoonhoven et al. (2022). J. Acoust. Soc. Am. 151, 1404-1415] was successfully applied to predict the intelligibility of monosyllabic words with different degrees of context in interrupted noise. The current study aimed to use the same model to predict sentence intelligibility in different types of non-stationary noise. The necessary context factors and transfer functions were based on values found in the existing literature. The cESTI performed similarly to or better than the original ESTI when the noise had speech-like characteristics. We hypothesize that the remaining inaccuracies in the model predictions can be attributed to the limits of the modelling approach with regard to mechanisms such as modulation masking and informational masking.


Subject(s)
Noise , Perceptual Masking , Speech Intelligibility , Speech Perception , Humans , Perceptual Masking/physiology , Female , Speech Perception/physiology , Male , Adult , Young Adult , Speech Acoustics , Models, Theoretical , Acoustic Stimulation
2.
Int J Audiol ; : 1-9, 2024 Mar 03.
Article in English | MEDLINE | ID: mdl-38432678

ABSTRACT

OBJECTIVE: By modelling head-shadow effect compensation and speech recognition outcomes, we aimed to study the benefits of a bone conduction device (BCD) during the headband trial for single-sided deafened (SSD) subjects. DESIGN: This study is based on a database of individual patient measurements, fitting parameters, and acoustic BCD properties measured retrospectively on a skull simulator or taken from the existing literature. The sensation levels of the bone-conduction and air-conduction sound paths were compared, modelling three spatial conditions with speech in quiet. We calculated the phoneme score using the Speech Intelligibility Index for the three conditions in quiet and seven conditions in noise. STUDY SAMPLE: Eighty-five SSD adults fitted with a BCD during a headband trial. RESULTS: According to our model, most subjects did not achieve full head-shadow effect compensation with the signal at the BCD side and in front. The modelled speech recognition in the quiet conditions did not improve with the BCD on the headband. In noise, we found a slight improvement in some specific conditions and minimal worsening in others. CONCLUSIONS: Based on an audibility model, this study challenges the fundamentals of a BCD headband trial in SSD subjects. Patients should be counselled regarding the potential outcome and alternative approaches.

3.
Diagn Progn Res ; 8(1): 1, 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38263270

ABSTRACT

BACKGROUND: Speech perception tests are essential to measure the functional use of hearing and to determine the effectiveness of hearing aids and implantable auditory devices. However, these language-based tests require active participation and are influenced by linguistic and neurocognitive skills, limiting their use in patients with insufficient language proficiency or cognitive impairment, and in children. We recently developed a non-attentive and objective speech perception prediction model: the Acoustic Change Complex (ACC) prediction model. The ACC prediction model uses electroencephalography to measure alterations in cortical auditory activity caused by frequency changes. The aim is to validate this model in a large-scale external validation study in adult patients with varying degrees of sensorineural hearing loss (SNHL), to confirm the high predictive value of the ACC model, and to assess its test-retest reliability. METHODS: A total of 80 participants, aged 18-65 years, will be enrolled in the study. The categories of severity of hearing loss will be used as a blocking factor to establish an equal distribution of patients with various degrees of SNHL. During the first visit, pure tone audiometry, speech-in-noise tests, a phoneme discrimination test, and the first ACC measurement will be performed. During the second visit (after 1-4 weeks), the same ACC measurement will be performed to assess the test-retest reliability. The acoustic change stimuli for ACC measurements consist of a reference tone with a base frequency of 1000, 2000, or 4000 Hz and a duration of 3000 ms, gliding to a 300-ms target tone with a frequency that is 12% higher than the base frequency. The primary outcome measures are (1) the level of agreement between the predicted speech reception threshold (SRT) and the behavioral SRT, and (2) the level of agreement between the SRT calculated from the first ACC measurement and the SRT from the second ACC measurement. Level of agreement will be assessed with Bland-Altman plots. DISCUSSION: Previous studies by our group have shown the high predictive value of the ACC model. The successful validation of this model as an effective and reliable biomarker of speech perception will directly benefit the general population, as it will increase the accuracy of hearing evaluations and improve access to adequate hearing rehabilitation.
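As an illustration of the stimulus paradigm described above, the sketch below synthesizes a reference tone that changes to a target tone 12% higher in frequency. The frequencies and durations follow the abstract; the sample rate, onset/offset ramps, and the simplification of the glide to a phase-continuous frequency step are assumptions for illustration, not the study protocol.

```python
# Minimal sketch of an acoustic-change stimulus: a 3000-ms reference tone
# followed by a 300-ms target tone 12% higher in frequency. Sample rate,
# ramps, and the abrupt (phase-continuous) transition are assumptions.
import numpy as np

def acc_stimulus(base_hz=1000.0, fs=44100, ref_ms=3000, target_ms=300, ramp_ms=10):
    """Reference tone changing to a target tone 12% above the base frequency."""
    target_hz = 1.12 * base_hz
    t_ref = np.arange(int(fs * ref_ms / 1000)) / fs
    t_tgt = np.arange(int(fs * target_ms / 1000)) / fs
    ref = np.sin(2 * np.pi * base_hz * t_ref)
    # Continue the target tone from the reference tone's end phase so the
    # frequency change is the only acoustic discontinuity.
    phase = 2 * np.pi * base_hz * (t_ref[-1] + 1 / fs)
    tgt = np.sin(2 * np.pi * target_hz * t_tgt + phase)
    stim = np.concatenate([ref, tgt])
    # Onset/offset ramps to avoid spectral splatter at the edges.
    n_ramp = int(fs * ramp_ms / 1000)
    ramp = np.sin(np.linspace(0, np.pi / 2, n_ramp)) ** 2
    stim[:n_ramp] *= ramp
    stim[-n_ramp:] *= ramp[::-1]
    return stim

stimulus = acc_stimulus(base_hz=2000.0)  # the 2-kHz variant from the protocol
```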

4.
J Acoust Soc Am ; 154(4): 2476-2488, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37862572

ABSTRACT

The context-based Extended Speech Transmission Index (cESTI) by van Schoonhoven et al. (2022) was successfully used to predict the intelligibility of meaningful, monosyllabic words in interrupted noise. However, it is not clear how the model behaves with different degrees of context. In the current paper, the intelligibility of meaningful and nonsense CVC words in stationary and interrupted noise was measured in fourteen normally hearing adults. Intelligibility of nonsense words in interrupted noise at -18 dB SNR was relatively poor, possibly because listeners did not profit from coarticulatory cues as they did in stationary noise. With 75% of the total variance explained, the cESTI model performed better than the original ESTI model (R² = 27%), especially due to better predictions at low interruption rates. However, predictions for meaningful word scores were relatively poor (R² = 38%), mainly due to remaining inaccuracies at interruption rates below 4 Hz and a large effect of forward masking. Adjusting the parameters of the forward masking function improved the accuracy of the model to a total explained variance of 83%, while the predictive power for previously published cESTI data remained similar.


Subject(s)
Hearing Loss, Sensorineural , Speech Perception , Adult , Humans , Perceptual Masking , Noise/adverse effects , Hearing , Speech , Speech Intelligibility
5.
Int J Audiol ; : 1-8, 2023 May 11.
Article in English | MEDLINE | ID: mdl-37167528

ABSTRACT

OBJECTIVE: In standards IEC 60645-1 and ANSI S3.6, the free-field equivalent earphone output level method is taken as the reference for speech audiometry. Three calibration procedures for this method were compared in this study. DESIGN: Speech audiometry was conducted with Dutch consonant-vowel-consonant words for the following conditions: (1) TDH39 earphones, (2) a loudspeaker, and (3) free field simulated with TDH39 earphones. The first calibration procedure was based on the empirically determined difference between the speech recognition threshold (SRT) with earphones and with a loudspeaker. The second procedure was based on the theoretical free-field correction, derived from the known speech spectrum and the free-field-to-coupler difference. The third calibration procedure corresponded to the results of the free-field-simulated speech material under earphones. STUDY SAMPLE: The sample included 20 normal-hearing subjects. RESULTS: The differences between the observed SRT in the free-field and earphone conditions and between the free-field and simulated free-field conditions were 7.1 dB and 0.6 dB, respectively. CONCLUSION: The three calibration procedures for the free-field equivalent output method yielded approximately the same results, and therefore all appear to be useful for TDH39 earphones.

6.
J Speech Lang Hear Res ; 66(4): 1274-1279, 2023 04 12.
Article in English | MEDLINE | ID: mdl-36881855

ABSTRACT

PURPOSE: The purpose of this study was to examine differences in the age of acquisition (AoA) and length of the sentences used in speech recognition (SR) tests for adults and children in Dutch, American English, and Canadian French. METHOD: The AoA and sentence length of the sentences of four SR tests for adults and children were determined. One-way analyses of variance were performed to assess differences between the tests. RESULTS: The AoA and sentence length differed significantly between the SR tests for adults. These differences were also found between the SR tests for children. CONCLUSIONS: The AoA and the sentence length differ across the SR tests in Dutch, American English, and Canadian French. The Dutch sentences have a higher AoA and are longer than the sentences in American English and Canadian French. The effect of linguistic complexity on sentence repetition accuracy should be investigated during the development and validation of a Dutch SR test for children.
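As a minimal, hedged sketch of the statistical comparison described above, a one-way ANOVA can be run on per-sentence AoA values grouped by test; the values below are hypothetical placeholders, not data from the study.

```python
# One-way ANOVA on age-of-acquisition (AoA) values grouped by SR test.
# The arrays are hypothetical per-sentence AoA values for illustration only.
from scipy import stats

aoa_dutch = [6.2, 5.8, 7.1, 6.5, 6.9]
aoa_english = [4.9, 5.1, 4.6, 5.3, 5.0]
aoa_french = [5.2, 4.8, 5.5, 5.0, 5.1]

f_stat, p_value = stats.f_oneway(aoa_dutch, aoa_english, aoa_french)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```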


Subject(s)
Speech Perception , Adult , Child , Humans , Canada , Language , Speech , Linguistics
7.
Int J Audiol ; 62(2): 182-191, 2023 02.
Article in English | MEDLINE | ID: mdl-35195500

ABSTRACT

OBJECTIVE: To monitor ototoxicity, air conduction (AC) extended high frequency (EHF) thresholds can be measured up to 16 kHz. However, conductive hearing loss might influence these results. This is unfortunate, because the EHF thresholds are important for following the impact of ototoxic medication during therapy. Therefore, a suitable bone conduction (BC) transducer and normative values for EHF BC measurements are needed. DESIGN: In this study, three BC transducers were used: the B71 (Radioear), the KH70 (Präcitronic), and the KLH96 (Westra). Hearing thresholds were measured from 0.125 to 16 kHz using AC transducers (Telephonics TDH39, Sennheiser HDA200), and BC thresholds from 0.25 to 8 kHz with the B71 and from 0.25 to 16 kHz with the KLH96 and KH70. STUDY SAMPLE: 60 ears of 30 normal-hearing subjects were measured. RESULTS: The KLH96 showed the highest output for the high frequencies, and its distortion measurements were similar to those of the KH70. The results show that EHF measurements are possible with the KLH96 and KH70 bone conductors. CONCLUSION: EHF BC measurements are reliable when using the KLH96 and KH70 bone conductors. The force sensitivity of the artificial mastoid used should be determined at extended high frequencies for proper EHF BC calibration.


Subject(s)
Bone Conduction , Ototoxicity , Humans , Audiometry/methods , Auditory Threshold , Calibration , Acoustic Stimulation/methods , Audiometry, Pure-Tone , Transducers
8.
J Acoust Soc Am ; 151(2): 1404, 2022 02.
Article in English | MEDLINE | ID: mdl-35232064

ABSTRACT

The Extended Speech Transmission Index (ESTI) by van Schoonhoven et al. [(2019). J. Acoust. Soc. Am. 145, 1178-1194] was used successfully to predict the intelligibility of sentences in fluctuating background noise. However, prediction accuracy was poor when the modulation frequency of the masker was low (<8 Hz). In the current paper, the ESTI was calculated per phoneme to estimate phoneme intelligibility. In the next step, the ESTI model was combined with one of two context models [Boothroyd and Nittrouer (1988). J. Acoust. Soc. Am. 84, 101-114; Bronkhorst et al. (1993). J. Acoust. Soc. Am. 93, 499-509] in order to improve model predictions. This approach was validated using interrupted speech data, after which it was used to predict the speech intelligibility of words in interrupted noise. Model predictions improved with this new method, especially for maskers with interruption rates below 5 Hz. Calculating the ESTI at phoneme level combined with a context model is therefore a viable option to improve prediction accuracy.
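As a rough, hedged sketch of how phoneme-level intelligibility can be combined with a Boothroyd-and-Nittrouer-style context model, the snippet below folds a phoneme recognition probability into a word score with a j-factor and applies lexical context with a k-factor. The combination rule and the parameter values are illustrative assumptions, not the cESTI implementation.

```python
# Boothroyd-Nittrouer-style combination of phoneme and word scores:
#   p_word (no context) = p_phoneme ** j
#   p_word (with context) = 1 - (1 - p_word) ** k
# The j and k values below are illustrative only.
def predicted_word_score(p_phoneme, j=2.5, k=1.3):
    """Word score from the mean phoneme recognition probability p_phoneme."""
    p_word_no_context = p_phoneme ** j            # j-factor: effective independent units
    return 1.0 - (1.0 - p_word_no_context) ** k   # k-factor: benefit of lexical context

print(predicted_word_score(0.8))
```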


Subject(s)
Speech Intelligibility , Speech Perception , Cognition , Noise/adverse effects , Perceptual Masking
9.
BMJ Open ; 11(5): e043288, 2021 05 18.
Article in English | MEDLINE | ID: mdl-34006544

ABSTRACT

INTRODUCTION: Tinnitus is the perception of sound without an external stimulus, often experienced as a ringing or buzzing sound. Subjective tinnitus is assumed to originate from changes in neural activity caused by reduced or absent auditory input, for instance due to hearing loss. Since auditory deprivation is thought to be one of the causes of tinnitus, increasing the auditory input by cochlear implantation might be a possible treatment. In studies assessing cochlear implantation for patients with hearing loss, tinnitus relief was seen as a secondary outcome. Therefore, we will assess the effect of cochlear implantation in patients with primarily tinnitus complaints. METHOD AND ANALYSIS: In this randomised controlled trial, starting in January 2021 at the ENT department of the UMC Utrecht (the Netherlands), patients with a primary complaint of tinnitus will be included. Fifty patients (Tinnitus Functional Index (TFI) >32, Beck Depression Inventory <19, pure tone average at 0.5, 1, 2 and 4 kHz: bilateral thresholds between 50 and 75 dB) will be randomised to cochlear implantation or no intervention. The primary outcome of the study is tinnitus burden as measured by the TFI. Further outcomes of interest are tinnitus severity, hearing performance (tinnitus pitch and loudness, speech perception), quality of life, depression, and patient-related changes. Outcomes will be evaluated prior to implantation and at 3 and 6 months after surgery. The control group will receive questionnaires at 3 and 6 months after randomisation. We expect a significant difference in tinnitus burden between the cochlear implant recipients and the control group. ETHICS AND DISSEMINATION: This research protocol was approved by the Institutional Review Board of the University Medical Center (UMC) Utrecht (NL70319.041.19, V5.0, January 2021). The trial results will be made accessible to the public in a peer-reviewed journal. TRIAL REGISTRATION NUMBER: NL8693; Pre-results.


Subject(s)
Cochlear Implantation , Tinnitus , Adult , Hearing Loss, Bilateral , Humans , Netherlands , Quality of Life , Randomized Controlled Trials as Topic , Tinnitus/surgery , Treatment Outcome
10.
Ear Hear ; 41(6): 1511-1517, 2020.
Article in English | MEDLINE | ID: mdl-33136627

ABSTRACT

OBJECTIVES: Speech recognition (SR) tests have been developed for children without considering the linguistic complexity of the sentences used. However, linguistic complexity is hypothesized to influence correct sentence repetition. The aim of this study is to identify lexical and grammatical parameters influencing the verbal repetition accuracy of sentences derived from a Dutch SR test when performed by 6-year-old typically developing children. DESIGN: For this observational, cross-sectional study, 40 typically developing children aged 6 years were recruited at four primary schools in the Netherlands. All children performed a sentence repetition task derived from an SR test for adults. Sentence complexity was described beforehand with one lexical parameter, age of acquisition, and four grammatical parameters, namely sentence length, prepositions, sentence structure, and verb inflection. A multiple logistic regression analysis was performed. RESULTS: Sentences with a higher age of acquisition (odds ratio [OR] = 1.59) or greater sentence length (OR = 1.28) had a higher risk of repetition inaccuracy. Sentences including a spatial (OR = 1.25) or other preposition (OR = 1.25) were at increased risk of incorrect repetition, as were complex sentences (OR = 1.69) and sentences in the present perfect (OR = 1.44) or future tense (OR = 2.32). CONCLUSIONS: The variation in verbal repetition accuracy in 6-year-old children is significantly influenced by both lexical and grammatical parameters. Linguistic complexity is an important factor to take into account when assessing speech intelligibility in children.
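As a hedged illustration of the analysis described above, the sketch below fits a multiple logistic regression of repetition inaccuracy on a few sentence parameters and converts the coefficients to odds ratios. The variable names and simulated data are assumptions for demonstration, not the study's dataset.

```python
# Multiple logistic regression of repetition inaccuracy on sentence parameters,
# reported as odds ratios. The DataFrame below is simulated placeholder data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "incorrect": rng.integers(0, 2, n),          # 1 = sentence repeated incorrectly
    "aoa": rng.normal(6.0, 1.0, n),              # mean age of acquisition of the words
    "length": rng.integers(4, 10, n),            # sentence length in words
    "complex_structure": rng.integers(0, 2, n),  # 1 = complex sentence structure
})

model = smf.logit("incorrect ~ aoa + length + complex_structure", data=df).fit(disp=0)
odds_ratios = np.exp(model.params)  # e.g. an OR of 1.59 per unit AoA, as in the study
print(odds_ratios)
```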


Subject(s)
Speech Perception , Adult , Child , Cross-Sectional Studies , Humans , Linguistics , Netherlands , Speech Intelligibility
11.
J Acoust Soc Am ; 145(3): 1178, 2019 03.
Article in English | MEDLINE | ID: mdl-31067918

ABSTRACT

The Speech Transmission Index (STI) is used to predict speech intelligibility in noise and reverberant environments. However, measurements and predictions in fluctuating noises lead to inaccuracies. In the current paper, the Extended Speech Transmission Index (ESTI) is presented in order to deal with these shortcomings. Speech intelligibility in normally hearing subjects was measured using stationary and fluctuating maskers. These results served to optimize model parameters. Data from the literature were then used to verify the ESTI-model. Model outcomes were accurate for stationary maskers, maskers with artificial fluctuations, and maskers with real life non-speech modulations. Maskers with speech-like characteristics introduced systematic errors in the model outcomes, probably due to a combination of modulation masking, context effects, and informational masking.

12.
Otol Neurotol ; 39(6): 707-714, 2018 07.
Article in English | MEDLINE | ID: mdl-29889780

ABSTRACT

HYPOTHESIS: A cochlear implant (CI) restores hearing in patients with profound sensorineural hearing loss by electrical stimulation of the auditory nerve. It is unknown how this electrical stimulation sounds. BACKGROUND: Patients with single-sided deafness (SSD) and a CI form a unique population, since they can compare the sound of their CI with simulations of the CI sound played to their nonimplanted ear. METHODS: We tested six stimuli (speech and music) in 10 SSD patients implanted with a CI (Cochlear Ltd). Patients listened to the original stimulus with their CI ear while their nonimplanted ear was masked. Subsequently, patients listened to two CI simulations, created with a vocoder, with their nonimplanted ear alone. They selected the CI simulation most similar to the sound as perceived by their CI ear, and they graded the similarity on a 1-to-10 scale. We tested three vocoders: two known from the literature and one supplied by Cochlear Ltd. Two carriers (noise, sine) were tested for each vocoder. RESULTS: The noise carrier and the vocoders from the literature were most often selected as the best match to the sound as perceived by the CI ear. However, variability in the selections was substantial, both between patients and within patients between sound samples. The average grade for similarity was 6.8 for speech stimuli and 6.3 for music stimuli. CONCLUSION: We obtained a fairly good impression of what a CI can sound like for SSD patients. This may help to better inform and educate patients and family members about the sound of a CI.
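As a hedged sketch of the kind of CI simulation mentioned above, the code below implements a generic noise-carrier channel vocoder: band-pass analysis, envelope extraction, and envelope-modulated band-limited noise. The channel count, band edges, and envelope cutoff are illustrative assumptions and do not reproduce any of the three vocoders tested in the study.

```python
# Simplified noise-carrier vocoder: split the signal into analysis bands,
# extract each band's envelope, use it to modulate band-limited noise, and sum.
# Band edges, envelope cutoff, and channel count are assumptions for illustration.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocoder(x, fs, band_edges_hz=(100, 400, 1000, 2200, 4500, 8000), env_cutoff_hz=50):
    # fs must be more than twice the highest band edge.
    rng = np.random.default_rng(0)
    out = np.zeros(len(x), dtype=float)
    env_sos = butter(2, env_cutoff_hz, btype="low", fs=fs, output="sos")
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfilt(band_sos, x)
        envelope = sosfilt(env_sos, np.abs(hilbert(band)))          # smoothed Hilbert envelope
        carrier = sosfilt(band_sos, rng.standard_normal(len(x)))    # band-limited noise carrier
        out += np.clip(envelope, 0, None) * carrier
    return out / (np.max(np.abs(out)) + 1e-12)                      # normalise to avoid clipping
```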


Subject(s)
Auditory Perception , Cochlear Implants , Hearing Loss, Sensorineural/surgery , Adult , Cochlear Implantation , Female , Hearing Loss, Unilateral/surgery , Humans , Male , Middle Aged
13.
Ear Hear ; 39(3): 436-448, 2018.
Article in English | MEDLINE | ID: mdl-29697497

ABSTRACT

OBJECTIVES: The objectives of this study were to (1) identify essential hearing-critical job tasks for public safety and law enforcement personnel; (2) determine the locations and real-world noise environments where these tasks are performed; (3) characterize each noise environment in terms of its impact on the likelihood of effective speech communication, considering the effects of different levels of vocal effort, communication distances, and repetition; and (4) use this characterization to define an objective normative reference for evaluating the ability of individuals to perform essential hearing-critical job tasks in noisy real-world environments. DESIGN: Data from five occupational hearing studies performed over a 17-year period for various public safety agencies were analyzed. In each study, job task analyses by job content experts identified essential hearing-critical tasks and the real-world noise environments where these tasks are performed. These environments were visited, and calibrated recordings of each noise environment were made. The extended speech intelligibility index (ESII) was calculated for each 4-sec interval in each recording. These data, together with the estimated ESII value required for effective speech communication by individuals with normal hearing, allowed the likelihood of effective speech communication in each noise environment to be determined for different levels of vocal effort and communication distances. These likelihoods provide an objective norm-referenced and standardized means of characterizing the predicted impact of real-world noise on the ability to perform essential hearing-critical tasks. RESULTS: A total of 16 noise environments for law enforcement personnel and eight noise environments for corrections personnel were analyzed. Effective speech communication was essential to hearing-critical tasks performed in these environments. Average noise levels ranged from approximately 70 to 87 dBA in law enforcement environments and 64 to 80 dBA in corrections environments. The likelihood of effective speech communication at communication distances of 0.5 and 1 m was often less than 0.50 for normal vocal effort. Likelihood values often increased to 0.80 or more when raised or loud vocal effort was used. Effective speech communication at and beyond 5 m was often unlikely, regardless of vocal effort. CONCLUSIONS: ESII modeling of nonstationary real-world noise environments may prove to be an objective means of characterizing their impact on the likelihood of effective speech communication. The normative reference provided by these measures predicts the extent to which hearing impairments that increase the ESII value required for effective speech communication also decrease the likelihood of effective speech communication. These predictions may provide an objective evidence-based link between the essential hearing-critical job task requirements of public safety and law enforcement personnel and ESII-based hearing assessment of individuals who seek to perform these jobs.
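The per-interval analysis described above can be outlined schematically as follows: a recording is split into 4-second windows, an index is computed per window, and the likelihood of effective communication is the fraction of windows meeting a criterion. In the sketch below the index function is left as a placeholder because the actual ESII calculation is not reproduced here, and the criterion value is an assumption.

```python
# Schematic per-interval likelihood analysis of a noise recording.
import numpy as np

def likelihood_effective_communication(noise, fs, index_fn, criterion=0.45, win_s=4.0):
    """Fraction of 4-s intervals whose intelligibility index meets the criterion.

    index_fn(segment, fs) must return an index in [0, 1]; it stands in for the
    ESII computation, which is not implemented here.
    """
    win = int(win_s * fs)
    n_windows = len(noise) // win
    indices = [index_fn(noise[i * win:(i + 1) * win], fs) for i in range(n_windows)]
    return float(np.mean(np.asarray(indices) >= criterion))
```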


Subject(s)
Hearing Tests/methods , Noise, Occupational , Speech Intelligibility , Evidence-Based Practice , Hearing , Humans , Models, Theoretical , Perceptual Masking , Police , Prisons , Speech Reception Threshold Test
14.
Int J Audiol ; 57(5): 323-334, 2018 05.
Article in English | MEDLINE | ID: mdl-29668374

ABSTRACT

OBJECTIVE: To validate the use of the Extended Speech Intelligibility Index (ESII) for the prediction of speech intelligibility in non-stationary real-world noise environments, and to define a means of using these predictions for objective occupational hearing screening for hearing-critical public safety and law enforcement jobs. DESIGN: Analyses of predicted and measured speech intelligibility in recordings of real-world noise environments were performed in two studies using speech recognition thresholds (SRTs) and intelligibility measures. ESII analyses of the recordings were used to predict intelligibility. Noise recordings were made in prison environments and at US Army facilities for training ground and airborne forces. Speech materials included full-bandwidth sentences and bandpass-filtered sentences that simulated radio transmissions. STUDY SAMPLE: A total of 22 adults with normal hearing (NH) and 15 with mild-to-moderate hearing impairment (HI) participated in the two studies. RESULTS: Average intelligibility predictions for individual NH and HI subjects were accurate in both studies (r² ≥ 0.94). Pooled predictions were slightly less accurate (0.78 ≤ r² ≤ 0.92). CONCLUSIONS: An individual's SRT and audiogram can accurately predict the likelihood of effective speech communication in noise environments with known ESII characteristics, where essential hearing-critical tasks are performed. These predictions provide an objective means of occupational hearing screening.


Subject(s)
Hearing Loss/diagnosis , Speech Intelligibility , Speech Reception Threshold Test/standards , Adult , Case-Control Studies , Female , Hearing , Humans , Male , Middle Aged , Noise , Perceptual Masking , Predictive Value of Tests , Reproducibility of Results , Speech Reception Threshold Test/methods
15.
J Acoust Soc Am ; 141(2): 818, 2017 02.
Article in English | MEDLINE | ID: mdl-28253636

ABSTRACT

In the field of room acoustics, the modulation transfer function (MTF) can be used to predict speech intelligibility in stationary noise and reverberation, and it can be expressed as a single value: the Speech Transmission Index (STI). One drawback of the classical STI measurement method is that it is not validated for fluctuating background noise. In contrast to the classical measurement method, the MTF due to reverberation can also be calculated from an impulse response measurement. This indirect method presents an opportunity for STI measurements in fluctuating noise, and a first prerequisite is a reliable impulse response measurement. The conditions under which the impulse response can be measured with sufficient precision were investigated in the current study. Impulse response measurements were conducted using a sweep stimulus. Two experiments are discussed, with variable absorption, different levels of stationary and fluctuating background noise, and different sweep levels. Additionally, simulations with different types of fluctuating noise were conducted in an attempt to extrapolate the experimental findings to other acoustical conditions. The experiments and simulations showed that a minimum impulse-to-noise ratio of +25 dB was needed in fluctuating noise.
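A hedged sketch of the impulse-to-noise ratio check implied above: the energy of the measured impulse response around its peak is compared with the noise floor estimated from the late tail. The window lengths and the assumption that the tail contains only noise are illustrative choices, not the paper's procedure.

```python
# Rough impulse-to-noise ratio (INR) estimate from a measured impulse response.
import numpy as np

def impulse_to_noise_ratio_db(ir, fs, peak_win_ms=5.0, tail_s=0.5):
    ir = np.asarray(ir, dtype=float)
    peak = np.argmax(np.abs(ir))
    n_peak = int(peak_win_ms * 1e-3 * fs)
    signal_power = np.mean(ir[peak:peak + n_peak] ** 2)   # energy around the direct sound
    noise_power = np.mean(ir[-int(tail_s * fs):] ** 2)    # tail assumed to be noise only
    return 10 * np.log10(signal_power / noise_power)

# A measurement would be considered usable for indirect STI estimation in
# fluctuating noise if, for example, impulse_to_noise_ratio_db(ir, fs) >= 25.
```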

16.
Ear Hear ; 38(2): 194-204, 2017.
Article in English | MEDLINE | ID: mdl-27749521

ABSTRACT

OBJECTIVES: The effects of nonlinear signal processing on speech intelligibility in noise are difficult to evaluate. Often, the effects are examined by comparing speech intelligibility scores with and without processing measured at fixed signal-to-noise ratios (SNRs), or by comparing the adaptively measured speech reception thresholds corresponding to 50% intelligibility (SRT50) with and without processing. These outcome measures might not be optimal. Measuring at fixed SNRs can be affected by ceiling or floor effects, because the range of relevant SNRs is not known in advance. The SRT50 is less time-consuming and has a fixed performance level (i.e., 50% correct), but it could give a limited view, because we hypothesize that the effect of most nonlinear signal processing algorithms at the SRT50 cannot be generalized to other points of the psychometric function. DESIGN: In this article, we tested the value of estimating the entire psychometric function. We studied the effect of wide dynamic range compression (WDRC) on speech intelligibility in stationary and interrupted speech-shaped noise in normal-hearing subjects, using a fast local linear fitting approach and two adaptive procedures. RESULTS: The measured performance differences between conditions with and without WDRC for the psychometric functions in stationary noise and interrupted speech-shaped noise show that the effects of WDRC on speech intelligibility are SNR dependent. CONCLUSIONS: We conclude that favorable and unfavorable effects of WDRC on speech intelligibility can be missed if the results are presented in terms of SRT50 values only.
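A minimal, hedged sketch of estimating a full psychometric function rather than only the SRT50: a logistic function of SNR is fitted to proportion-correct scores, after which performance with and without processing can be compared at any SNR. The data points and the logistic parameterization below are illustrative assumptions, not the paper's fitting method.

```python
# Fit a logistic psychometric function to proportion-correct scores versus SNR.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr_db, srt50, slope):
    """Logistic psychometric function returning proportion correct (0..1)."""
    return 1.0 / (1.0 + np.exp(-slope * (snr_db - srt50)))

snr = np.array([-12.0, -9.0, -6.0, -3.0, 0.0])
p_correct = np.array([0.08, 0.25, 0.55, 0.82, 0.95])   # hypothetical scores

(srt50, slope), _ = curve_fit(psychometric, snr, p_correct, p0=[-6.0, 0.5])
print(f"SRT50 = {srt50:.1f} dB SNR, slope = {slope:.2f} /dB")
```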


Subject(s)
Noise , Speech Perception , Adult , Female , Healthy Volunteers , Hearing Aids , Hearing Loss/rehabilitation , Humans , Male , Psychometrics , Signal Processing, Computer-Assisted , Signal-To-Noise Ratio , Speech Intelligibility , Speech Reception Threshold Test , Young Adult
17.
J Am Acad Audiol ; 26(6): 563-71, 2015 Jun.
Article in English | MEDLINE | ID: mdl-26134723

ABSTRACT

BACKGROUND: A temporal resolution test in addition to the pure-tone audiogram may be of great clinical interest because of its relevance to speech perception and its expected relevance to hearing aid fitting. Larsby and Arlinger developed an appropriate clinical test, but this test uses a Békésy-tracking procedure for estimating masked thresholds in stationary and interrupted noise to assess release of masking (RoM) for temporal resolution. Generally, the Hughson-Westlake up-down procedure is used in the clinic to measure pure-tone thresholds in quiet. A uniform approach would facilitate clinical application and might be appropriate for RoM measurements as well. Because there is no gold standard for measuring RoM in the clinic, in the present study we examine the Hughson-Westlake up-down procedure for measuring RoM and compare the results with the Békésy-tracking procedure. PURPOSE: The purpose of the current study was to examine the differences between a Békésy-tracking procedure and the Hughson-Westlake up-down procedure for estimating masked thresholds in stationary and interrupted noise to assess RoM. RESEARCH DESIGN: RoM is assessed in eight normal-hearing (NH) and ten hearing-impaired (HI) listeners with both methods. Results from both methods are compared with each other and with thresholds predicted by a model. DATA ANALYSIS: Wilcoxon signed-rank tests and paired t tests. RESULTS: Some differences between the two methods were found. We used a model to quantify the results of the two measurement procedures. The results of the Hughson-Westlake procedure were in clearly better agreement with the model than the results of the Békésy-tracking procedure. Furthermore, the Békésy-tracking procedure showed more spread in the results of the NH listeners than the Hughson-Westlake procedure. CONCLUSIONS: The Hughson-Westlake procedure seems to be a suitable alternative for measuring RoM for temporal resolution in clinical audiological practice.
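A simplified, hedged sketch of a Hughson-Westlake-style up-down search: the level drops 10 dB after each response and rises 5 dB after each miss, and the threshold is taken as the lowest level heard on at least two ascending presentations. The stopping rule and the listener model below are illustrative simplifications of clinical practice, not the study's implementation.

```python
def hughson_westlake(responds, start_db=40, step_down=10, step_up=5, max_trials=30):
    """Simplified Hughson-Westlake up-down threshold search.

    responds(level) returns True if the listener reports hearing the tone.
    Threshold: lowest level heard on at least two ascending presentations.
    """
    level = start_db
    ascending_hits = {}            # level -> responses obtained on ascending runs
    ascending = False
    for _ in range(max_trials):
        heard = responds(level)
        if ascending and heard:
            ascending_hits[level] = ascending_hits.get(level, 0) + 1
            if ascending_hits[level] >= 2:
                return level
        if heard:
            level -= step_down     # lower the level after a response
            ascending = False
        else:
            level += step_up       # raise the level after a miss
            ascending = True
    return None                    # no stable threshold found within max_trials

# Deterministic listener model with a 22-dB threshold (illustration only):
print(hughson_westlake(lambda level: level >= 22))   # -> 25
```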


Subject(s)
Hearing Loss/physiopathology , Perceptual Masking/physiology , Speech Perception/physiology , Adult , Aged , Audiometry , Auditory Threshold/physiology , Case-Control Studies , Hearing Aids , Hearing Loss/diagnosis , Hearing Loss/therapy , Humans , Middle Aged , Netherlands , Noise , Young Adult
18.
J Acoust Soc Am ; 135(3): 1491-505, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24606285

ABSTRACT

The Speech Reception Threshold (SRT, in dB SNR) is often used as an outcome measure to quantify the acuity for speech perception in noise. The majority of studies observe speech intelligibility in noise at a fixed noise level. However, the observed SNR might be an ambiguous outcome measure, because in the case of non-stationary noise it depends on the sensation level (SL) of the noise. Due to their higher thresholds, hearing-impaired listeners are usually tested at a different SL than normal-hearing listeners. Therefore, the observed SNR itself might not be a robust outcome measure to characterize the differences in performance between normal-hearing and hearing-impaired listeners, within and between different studies. In this paper, SRTs are measured at a fixed absolute noise level (80 dBA) and at a fixed SL (25 dB). The results are discussed and described with an extension to the SRT model of Plomp [(1986). "A signal-to-noise ratio model for the speech-reception threshold of the hearing-impaired," J. Speech Hear. Res. 29, 146-154] and the Extended Speech Intelligibility Index. In addition, two alternative outcome measures are proposed which are, in contrast to the SNR, independent of the noise level. These outcome measures are able to characterize SRT performance in fluctuating noise in a more uniform and unambiguous way.


Subject(s)
Noise/adverse effects , Perceptual Masking , Persons With Hearing Impairments/psychology , Speech Perception , Speech Reception Threshold Test , Acoustic Stimulation , Adult , Aged , Auditory Threshold , Case-Control Studies , Female , Humans , Male , Middle Aged , Predictive Value of Tests , Speech Intelligibility , Young Adult
19.
Int J Audiol ; 49(11): 856-65, 2010 Nov.
Article in English | MEDLINE | ID: mdl-20936997

ABSTRACT

The extended speech intelligibility index (ESII) model (Rhebergen et al., 2006) is an upgrade of the conventional speech intelligibility index model. For normal-hearing listeners, the ESII model is able to predict the speech reception threshold (SRT) in both stationary and non-stationary noise maskers. In this paper, a first attempt is made to evaluate the ESII with SRT data of hearing-impaired listeners obtained by de Laat and Plomp (1983) and Versfeld and Dreschler (2002) in stationary, 10-Hz interrupted, and non-stationary speech-shaped noise measured at different noise levels. The results show that the ESII model is able to describe the SRT in different non-stationary noises for normal-hearing listeners at different noise levels reasonably well. However, the ESII model is less successful at predicting the SRT in non-stationary noise for hearing-impaired subjects. As long as the present audibility models cannot accurately describe the auditory processing of a listener with cochlear hearing loss, it is difficult to distinguish between raised SRTs due to supra-threshold deficits and those due to factors such as cognition, age, and language skills.


Subject(s)
Hearing Loss/diagnosis , Models, Biological , Noise , Speech Reception Threshold Test , Adolescent , Adult , Aged , Humans , Middle Aged , Young Adult
20.
J Acoust Soc Am ; 127(3): 1570-83, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20329857

ABSTRACT

The speech intelligibility index (SII) is an often-used calculation method for estimating the proportion of audible speech in noise. For speech reception thresholds (SRTs) measured in normally hearing listeners using various types of stationary noise, this model predicts a fairly constant speech proportion of about 0.33 as necessary for Dutch sentence intelligibility. However, when the SII model is applied to SRTs in quiet, the estimated speech proportions are often higher, and show larger inter-subject variability, than those found for speech in noise near normal speech levels [65 dB sound pressure level (SPL)]. The present model attempts to alleviate this problem by including cochlear compression. It is based on the loudness model of Moore and Glasberg [(2004). Hear. Res. 188, 70-88] for normally hearing and hearing-impaired listeners. It estimates internal excitation levels for speech and noise and then calculates the proportion of speech above noise and threshold, using spectral weighting similar to that used in the SII. The present model and the standard SII were used to predict SII values in quiet and in stationary noise for normally hearing and hearing-impaired listeners. The present model predicted SIIs for three listener types (normal hearing, noise-induced hearing loss, and age-induced hearing loss) with markedly less variability than the standard SII.
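For orientation, the band-weighting idea the abstract refers to can be sketched as below: per-band audibility is the band SNR mapped linearly from -15 to +15 dB onto 0-1 and weighted by band importance. This is a bare-bones reading of the standard SII; the band values are hypothetical, and the full ANSI S3.5 procedure (and the compression extension described above) includes terms not shown here.

```python
# Bare-bones SII-style band weighting: importance-weighted band audibility.
import numpy as np

def simple_sii(band_snr_db, band_importance):
    # Audibility per band: SNR mapped from [-15, +15] dB onto [0, 1].
    audibility = np.clip((np.asarray(band_snr_db) + 15.0) / 30.0, 0.0, 1.0)
    return float(np.sum(np.asarray(band_importance) * audibility))

band_snr = [-20, -5, 0, 5, 10, 20]                 # hypothetical band SNRs (dB)
importance = [0.10, 0.15, 0.20, 0.25, 0.20, 0.10]  # hypothetical weights, sum to 1
print(simple_sii(band_snr, importance))            # compare with the ~0.33 needed for Dutch sentences
```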


Subject(s)
Hearing Loss, Noise-Induced/physiopathology , Hearing/physiology , Models, Biological , Presbycusis/physiopathology , Speech Intelligibility/physiology , Speech Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Audiometry , Auditory Threshold/physiology , Cochlea/physiology , Humans , Loudness Perception/physiology , Middle Aged , Noise , Speech Acoustics , Telephone , Young Adult