Results 1 - 20 of 5,515
1.
Trends Hear ; 28: 23312165241260029, 2024.
Article in English | MEDLINE | ID: mdl-38831646

ABSTRACT

The extent to which active noise cancelation (ANC), when combined with hearing assistance, can improve speech intelligibility in noise is not well understood. One possible source of benefit is ANC's ability to reduce the sound level of the direct (i.e., vent-transmitted) path. This reduction lowers the "floor" imposed by the direct path, thereby allowing any increases to the signal-to-noise ratio (SNR) created in the amplified path to be "realized" at the eardrum. Here we used a modeling approach to estimate this benefit. We compared pairs of simulated hearing aids that differ only in terms of their ability to provide ANC and computed intelligibility metrics on their outputs. The difference in metric scores between simulated devices is termed the "ANC Benefit." These simulations show that the ANC Benefit increases as (1) the environmental sound level increases, (2) the ability of the hearing aid to improve SNR increases, (3) the strength of the ANC increases, and (4) the hearing loss severity decreases. The predicted size of the ANC Benefit can be substantial. For a moderate hearing loss, the model predicts improvement in intelligibility metrics of >30% when environments are moderately loud (>70 dB SPL) and devices are moderately capable of increasing SNR (by >4 dB). It appears that ANC can be a critical ingredient in hearing devices that attempt to improve SNR in loud environments. ANC will become increasingly important as advanced SNR-improving algorithms (e.g., artificial intelligence speech enhancement) are included in hearing devices.
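The direct-path "floor" described above can be illustrated with a toy power-summation model. This is a sketch, not the authors' simulation: the incoherent addition of path powers and all levels (70 dB environment, 0 dB gain, 8 dB SNR improvement, 20 dB ANC attenuation) are illustrative assumptions.

```python
import math

def db_to_pow(db):
    """Convert a level in dB to linear power."""
    return 10 ** (db / 10)

def eardrum_snr(speech_db, noise_db, gain_db, snr_gain_db, anc_db):
    """SNR (dB) at the eardrum when a direct (vent-transmitted) path and an
    amplified path mix; path powers are assumed to add incoherently."""
    d_speech = db_to_pow(speech_db - anc_db)   # ANC attenuates the direct path
    d_noise = db_to_pow(noise_db - anc_db)
    a_speech = db_to_pow(speech_db + gain_db)  # amplified path
    a_noise = db_to_pow(noise_db + gain_db - snr_gain_db)  # device improves SNR here
    return 10 * math.log10((d_speech + a_speech) / (d_noise + a_noise))

# "ANC Benefit": the same device with and without 20 dB of ANC, in a 70 dB SPL
# environment at 0 dB input SNR, with an 8 dB SNR-improving algorithm.
benefit = eardrum_snr(70, 70, 0, 8, 20) - eardrum_snr(70, 70, 0, 8, 0)
```

Without ANC the unattenuated direct path dominates the noise at the eardrum, so most of the amplified path's 8 dB SNR improvement is lost; attenuating the direct path lets more of it through.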


Subject(s)
Hearing Aids , Noise , Perceptual Masking , Signal-To-Noise Ratio , Speech Intelligibility , Speech Perception , Humans , Noise/adverse effects , Computer Simulation , Acoustic Stimulation , Correction of Hearing Impairment/instrumentation , Persons With Hearing Impairments/rehabilitation , Persons With Hearing Impairments/psychology , Hearing Loss/diagnosis , Hearing Loss/rehabilitation , Hearing Loss/physiopathology , Equipment Design , Signal Processing, Computer-Assisted
2.
Sci Rep ; 14(1): 13241, 2024 Jun 09.
Article in English | MEDLINE | ID: mdl-38853168

ABSTRACT

Cochlear implants (CIs) do not offer the same level of effectiveness in noisy environments as in quiet settings. Current single-microphone noise reduction algorithms in hearing aids and CIs only remove predictable, stationary noise, and are ineffective against realistic, non-stationary noise such as multi-talker interference. Recent developments in deep neural network (DNN) algorithms have achieved noteworthy performance in speech enhancement and separation, especially in removing speech noise. However, more work is needed to investigate the potential of DNN algorithms in removing speech noise when tested with listeners fitted with CIs. Here, we implemented two DNN algorithms that are well suited for applications in speech audio processing: (1) recurrent neural network (RNN) and (2) SepFormer. The algorithms were trained with a customized dataset (∼30 h) and then tested with thirteen CI listeners. Both the RNN and SepFormer algorithms significantly improved CI listeners' speech intelligibility in noise without compromising the perceived quality of speech overall. These algorithms not only increased intelligibility in stationary non-speech noise, but also introduced a substantial improvement in non-stationary noise, where conventional signal processing strategies fall short, offering little benefit. These results show the promise of using DNN algorithms as a solution for listening challenges in multi-talker noise interference.
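DNN enhancers of this kind are commonly trained to predict a time-frequency mask; the ideal ratio mask (IRM) is one standard training target. The sketch below applies an IRM to made-up magnitude frames. It illustrates the general masking idea only, not the paper's RNN or SepFormer implementation.

```python
# Toy time-frequency frames: parallel lists of speech and noise magnitudes
# (two frames, three frequency bins each; values are invented).
speech = [[1.0, 0.2, 0.8], [0.5, 0.1, 0.9]]
noise = [[0.3, 0.6, 0.2], [0.4, 0.8, 0.1]]

def ideal_ratio_mask(s, n, eps=1e-8):
    """Per-bin gain s^2 / (s^2 + n^2); a DNN is trained to predict this
    from the noisy mixture alone."""
    return [[si**2 / (si**2 + ni**2 + eps) for si, ni in zip(sf, nf)]
            for sf, nf in zip(s, n)]

def apply_mask(mixture, mask):
    """Suppress noise-dominated bins by scaling each bin by its mask gain."""
    return [[m * g for m, g in zip(mf, gf)] for mf, gf in zip(mixture, mask)]

mixture = [[si + ni for si, ni in zip(sf, nf)] for sf, nf in zip(speech, noise)]
irm = ideal_ratio_mask(speech, noise)
enhanced = apply_mask(mixture, irm)
```

Bins where noise dominates (e.g., the second bin of each frame) get gains near zero, which is how mask-based enhancement attenuates interfering talkers.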


Subject(s)
Algorithms , Cochlear Implants , Deep Learning , Noise , Speech Intelligibility , Humans , Female , Middle Aged , Male , Speech Perception/physiology , Aged , Adult , Neural Networks, Computer
3.
J Acoust Soc Am ; 155(5): 2934-2947, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38717201

ABSTRACT

Spatial separation and fundamental frequency (F0) separation are effective cues for improving the intelligibility of target speech in multi-talker scenarios. Previous studies predominantly focused on spatial configurations within the frontal hemifield, overlooking the ipsilateral side and the entire median plane, where localization confusion often occurs. This study investigated the impact of spatial and F0 separation on intelligibility under these underexplored spatial configurations. The speech reception thresholds were measured through three experiments for scenarios involving two to four talkers, either in the ipsilateral horizontal plane or in the entire median plane, utilizing monotonized speech with varying F0s as stimuli. The results revealed that spatial separation in symmetrical positions (front-back symmetry in the ipsilateral horizontal plane or front-back, up-down symmetry in the median plane) contributes positively to intelligibility. Both target direction and relative target-masker separation influence the masking release attributed to spatial separation. As the number of talkers exceeds two, the masking release from spatial separation diminishes. Nevertheless, F0 separation remains a remarkably effective cue and could even facilitate spatial separation in improving intelligibility. Further analysis indicated that current intelligibility models encounter difficulties in accurately predicting intelligibility in scenarios explored in this study.
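The masking-release quantity underlying these comparisons is simply the difference in speech reception threshold (SRT) between a reference condition and a separated condition. A minimal sketch with hypothetical SRTs (the values are invented, not the study's data):

```python
def masking_release(srt_reference_db, srt_test_db):
    """Release from masking in dB: SRT improvement relative to the reference
    (co-located / same-F0) condition.  Positive values indicate benefit,
    since lower SRTs mean better intelligibility."""
    return srt_reference_db - srt_test_db

# Hypothetical SRTs in dB SNR (lower is better):
srm = masking_release(-2.0, -6.5)        # spatial separation alone
combined = masking_release(-2.0, -9.0)   # spatial plus F0 separation
```
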


Subject(s)
Cues , Perceptual Masking , Sound Localization , Speech Intelligibility , Speech Perception , Humans , Female , Male , Young Adult , Adult , Speech Perception/physiology , Acoustic Stimulation , Auditory Threshold , Speech Acoustics , Speech Reception Threshold Test , Noise
4.
J Acoust Soc Am ; 155(5): 3060-3070, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38717210

ABSTRACT

Speakers tailor their speech to different types of interlocutors. For example, speech directed to voice technology has different acoustic-phonetic characteristics than speech directed to a human. The present study investigates the perceptual consequences of human- and device-directed registers in English. We compare two groups of speakers: participants whose first language is English (L1) and bilingual L1 Mandarin-L2 English talkers. Participants produced short sentences in several conditions: an initial production and a repeat production after a human or device guise indicated either understanding or misunderstanding. In experiment 1, a separate group of L1 English listeners heard these sentences and transcribed the target words. In experiment 2, the same productions were transcribed by an automatic speech recognition (ASR) system. Results show that transcription accuracy was highest for L1 talkers for both human and ASR transcribers. Furthermore, there were no overall differences in transcription accuracy between human- and device-directed speech. Finally, while human listeners showed an intelligibility benefit for coda repair productions, the ASR transcriber did not benefit from these enhancements. Findings are discussed in terms of models of register adaptation, phonetic variation, and human-computer interaction.
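Transcription accuracy for both human listeners and an ASR system is typically scored against the target words via edit distance, as in word error rate. A generic stdlib implementation for illustration (not the study's scoring script):

```python
def word_error_rate(reference, hypothesis):
    """Levenshtein distance over words (substitutions, insertions,
    deletions), normalized by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)
```

For single target words, as in experiment 1, this reduces to simple correct/incorrect scoring; for full sentences it also counts insertions and deletions.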


Subject(s)
Multilingualism , Speech Intelligibility , Speech Perception , Humans , Male , Female , Adult , Young Adult , Speech Acoustics , Phonetics , Speech Recognition Software
5.
J Speech Lang Hear Res ; 67(6): 1945-1963, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38749011

ABSTRACT

PURPOSE: The Chinese Emotional Speech Audiometry Project (CESAP) aims to establish a new material set for Chinese speech audiometry tests, which can be used in both neutral and emotional prosody settings. As the first endeavor of CESAP, this study demonstrates the development of the material foundation and reports its validation in neutral prosody. METHOD: In the development step, 40 phonetically balanced word lists consisting of 30 Chinese disyllabic words with neutral valence were first generated. In a subsequent affective rating experiment, 35 word lists qualified for validation based on the familiarity and valence ratings from 30 normal-hearing (NH) participants. For validation, performance-intensity functions of each word list were fitted with responses from 60 NH subjects under six presentation levels (-1, 3, 5, 7, 11, and 20 dB HL). The final material set was determined by the intelligibility scores at each decibel level and the mean slopes. RESULTS: First, 35 lists satisfied the criteria of phonetic balance, limited repetitions, high familiarity, and neutral valence and were selected for validation. Second, 15 lists were compiled in the final material set based on the pairwise differences in intelligibility scores and the fitted 20%-80% slopes. The established material set had high reliability and validity and was sensitive in detecting intelligibility changes (50% slope: 6.20%/dB; 20%-80% slope: 5.45%/dB), with small coefficients of variation for thresholds (15%), 50% slope (12%), and 20%-80% slope (12%). CONCLUSION: Our final material set of 15 word lists is the first to control the emotional aspect of audiometry tests, which enriches available Mandarin speech recognition materials and warrants future assessments in emotional prosody among populations with hearing impairments. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.25742814.
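Performance-intensity functions like these are commonly fitted with a logistic; assuming that shape, the slope at the 50% point determines the 20%-80% slope. The 6.2%/dB midpoint slope is taken from the abstract; the 5 dB HL midpoint below is a hypothetical value for illustration, and the logistic form itself is an assumption (the paper does not specify its fitting function here).

```python
import math

def psychometric(level_db, l50_db, mid_slope_pct_per_db):
    """Logistic performance-intensity function.  The slope at the 50% point
    (in %/dB) fixes the logistic steepness k via slope = 100 * k / 4."""
    k = 4 * mid_slope_pct_per_db / 100
    return 1 / (1 + math.exp(-k * (level_db - l50_db)))

def level_at(p, l50_db, mid_slope_pct_per_db):
    """Presentation level (dB) at which the function reaches proportion p."""
    k = 4 * mid_slope_pct_per_db / 100
    return l50_db + math.log(p / (1 - p)) / k

mid_slope = 6.2   # %/dB, as reported for the final set
l50 = 5.0         # dB HL, hypothetical midpoint
slope_20_80 = 60 / (level_at(0.8, l50, mid_slope) - level_at(0.2, l50, mid_slope))
```

Under this assumption the implied 20%-80% slope comes out a little under the midpoint slope, consistent in spirit with the 5.45%/dB vs. 6.20%/dB values reported.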


Subject(s)
Audiometry, Speech , Emotions , Humans , Female , Male , Adult , Young Adult , Audiometry, Speech/methods , Reproducibility of Results , China , Speech Perception , Phonetics , Language , Speech Intelligibility , East Asian People
6.
Hear Res ; 448: 109031, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38761554

ABSTRACT

In recent studies, psychophysiological measures have been used as markers of listening effort, but there is limited research on the effect of hearing loss on such measures. The aim of the current study was to investigate the effect of hearing acuity on physiological responses and subjective measures acquired during different levels of listening demand, and to investigate the relationship between these measures. A total of 125 participants (37 males and 88 females, age range 37-72 years, pure-tone average hearing thresholds at the best ear from -5.0 to 68.8 dB HL, and asymmetry between ears from 0.0 to 87.5 dB) completed a listening task. A speech reception threshold (SRT) test was used with target sentences spoken by a female voice masked by male speech. Listening demand was manipulated using three levels of intelligibility: 20 % correct speech recognition, 50 %, and 80 % (IL20 %/IL50 %/IL80 %, respectively). During the task, peak pupil dilation (PPD), heart rate (HR), pre-ejection period (PEP), respiratory sinus arrhythmia (RSA), and skin conductance level (SCL) were measured. For each condition, subjective ratings of effort, performance, difficulty, and tendency to give up were also collected. Linear mixed effects models tested the effect of intelligibility level, hearing acuity, hearing asymmetry, and tinnitus complaints on the physiological reactivity (compared to baseline) and subjective measures. PPD and PEP reactivity showed a non-monotonic relationship with intelligibility level, but no such effects were found for HR, RSA, or SCL reactivity. Participants with worse hearing acuity had lower PPD at all intelligibility levels and showed lower PEP baseline levels. Additionally, PPD and SCL reactivity were lower for participants who reported suffering from tinnitus complaints. For IL80 %, but not IL50 % or IL20 %, participants with worse hearing acuity rated their listening effort to be relatively high compared to participants with better hearing. The reactivities of the different physiological measures were uncorrelated or only weakly correlated with each other. Together, the results suggest that hearing acuity may be associated with altered sympathetic nervous system (re)activity. Research using psychophysiological measures as markers of listening effort to study the effects of hearing acuity is best served by the use of the PPD and PEP.


Subject(s)
Auditory Threshold , Hearing , Heart Rate , Speech Intelligibility , Speech Perception , Speech Reception Threshold Test , Humans , Male , Female , Middle Aged , Adult , Aged , Audiometry, Pure-Tone , Acoustic Stimulation , Perceptual Masking , Galvanic Skin Response , Pupil/physiology , Persons With Hearing Impairments/psychology
7.
J Acoust Soc Am ; 155(4): 2482-2491, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38587430

ABSTRACT

Despite a vast literature on how speech intelligibility is affected by hearing loss and advanced age, remarkably little is known about the perception of talker-related information in these populations. Here, we assessed the ability of listeners to detect whether a change in talker occurred while listening to and identifying sentence-length sequences of words. Participants were recruited in four groups that differed in their age (younger/older) and hearing status (normal/impaired). The task was conducted in quiet or in a background of same-sex two-talker speech babble. We found that age and hearing loss had detrimental effects on talker change detection, in addition to their expected effects on word recognition. We also found subtle differences in the effects of age and hearing loss for trials in which the talker changed vs trials in which the talker did not change. These findings suggest that part of the difficulty encountered by older listeners, and by listeners with hearing loss, when communicating in group situations, may be due to a reduced ability to identify and discriminate between the participants in the conversation.
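Change-detection performance of this kind is usually summarized with the signal-detection sensitivity index d', computed from hit and false-alarm rates. A sketch with hypothetical rates (the numbers are invented, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity for talker-change detection: z(hits) - z(false alarms),
    where z is the inverse standard-normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates for a younger normal-hearing listener vs. an older
# listener with hearing loss:
d_young = d_prime(0.90, 0.10)
d_older = d_prime(0.75, 0.25)
```

Separating hits (change trials) from false alarms (no-change trials) is what lets this kind of analysis reveal the subtle change-trial vs. no-change-trial differences the authors describe.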


Subject(s)
Deafness , Hearing Loss , Humans , Hearing Loss/diagnosis , Speech Intelligibility
8.
J Acoust Soc Am ; 155(4): 2849-2859, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38682914

ABSTRACT

The context-based Extended Speech Transmission Index (cESTI) (van Schoonhoven et al., 2022, J. Acoust. Soc. Am. 151, 1404-1415) was successfully applied to predict the intelligibility of monosyllabic words with different degrees of context in interrupted noise. The current study aimed to use the same model for the prediction of sentence intelligibility in different types of non-stationary noise. The necessary context factors and transfer functions were based on values found in existing literature. The cESTI performed similarly to, or better than, the original ESTI when noise had speech-like characteristics. We hypothesize that the remaining inaccuracies in model predictions can be attributed to the limits of the modelling approach with regard to mechanisms such as modulation masking and informational masking.
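At the core of the STI family is the mapping from a modulation transfer value m to a transmission index: m is converted to an apparent SNR, clipped to ±15 dB, and scaled to 0..1. The simplified single-band version below shows only that core step; the ESTI and cESTI add band weighting, and the cESTI additionally adds context factors and transfer functions on top of it.

```python
import math

def transmission_index(m):
    """STI-style mapping: modulation transfer m (0 < m < 1) -> apparent
    SNR in dB, clipped to +/-15 dB, then scaled to a 0..1 index."""
    snr = 10 * math.log10(m / (1 - m))
    snr = max(-15.0, min(15.0, snr))
    return (snr + 15) / 30
```

Fluctuating ("interrupted") noise preserves more of the speech envelope in its gaps, raising m and hence the index, which is why extensions like the cESTI target non-stationary maskers.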


Subject(s)
Noise , Perceptual Masking , Speech Intelligibility , Speech Perception , Humans , Perceptual Masking/physiology , Female , Speech Perception/physiology , Male , Adult , Young Adult , Speech Acoustics , Models, Theoretical , Acoustic Stimulation
9.
Trends Hear ; 28: 23312165241240572, 2024.
Article in English | MEDLINE | ID: mdl-38676325

ABSTRACT

Realistic outcome measures that reflect everyday hearing challenges are needed to assess hearing aid and cochlear implant (CI) fitting. Literature suggests that listening effort measures may be more sensitive to differences between hearing-device settings than established speech intelligibility measures when speech intelligibility is near maximum. Which method provides the most effective measurement of listening effort for this purpose is currently unclear. This study aimed to investigate the feasibility of two tests for measuring changes in listening effort in CI users due to signal-to-noise ratio (SNR) differences, as would arise from different hearing-device settings. By comparing the effect size of SNR differences on listening effort measures with test-retest differences, the study evaluated the suitability of these tests for clinical use. Nineteen CI users underwent two listening effort tests at two SNRs (+4 and +8 dB relative to individuals' 50% speech perception threshold). We employed two dual-task paradigms, a sentence-final word identification and recall test (SWIRT) and a sentence verification test (SVT), to assess listening effort at these two SNRs. Our results show a significant difference in listening effort between the SNRs for both test methods, although the effect size was comparable to the test-retest difference, and the sensitivity was not superior to speech intelligibility measures. Thus, the implementations of SVT and SWIRT used in this study are not suitable for clinical use to measure listening effort differences of this magnitude in individual CI users. However, they can be used in research involving CI users to analyze group data.


Subject(s)
Cochlear Implantation , Cochlear Implants , Feasibility Studies , Persons With Hearing Impairments , Speech Intelligibility , Speech Perception , Humans , Male , Female , Speech Perception/physiology , Middle Aged , Aged , Speech Intelligibility/physiology , Cochlear Implantation/instrumentation , Persons With Hearing Impairments/rehabilitation , Persons With Hearing Impairments/psychology , Reproducibility of Results , Acoustic Stimulation , Signal-To-Noise Ratio , Adult , Aged, 80 and over , Auditory Threshold/physiology , Predictive Value of Tests , Correction of Hearing Impairment/instrumentation , Noise/adverse effects
10.
Trends Hear ; 28: 23312165241246616, 2024.
Article in English | MEDLINE | ID: mdl-38656770

ABSTRACT

Negativity bias is a cognitive bias that results in negative events being perceptually more salient than positive ones. For hearing care, this means that hearing aid benefits can potentially be overshadowed by adverse experiences. Research has shown that sustaining focus on positive experiences has the potential to mitigate negativity bias. The purpose of the current study was to investigate whether a positive focus (PF) intervention can improve speech-in-noise abilities for experienced hearing aid users. Thirty participants were randomly allocated to a control or PF group (N = 2 × 15). Prior to hearing aid fitting, all participants filled out the short form of the Speech, Spatial and Qualities of Hearing scale (SSQ12) based on their own hearing aids. At the first visit, they were fitted with study hearing aids, and speech-in-noise testing was performed. Both groups then wore the study hearing aids for two weeks and sent daily text messages reporting hours of hearing aid use to an experimenter. In addition, the PF group was instructed to focus on positive listening experiences and to also report them in the daily text messages. After the 2-week trial, all participants filled out the SSQ12 questionnaire based on the study hearing aids and completed the speech-in-noise testing again. Speech-in-noise performance and SSQ12 Qualities score were improved for the PF group but not for the control group. This finding indicates that the PF intervention can improve subjective and objective hearing aid benefits.


Subject(s)
Correction of Hearing Impairment , Hearing Aids , Noise , Persons With Hearing Impairments , Speech Intelligibility , Speech Perception , Humans , Male , Female , Aged , Noise/adverse effects , Middle Aged , Correction of Hearing Impairment/instrumentation , Persons With Hearing Impairments/rehabilitation , Persons With Hearing Impairments/psychology , Perceptual Masking , Hearing Loss/rehabilitation , Hearing Loss/psychology , Hearing Loss/diagnosis , Audiometry, Speech , Surveys and Questionnaires , Aged, 80 and over , Time Factors , Acoustic Stimulation , Hearing , Treatment Outcome
11.
Am J Audiol ; 33(2): 442-454, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38557158

ABSTRACT

PURPOSE: This study examined children's ability to perceive speech from multiple locations on the horizontal plane. Children with hearing loss were compared to normal-hearing peers while using amplification with and without advanced noise management. METHOD: Participants were 21 children with normal hearing (9-15 years) and 12 children with moderate symmetrical hearing loss (11-15 years). Word recognition, nonword detection, and word recall were assessed. Stimuli were presented randomly from multiple discrete locations in multitalker noise. Children with hearing loss were fit with devices having separate omnidirectional and noise management programs. The noise management feature is designed to preserve audibility in noise by rapidly analyzing input from all locations and reducing the strength of the noise management when speech is detected from locations around the hearing aid user. RESULTS: Significant effects of left/right and front/back lateralization occurred as well as effects of hearing loss and hearing aid noise management. Children with normal hearing experienced a left-side advantage for word recognition and a right-side advantage for nonword detection. Children with hearing loss demonstrated poorer performance overall on all tasks with better word recognition from the back, and word recall from the right, in the omnidirectional condition. With noise management, performance improved from the front compared to the back for all three tasks and from the right for word recognition and word recall. CONCLUSIONS: The shape of children's local speech intelligibility on the horizontal plane is not omnidirectional. It is task dependent and shaped further by hearing loss and hearing aid signal processing. Front/back shifts in children with hearing loss are consistent with the behavior of hearing aid noise management, while the right-side biases observed in both groups are consistent with the effects of specialized speech processing in the left hemisphere of the brain.


Subject(s)
Hearing Aids , Noise , Speech Intelligibility , Speech Perception , Humans , Child , Adolescent , Male , Female , Case-Control Studies , Sound Localization , Hearing Loss, Sensorineural/rehabilitation , Hearing Loss, Sensorineural/physiopathology
12.
J Acoust Soc Am ; 155(3): 2151-2168, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38501923

ABSTRACT

Cochlear implant (CI) recipients often struggle to understand speech in reverberant environments. Speech enhancement algorithms could restore speech perception for CI listeners by removing reverberant artifacts from the CI stimulation pattern. Listening studies, either with cochlear-implant recipients or normal-hearing (NH) listeners using a CI acoustic model, provide a benchmark for speech intelligibility improvements conferred by the enhancement algorithm but are costly and time-consuming. To reduce the associated costs during algorithm development, speech intelligibility could be estimated offline using objective intelligibility measures. Previous evaluations of objective measures that considered CIs primarily assessed the combined impact of noise and reverberation and employed highly accurate enhancement algorithms. To facilitate the development of enhancement algorithms, we evaluate twelve objective measures in reverberant-only conditions characterized by a gradual reduction of reverberant artifacts, simulating the performance of an enhancement algorithm during development. Measures are validated against the performance of NH listeners using a CI acoustic model. To enhance compatibility with reverberant CI-processed signals, measure performance was assessed after modifying the reference signal and spectral filterbank. Measures leveraging the speech-to-reverberant ratio, cepstral distance, and, after modifying the reference or filterbank, envelope correlation are strong predictors of intelligibility for reverberant CI-processed speech.
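One of the strong predictors named above, the speech-to-reverberant ratio, can be sketched as an early-to-late energy ratio of the room impulse response. This is a broadband toy version under an assumed ~50 ms early/late boundary; the measure as evaluated in the paper may differ in detail (e.g., channel-wise computation on CI-processed signals).

```python
import math

def speech_to_reverberant_ratio(rir, fs, split_ms=50.0):
    """Energy of the direct/early part of a room impulse response over its
    late reverberant tail, in dB.  The boundary (~50 ms after onset here)
    separates energy that supports intelligibility from energy that smears it."""
    split = int(fs * split_ms / 1000)
    early = sum(x * x for x in rir[:split])
    late = sum(x * x for x in rir[split:])
    return 10 * math.log10(early / late)

# Toy RIR at fs = 1 kHz: a unit direct impulse plus a weak 50-sample tail.
rir = [1.0] + [0.0] * 49 + [0.1] * 50
srr_db = speech_to_reverberant_ratio(rir, fs=1000)
```

A dereverberation algorithm that attenuates the tail raises this ratio, which is why it tracks the "gradual reduction of reverberant artifacts" used in the evaluation.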


Subject(s)
Cochlear Implantation , Cochlear Implants , Speech Intelligibility , Algorithms , Hearing
13.
J Speech Lang Hear Res ; 67(4): 1090-1106, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38498664

ABSTRACT

PURPOSE: This study examined speech changes induced by deep-brain stimulation (DBS) in speakers with Parkinson's disease (PD) using a set of auditory-perceptual and acoustic measures. METHOD: Speech recordings from nine speakers with PD and DBS were compared between DBS-On and DBS-Off conditions using auditory-perceptual and acoustic analyses. Auditory-perceptual ratings included voice quality, articulation precision, prosody, speech intelligibility, and listening effort obtained from 44 listeners. Acoustic measures were made for voicing proportion, second formant frequency slope, vowel dispersion, articulation rate, and range of fundamental frequency and intensity. RESULTS: No significant changes were found between DBS-On and DBS-Off for the five perceptual ratings. Four of six acoustic measures revealed significant differences between the two conditions. While articulation rate and acoustic vowel dispersion increased, voicing proportion and intensity range decreased from the DBS-Off to DBS-On condition. However, a visual examination of the data indicated that the statistical significance was mostly driven by a small number of participants, while the majority did not show a consistent pattern of such changes. CONCLUSIONS: Our data, in general, indicate that no or minimal changes in speech production ensued from DBS. The findings are discussed with a focus on large interspeaker variability in PD in terms of speech characteristics and the potential effects of DBS on speech.


Subject(s)
Deep Brain Stimulation , Parkinson Disease , Humans , Acoustics , Speech Intelligibility/physiology , Voice Quality , Parkinson Disease/complications , Parkinson Disease/therapy , Brain , Speech Acoustics
14.
Int J Pediatr Otorhinolaryngol ; 179: 111918, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38518421

ABSTRACT

INTRODUCTION: A cleft palate is a common type of facial malformation. Compensatory articulation errors are one of the important causes of unclear speech in children with cleft palate. Tele-practice (TP) helps to connect therapists and clients for assessment and therapy. Our goal is to investigate the effectiveness of articulation therapy through tele-practice on children with cleft palate in Khuzestan Province during the COVID-19 pandemic. MATERIALS & METHODS: Before starting the treatment, a 20-min speech sample was recorded individually from all the children. Speech intelligibility and the percentage of correct consonants were assessed for each speech sample. The control group received treatment sessions in person at the cleft palate center, and the other group received treatment via tele-practice using the ZOOM platform. Treatment sessions were provided in the form of 45-60-min group sessions, twice a week, for 5 weeks (10 sessions in total). After 10 treatment sessions, the speech sample was recorded again. The level of parental satisfaction was measured using a Likert 5-level survey. RESULTS: The mean intelligibility score of the two groups decreased (-1.44 and 0.72). The two groups' mean percentage of correct consonants increased (26.09 and 17.90). In both groups, the mean score of parents' satisfaction with the treatment was high (3.44 and 3.84). The mean pre-post difference in speech intelligibility and in the percentage of correct consonants was statistically significant in both groups (P = 0.001 and P = 0.002, respectively). Satisfaction did not differ significantly between the groups (P = 0.067). CONCLUSION: Over the same treatment period, in-person therapy was more effective than tele-practice. Nevertheless, the results demonstrated an increase in speech intelligibility and the percentage of correct consonants in both groups, demonstrating the effectiveness of articulation therapy, whether in person or via tele-practice, in correcting compensatory articulation errors in children with cleft palate.
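The percentage of correct consonants (PCC) used here is a simple proportion score over the consonants in a speech sample. A sketch with invented counts (the 80-consonant sample and the before/after counts are hypothetical, chosen only to mirror the scale of the reported group improvement):

```python
def percent_correct_consonants(produced_correct, target_consonants):
    """PCC: share of target consonants produced correctly, in percent."""
    return 100.0 * produced_correct / target_consonants

# Invented counts for one child: 80 target consonants in the sample,
# scored before and after the 10 treatment sessions.
before = percent_correct_consonants(40, 80)
after = percent_correct_consonants(61, 80)
improvement = after - before
```
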


Subject(s)
COVID-19 , Cleft Lip , Cleft Palate , Child , Humans , Cleft Palate/therapy , Cleft Palate/complications , Pandemics , Articulation Disorders/etiology , COVID-19/complications , Speech Intelligibility , Speech , Cleft Lip/complications
15.
J Acoust Soc Am ; 155(3): 1916-1927, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38456734

ABSTRACT

Speech quality is one of the main foci of speech-related research, where it is frequently studied alongside speech intelligibility, another essential measurement. At the level of individual frequency bands, however, perceptual speech intelligibility has been studied frequently, whereas speech quality has not been analyzed as thoroughly. In this paper, a Multiple Stimuli With Hidden Reference and Anchor (MUSHRA)-inspired approach was proposed to study the individual robustness of frequency bands to noise, with perceptual speech quality as the measure. Speech signals were filtered into thirty-two frequency bands, with real-world noise added at different signal-to-noise ratios. Robustness-to-noise indices of individual frequency bands were calculated based on the human-rated perceptual quality scores assigned to the reconstructed noisy speech signals. Trends in the results suggest that the mid-frequency region is less robust to noise in terms of perceptual speech quality. These findings suggest future research aiming at improving speech quality should pay particular attention to the mid-frequency region of the speech signal.
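A per-band robustness-to-noise index of this kind can be derived from MUSHRA-style ratings of speech in which noise is confined to one band at a time. The sketch below uses made-up ratings for three bands and a simple retained-fraction definition of the index; the paper's exact index computation is not specified here, so this is an illustrative assumption.

```python
from statistics import mean

def robustness_index(ratings_per_band, clean_rating):
    """For each band, the fraction of the clean reference's rating (0-100,
    MUSHRA-style) retained when that band is noise-corrupted.
    Higher values mean the band is more robust to noise."""
    return [mean(r) / clean_rating for r in ratings_per_band]

# Made-up listener ratings for low / mid / high frequency bands:
idx = robustness_index([[85, 88], [60, 64], [80, 78]], clean_rating=95)
least_robust = min(range(len(idx)), key=idx.__getitem__)
```

With these invented numbers the mid band retains the smallest fraction of the clean rating, mirroring the trend the abstract reports for the mid-frequency region.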


Subject(s)
Speech Perception , Humans , Perceptual Masking , Noise/adverse effects , Speech Intelligibility , Speech Acoustics
16.
Otol Neurotol ; 45(5): e385-e392, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38518764

ABSTRACT

HYPOTHESIS: The behaviorally based programming with loudness balancing (LB) would result in better speech understanding, spectral-temporal resolution, and music perception scores, and there would be a relationship between these scores. BACKGROUND: Loudness imbalances at upper stimulation levels may cause sounds to be perceived as irregular, gravelly, or overly echoed and may negatively affect the listening performance of the cochlear implant (CI) user. LB should be performed after fitting to overcome these problems. METHODS: The study included 26 unilateral Med-EL CI users. Two different CI programs, one based on the objective electrically evoked stapedial reflex threshold (P1) and one based on behavioral programming with LB (P2), were recorded for each participant. The Turkish Matrix Sentence Test (TMS) was applied to evaluate speech perception; the Random Gap Detection Test (RGDT) and Spectral-Temporally Modulated Ripple Test (SMRT) were applied to evaluate spectral temporal resolution skills; the Mini Profile of Music Perception Skills (mini-PROMS) and Melodic Contour Identification (MCI) tests were applied to evaluate music perception, and the results were compared. RESULTS: Significantly better scores were obtained with P2 in TMS tests performed in noise and quiet. SMRT scores were significantly correlated with TMS in quiet and noise, and mini-PROMS sound perception results. Although better scores were obtained with P2 in the mini-PROMS total score and MCI, a significant difference was found only for MCI. CONCLUSION: The data from the current study showed that equalization of loudness across CI electrodes leads to better perceptual acuity. It also revealed the relationship between speech perception, spectral-temporal resolution, and music perception.


Subject(s)
Cochlear Implantation , Cochlear Implants , Music , Speech Perception , Humans , Male , Female , Middle Aged , Adult , Speech Perception/physiology , Cochlear Implantation/methods , Speech Intelligibility/physiology , Aged , Auditory Perception/physiology , Loudness Perception/physiology , Young Adult
17.
Am J Speech Lang Pathol ; 33(3): 1485-1503, 2024 May.
Article in English | MEDLINE | ID: mdl-38512040

ABSTRACT

PURPOSE: Motor deficits are widely documented among autistic individuals, and speech characteristics consistent with a motor speech disorder have been reported in prior literature. We conducted an auditory-perceptual analysis of speech production skills in low and minimally verbal autistic individuals as a step toward clarifying the nature of speech production impairments in this population and the potential link between oromotor functioning and language development. METHOD: Fifty-four low or minimally verbal autistic individuals aged 4-18 years were video-recorded performing nonspeech oromotor tasks and producing phonemes, syllables, and words in imitation. Three trained speech-language pathologists provided auditory perceptual ratings of 11 speech features reflecting speech subsystem performance and overall speech production ability. The presence, attributes, and severity of signs of oromotor dysfunction were analyzed, as were relative performance on nonspeech and speech tasks and correlations between perceptual speech features and language skills. RESULTS AND CONCLUSIONS: Our findings provide evidence of a motor speech disorder in this population, characterized by perceptual speech features including reduced intelligibility, decreased consonant and vowel precision, and impairments of speech coordination and consistency. Speech deficits were more associated with articulation than with other speech subsystems. Speech production was more impaired than nonspeech oromotor abilities in a subgroup of the sample. Oromotor deficits were significantly associated with expressive and receptive language skills. Findings are interpreted in the context of known characteristics of the pediatric motor speech disorders childhood apraxia of speech and childhood dysarthria. These results, if replicated in future studies, have significant potential to improve the early detection of language impairments, inform the development of speech and language interventions, and aid in the identification of neurobiological mechanisms influencing communication development.


Subject(s)
Speech Intelligibility , Humans , Child , Child, Preschool , Male , Adolescent , Female , Speech Perception , Speech Production Measurement , Autistic Disorder/psychology , Autistic Disorder/complications , Autistic Disorder/diagnosis , Video Recording , Speech Disorders/diagnosis , Speech Disorders/physiopathology , Speech-Language Pathology/methods , Articulation Disorders/diagnosis
18.
Eur Arch Otorhinolaryngol ; 281(6): 3227-3235, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38546852

ABSTRACT

PURPOSE: The primary aim of this study was to assess whether the NAL-NL2 and DSL v.5 prescription formulas differ in terms of speech-in-noise intelligibility. METHODS: Data from 43 patients were retrospectively evaluated and analyzed. Inclusion criteria were bilateral conductive, sensorineural, or mixed hearing loss; hearing aid use for at least 1 year; and age 18 years or older. Patients were categorized into two groups based on the prescriptive method employed by the hearing aid: NAL-NL2 or DSL v.5. Pure tone audiometry, speech audiometry, free-field pure tone and speech audiometry with the hearing aid, and the Matrix sentence test were performed. The Abbreviated Profile of Hearing Aid Benefit (APHAB) questionnaire was used to assess the personal audiological benefit provided by the hearing aid. RESULTS: No statistically significant differences were found when comparing the free-field pure tone average (FF PTA) and the free-field Word Recognition Score (FF WRS). Comparing the Speech Reception Threshold (SRT) of patients fitted with NAL-NL2 versus DSL v.5, no statistically significant difference was found, indicating that the two prescription methods are comparable in terms of speech-in-noise intelligibility. Comparing the results of the APHAB questionnaire, no statistically significant differences were evident for any subscale or for overall benefit. When comparing male and female patients fitted with the NAL-NL2 method, no differences were observed in SRT values; however, the APHAB questionnaire revealed a difference in the AV subscale score for the same subjects. CONCLUSION: Our analysis revealed no statistically significant differences in speech-in-noise intelligibility, as measured by SRT values from the Matrix Sentence Test, between the two prescriptive methods.
This result suggests that, functionally, both methods are comparably effective in supporting speech intelligibility in noise. However, the absence of differences does not diminish the importance of considering individual patient needs and preferences when selecting a prescriptive method.
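The kind of between-group null result reported here (SRT under NAL-NL2 vs. DSL v.5) can be sketched with a simple two-sample comparison. The SRT values below are synthetic placeholders, and the choice of a permutation test is an assumption for illustration; the abstract does not state which statistical test the study used.

```python
# Sketch of a between-group SRT comparison (NAL-NL2 vs. DSL v.5).
# All values are synthetic, not study data.
import random
import statistics

srt_nal_nl2 = [-6.2, -5.8, -7.1, -6.5, -5.9, -6.8]  # dB SNR (hypothetical)
srt_dsl_v5 = [-6.0, -6.3, -5.7, -6.9, -6.1, -6.4]   # dB SNR (hypothetical)

def permutation_p(a, b, n_perm=10_000, seed=0):
    """Two-sided permutation test on the difference in group means."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / n_perm

p = permutation_p(srt_nal_nl2, srt_dsl_v5)
print(f"p = {p:.3f}")  # a large p would be reported as "no significant difference"
```

With overlapping group distributions like these, the p-value stays well above conventional alpha levels, mirroring the study's null finding.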


Subject(s)
Hearing Aids , Noise , Speech Intelligibility , Humans , Male , Female , Middle Aged , Retrospective Studies , Aged , Adult , Audiometry, Pure-Tone , Speech Perception , Audiometry, Speech/methods , Surveys and Questionnaires , Aged, 80 and over
19.
Trends Hear ; 28: 23312165241232551, 2024.
Article in English | MEDLINE | ID: mdl-38549351

ABSTRACT

In daily life, both acoustic factors and social context can affect listening effort investment. In laboratory settings, information about listening effort has been deduced from pupil and cardiovascular responses independently. The extent to which these measures can jointly predict listening-related factors is unknown. Here we combined pupil and cardiovascular features to predict acoustic and contextual aspects of speech perception. Data were collected from 29 adults (mean = 64.6 years, SD = 9.2) with hearing loss. Participants performed a speech perception task at two individualized signal-to-noise ratios (corresponding to 50% and 80% of sentences correct) and in two social contexts (the presence and absence of two observers). Seven features were extracted per trial: baseline pupil size, peak pupil dilation, mean pupil dilation, interbeat interval, blood volume pulse amplitude, pre-ejection period, and pulse arrival time. These features were used to train k-nearest neighbor classifiers to predict task demand, social context, and sentence accuracy. K-fold cross-validation on the group-level data revealed above-chance classification accuracies: task demand, 64.4%; social context, 78.3%; and sentence accuracy, 55.1%. However, classification accuracies diminished when the classifiers were trained and tested on data from different participants. Individually trained classifiers (one per participant) performed better than group-level classifiers: 71.7% (SD = 10.2) for task demand, 88.0% (SD = 7.5) for social context, and 60.0% (SD = 13.1) for sentence accuracy. We demonstrated that classifiers trained on group-level physiological data to predict aspects of speech perception generalized poorly to novel participants. Individually calibrated classifiers hold more promise for future applications.
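The core analysis pattern described, per-trial physiological features fed to a k-nearest-neighbor classifier to predict a label such as task demand, can be sketched as below. The feature values, feature pairing, and labels are all hypothetical; the study used seven features and cross-validation, which this toy example only hints at.

```python
# Minimal k-NN sketch: classify trials as high vs. low task demand from
# two hypothetical features (peak pupil dilation, interbeat interval).
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = sorted((math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Synthetic per-trial features, labeled by task demand
# ("high" = 50%-correct SNR, "low" = 80%-correct SNR).
train_X = [(0.42, 0.78), (0.45, 0.75), (0.40, 0.80),   # high demand
           (0.25, 0.92), (0.22, 0.95), (0.28, 0.90)]   # low demand
train_y = ["high", "high", "high", "low", "low", "low"]

print(knn_predict(train_X, train_y, (0.41, 0.79)))  # → high
print(knn_predict(train_X, train_y, (0.24, 0.93)))  # → low
```

An individually calibrated classifier, as the study recommends, would simply restrict `train_X`/`train_y` to one participant's trials.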


Subject(s)
Pupil , Speech Perception , Humans , Pupil/physiology , Speech Intelligibility/physiology , Speech Perception/physiology , Middle Aged , Aged
20.
Auris Nasus Larynx ; 51(3): 537-541, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38537556

ABSTRACT

OBJECTIVE: To reveal differences in phoneme error patterns and articulation between children using cochlear implants (CIs) and those using hearing aids (HAs) for prelingual hearing disorder, and to inform the education of children with prelingual hearing loss. METHOD: Children with prelingual hearing loss who were receiving auditory-verbal preschool education at an auditory center for hearing-impaired children (Fujimidai Auditory Center, Tokyo, Japan) from 2010 to 2020 were analyzed retrospectively. All participants underwent pure tone audiometry and monosyllabic intelligibility tests. Erroneous answers were categorized into five patterns according to the consonant error: substitution, addition, omission, failure, and no response. In addition, consonant errors were classified by manner of articulation, and differences in error patterns between the HA and CI groups were analyzed descriptively. RESULTS: A total of 43 children with bilateral HAs and 46 children with bimodal or bilateral CIs were enrolled. No significant between-group difference in median phoneme intelligibility was found. The most common error pattern was substitution in both the HA and CI groups. The number of addition errors was smaller in the HA group than in the CI group. In both groups, the most common articulation errors involved flaps, and the most common error patterns were flaps to nasals, nasals to nasals, and plosives to plosives. In the HA group, plosives and nasals tended not to be recognized, whereas in the CI group plosives were prone to be added to vowels. CONCLUSIONS: There were some differences in patterns of articulation errors and consonant substitution between the groups. Clarifying which phonemes are difficult to hear and tend to be misheard would help in creating an effective approach to auditory training for children with hearing loss.
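The five-way error categorization described can be sketched as a simple binning of each response against its target consonant. The categories are taken from the abstract, but the decision rules and example trials below are simplified illustrations, not the study's actual scoring protocol (in particular, "failure" is omitted here).

```python
# Toy categorization of monosyllabic-test responses by consonant error type.
# Rules and data are illustrative only.
from collections import Counter

def classify_error(target_consonant, response_consonant):
    """Bin one trial by how the response consonant relates to the target."""
    if response_consonant is None:
        return "no response"
    if response_consonant == target_consonant:
        return "correct"
    if response_consonant == "":
        return "omission"        # target consonant dropped
    if target_consonant == "":
        return "addition"        # consonant added to a vowel-only target
    return "substitution"        # one consonant swapped for another

# (target consonant, response consonant) per trial; None = no answer given.
trials = [("r", "n"), ("", "p"), ("k", ""), ("t", "t"), ("m", None)]
counts = Counter(classify_error(t, r) for t, r in trials)
print(counts)
```

Tallying such counts separately for the HA and CI groups would reproduce the kind of descriptive comparison the study reports (e.g., fewer addition errors in the HA group).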


Subject(s)
Cochlear Implants , Hearing Aids , Speech Intelligibility , Humans , Male , Female , Child, Preschool , Retrospective Studies , Child , Phonetics , Hearing Loss/rehabilitation , Cochlear Implantation , Audiometry, Pure-Tone , Speech Perception