Results 1 - 20 of 422
1.
Front Hum Neurosci ; 18: 1406916, 2024.
Article in English | MEDLINE | ID: mdl-38974481

ABSTRACT

Background: For adults with auditory processing disorder (APD), listening and communicating can be difficult, potentially leading to social isolation, depression, employment difficulties, and reduced quality of life. Although existing practice guidelines suggest treatments, the efficacy of these interventions remains uncertain due to a lack of comprehensive reviews. This systematic review and meta-analysis aims to establish the current evidence on the effectiveness of interventions for APD in adults, addressing the urgent need for clarity in the field. Methods: Following the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines, we conducted a systematic search across MEDLINE (Ovid), Embase (Ovid), Web of Science, and Scopus, focusing on intervention studies involving adults with APD. Studies that met the inclusion criteria were grouped by intervention, with a meta-analysis conducted only where intervention, study design, and outcome measure were comparable. Results: Of 1,618 screened records, 13 studies were included, covering auditory training (AT), low-gain hearing aids (LGHA), and personal remote microphone systems (PRMS). AT showed mixed results, with some improvements in speech intelligibility and listening ability, indicating potential benefits but highlighting the need for standardized protocols. LGHA studies demonstrated significant improvements in monaural low-redundancy speech testing (p < 0.05), suggesting LGHA could enhance speech perception in noisy environments; however, limitations include small sample sizes and potential biases in study design. PRMS demonstrated the most consistent evidence of benefit, significantly improving speech testing results, with no additional benefit from combining PRMS with other interventions. Discussion: PRMS is the most evidence-supported intervention for adults with APD, although further high-quality research is needed for all intervention types. Establishing and implementing standardized intervention protocols, alongside rigorously validated outcome measures, will enable a more evidence-based approach to managing APD in adults.
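As a concrete illustration of the pooling step such a meta-analysis performs where studies are comparable, here is a minimal inverse-variance (fixed-effect) sketch in Python; the effect sizes and standard errors are invented for illustration and are not taken from the review.

```python
# Generic inverse-variance pooling of comparable study effects, the kind
# of fixed-effect meta-analysis step implied when studies share an
# intervention, design, and outcome. Numbers below are made up.
import numpy as np

effects = np.array([0.42, 0.31, 0.58])   # illustrative effect sizes
ses = np.array([0.15, 0.20, 0.18])       # their standard errors

w = 1 / ses**2                           # inverse-variance weights
pooled = (w * effects).sum() / w.sum()
pooled_se = (1 / w.sum()) ** 0.5
print(f"pooled effect = {pooled:.2f} ± {1.96 * pooled_se:.2f} (95% CI)")
```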

2.
Healthcare (Basel) ; 12(12)2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38921276

ABSTRACT

(1) Background: Auditory processing (AP) disorder is associated with learning difficulties and poses challenges to school-aged children in their daily activities. This scoping review identifies interventions and provides audiologists with protocol insights and outcome measures. (2) Methods: A systematic search of both peer-reviewed and grey literature (January 2006 to August 2023) covered ten databases. Studies included had the following characteristics: (i) published in French or English; (ii) participants were school-aged, and had a normal audiogram, AP difficulties or disorder, and no cognitive, developmental, congenital or neurological disorder (with the exception of learning, attention, and language disabilities); (iii) were intervention studies or systematic reviews. (3) Results: Forty-two studies were included, and they predominantly featured auditory training (AT), addressing spatial processing, dichotic listening, temporal processing and listening to speech in noise. Some interventions included cognitive or language training, assistive devices or hearing aids. Outcome measures listed included electrophysiological, AP, cognitive and language measures and questionnaires addressed to parents, teachers or the participants. (4) Conclusions: Most interventions focused on bottom-up approaches, particularly AT. A limited number of top-down approaches were observed. The compiled tools underscore the need for research on metric responsiveness and point to the inadequate consideration given to understanding how children perceive change.

3.
Biology (Basel) ; 13(6)2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38927296

ABSTRACT

Understanding speech in noise is particularly difficult for individuals occupationally exposed to noise, due to a mix of noise-induced auditory lesions and energetic masking of the speech signal. For years, monitoring conventional audiometric thresholds has been the usual method of checking and preserving auditory function. Recently, suprathreshold deficits, notably difficulty understanding speech in noise, have pointed to the need for new monitoring tools. The present study aims to identify the variables that best predict speech-in-noise understanding, in order to suggest a new method of monitoring hearing status. Physiological variables (distortion-product otoacoustic emissions, electrocochleography) and behavioral variables (amplitude and frequency modulation detection thresholds, conventional and extended high-frequency audiometric thresholds) were collected in a population with relatively homogeneous occupational noise exposure. These variables were used as predictors in a statistical model (random forest) to predict scores on three different speech-in-noise tests and a self-report of speech-in-noise ability. The extended high-frequency threshold emerged as the best predictor and is therefore an interesting candidate for a new way of monitoring noise-exposed professionals.
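A minimal sketch of this kind of analysis, using scikit-learn's random forest to predict a speech-in-noise score and rank predictors by importance; the data and column names are synthetic stand-ins for the study's variables, not its actual dataset.

```python
# Sketch: predict a speech-in-noise score from audiometric/physiological
# predictors with a random forest, then rank predictors by importance.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 120  # hypothetical cohort size
df = pd.DataFrame({
    "dpoae_level": rng.normal(0, 5, n),
    "am_detection": rng.normal(-12, 3, n),
    "fm_detection": rng.normal(-15, 3, n),
    "pta_0p5_4k": rng.normal(10, 6, n),
    "ehf_threshold_9_16k": rng.normal(25, 12, n),
})
# Synthetic outcome loosely driven by the EHF threshold, echoing the finding.
df["speech_in_noise_srt"] = 0.1 * df["ehf_threshold_9_16k"] + rng.normal(0, 1, n)

X, y = df.drop(columns="speech_in_noise_srt"), df["speech_in_noise_srt"]
model = RandomForestRegressor(n_estimators=500, random_state=0)
print("CV R^2:", cross_val_score(model, X, y, cv=5).mean().round(2))

model.fit(X, y)
for name, imp in sorted(zip(X.columns, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```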

4.
Trends Hear ; 28: 23312165241260029, 2024.
Article in English | MEDLINE | ID: mdl-38831646

ABSTRACT

The extent to which active noise cancelation (ANC), when combined with hearing assistance, can improve speech intelligibility in noise is not well understood. One possible source of benefit is ANC's ability to reduce the sound level of the direct (i.e., vent-transmitted) path. This reduction lowers the "floor" imposed by the direct path, thereby allowing any increases to the signal-to-noise ratio (SNR) created in the amplified path to be "realized" at the eardrum. Here we used a modeling approach to estimate this benefit. We compared pairs of simulated hearing aids that differ only in their ability to provide ANC and computed intelligibility metrics on their outputs. The difference in metric scores between the simulated devices is termed the "ANC Benefit." These simulations show that the ANC Benefit increases as (1) the environmental sound level increases, (2) the ability of the hearing aid to improve SNR increases, (3) the strength of the ANC increases, and (4) the hearing loss severity decreases. The predicted size of the ANC Benefit can be substantial. For a moderate hearing loss, the model predicts improvement in intelligibility metrics of >30% when environments are moderately loud (>70 dB SPL) and devices are moderately capable of increasing SNR (by >4 dB). It appears that ANC can be a critical ingredient in hearing devices that attempt to improve SNR in loud environments, and it will become increasingly important as advanced SNR-improving algorithms (e.g., artificial-intelligence speech enhancement) are included in hearing devices.
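To make the "direct-path floor" argument concrete, here is a toy Python model, not the authors' model, in which the eardrum signal is the power sum of an amplified path and a vent-transmitted direct path that ANC attenuates; all levels, gains, and the ANC depth are illustrative assumptions.

```python
# Toy model of the "ANC Benefit" idea: ANC lowers the vent-transmitted
# (direct) path, so SNR gains created in the amplified path survive at
# the eardrum. All values are illustrative.
import numpy as np

def db_sum(*levels_db):
    """Power-sum of sound levels given in dB."""
    return 10 * np.log10(sum(10 ** (l / 10) for l in levels_db))

def eardrum_snr(speech_db, noise_db, gain_db, snr_gain_db, anc_db):
    # Amplified path: gain applied, noise reduced by snr_gain_db.
    aided_speech = speech_db + gain_db
    aided_noise = noise_db + gain_db - snr_gain_db
    # Direct (vent) path: both signals attenuated by ANC.
    direct_speech = speech_db - anc_db
    direct_noise = noise_db - anc_db
    return (db_sum(aided_speech, direct_speech)
            - db_sum(aided_noise, direct_noise))

env = dict(speech_db=75, noise_db=75, gain_db=5, snr_gain_db=4)
benefit = eardrum_snr(**env, anc_db=15) - eardrum_snr(**env, anc_db=0)
print(f"SNR benefit from 15 dB of ANC: {benefit:.1f} dB")
```

With no ANC, the unattenuated direct path dilutes the 4 dB SNR improvement created in the amplified path; attenuating the direct path lets the eardrum SNR approach the full amplified-path SNR.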


Subject(s)
Hearing Aids , Noise , Perceptual Masking , Signal-To-Noise Ratio , Speech Intelligibility , Speech Perception , Humans , Noise/adverse effects , Computer Simulation , Acoustic Stimulation , Correction of Hearing Impairment/instrumentation , Persons With Hearing Impairments/rehabilitation , Persons With Hearing Impairments/psychology , Hearing Loss/diagnosis , Hearing Loss/rehabilitation , Hearing Loss/physiopathology , Equipment Design , Signal Processing, Computer-Assisted
5.
Lang Speech ; : 238309241254350, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38853599

ABSTRACT

Previous research has shown that it is difficult for English speakers to distinguish the front rounded vowels /y/ and /ø/ from the back rounded vowels /u/ and /o/. In this study, we examine the effect of noise on this perceptual difficulty. In an Oddity Discrimination Task, English speakers without any knowledge of German were asked to discriminate between German-sounding pseudowords that varied in their vowel, presented both in quiet and in white noise at two signal-to-noise ratios (8 and 0 dB). In test trials, vowels of the same height were contrasted with each other, whereas a contrast with /a/ served as a control trial. Results revealed that the contrast with /a/ remained stable in every listening condition for both high and mid vowels. When contrasting vowels of the same height, however, there was a perceptual shift along the F2 dimension as the noise level increased. Although the /ø/-/o/ and particularly the /y/-/u/ contrasts were the most difficult in quiet, accuracy on /i/-/y/ and /e/-/ø/ trials decreased markedly when the speech signal was masked. The German control group showed the same pattern, albeit less severely than the non-native group, suggesting that even in low-level tasks with pseudowords, there is a native advantage in speech perception in noise.

6.
Sci Rep ; 14(1): 13089, 2024 06 07.
Article in English | MEDLINE | ID: mdl-38849415

ABSTRACT

Speech-in-noise (SIN) perception is a primary complaint of individuals with audiometric hearing loss. SIN performance varies drastically, even among individuals with normal hearing. The present genome-wide association study (GWAS) investigated the genetic basis of SIN deficits in individuals with self-reported normal hearing in quiet situations. GWAS was performed on 279,911 individuals from the UK Biobank cohort, 58,847 of whom reported SIN deficits despite reporting normal hearing in quiet. GWAS identified 996 single nucleotide polymorphisms (SNPs) reaching genome-wide significance (p < 5 × 10⁻⁸) across four genomic loci, and 720 SNPs across 21 loci reaching suggestive significance (p < 10⁻⁶). GWAS signals were enriched in brain tissues, such as the anterior cingulate cortex, dorsolateral prefrontal cortex, entorhinal cortex, frontal cortex, hippocampus, and inferior temporal cortex. Cochlear cell types revealed no significant association with SIN deficits. SIN deficits were associated with various health traits, including neuropsychiatric, sensory, cognitive, metabolic, cardiovascular, and inflammatory conditions. A replication analysis was conducted on 242 healthy young adults, using self-reported speech perception, hearing thresholds (0.25-16 kHz), and distortion-product otoacoustic emissions (1-16 kHz). 73 SNPs were replicated with the self-reported speech perception measure; 211 SNPs were replicated with at least one audiological measure and 66 with at least two. 12 SNPs near or within MAPT, GRM3, and HLA-DQA1 were replicated for all audiological measures. The present study highlighted a polygenic architecture underlying SIN deficits in individuals with self-reported normal hearing.
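The two significance tiers reported here are conventional GWAS cutoffs; a minimal sketch of applying them to summary statistics follows, with synthetic p-values standing in for real data.

```python
# Conventional GWAS cutoffs used in the abstract: genome-wide significance
# (p < 5e-8) and suggestive significance (p < 1e-6). Synthetic p-values
# stand in for real summary statistics.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
gwas = pd.DataFrame({"snp": [f"rs{i}" for i in range(100_000)],
                     "p": rng.uniform(0, 1, 100_000) ** 6})  # skewed toward 0

genome_wide = gwas[gwas["p"] < 5e-8]
suggestive = gwas[(gwas["p"] >= 5e-8) & (gwas["p"] < 1e-6)]
print(len(genome_wide), "genome-wide significant;", len(suggestive), "suggestive")
```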


Subject(s)
Genome-Wide Association Study , Multifactorial Inheritance , Noise , Polymorphism, Single Nucleotide , Speech Perception , Humans , Male , Female , Speech Perception/genetics , Adult , Middle Aged , Self Report , Aged , Hearing/genetics , Young Adult
7.
Biology (Basel) ; 13(6)2024 May 23.
Article in English | MEDLINE | ID: mdl-38927251

ABSTRACT

Auditory temporal processing is a vital component of auditory stream segregation, the process by which complex sounds are separated and organized into perceptually meaningful objects. Temporal processing can degrade prior to hearing loss and is suggested to be a contributing factor to difficulties with speech-in-noise perception in normal-hearing listeners. The current study tested this hypothesis in middle-aged adults, an under-investigated cohort despite being the age group in which speech-in-noise difficulties are first reported. In 76 participants, three mechanisms of temporal processing were measured: peripheral auditory nerve function using electrocochleography, subcortical encoding of periodic speech cues (i.e., fundamental frequency; F0) using the frequency-following response, and binaural sensitivity to temporal fine structure (TFS) using a dichotic frequency modulation detection task. Two measures of speech-in-noise perception were administered to explore how the contributions of temporal processing may be mediated by the different sensory demands of each speech perception task. The study supported the hypothesis that temporal coding deficits contribute to speech-in-noise difficulties in middle-aged listeners. Poorer speech-in-noise perception was associated with weaker subcortical F0 encoding and binaural TFS sensitivity, but in different contexts, highlighting that diverse aspects of temporal processing are differentially utilized depending on speech-in-noise task characteristics.

8.
Article in English | MEDLINE | ID: mdl-38908790

ABSTRACT

INTRODUCTION: Human beings are constantly exposed to complex acoustic environments, which pose challenges even for individuals with normal hearing. Speech perception relies not only on fixed elements within the acoustic wave but is also influenced by various factors, including speech intensity, environmental noise, the presence of other speakers, individual characteristics, spatial separation of sound sources, ambient reverberation, and audiovisual cues. The objective of this study is twofold: to determine the capacity of normal-hearing individuals to discriminate spoken words in real-life acoustic conditions and to perform a phonetic analysis of misunderstood spoken words. MATERIALS AND METHODS: This is a descriptive, observational, cross-sectional study involving 20 normal-hearing individuals. Verbal audiometry was conducted in an open-field environment, with sounds masked by simulated real-world acoustic environments at various sound intensity levels. To complement the sound presentation, 2D visual images related to the sounds were displayed on a television. We analyzed the percentage of correct answers and performed a phonetic analysis of misunderstood Spanish bisyllabic words in each environment. RESULTS: Participants were 14 women (70%) and 6 men (30%), with a mean age of 26 ± 5.4 years, a mean air-conduction hearing threshold of 10.56 ± 3.52 dB SPL in the right ear, and 10.12 ± 2.49 dB SPL in the left ear. The percentage of verbal discrimination was 97.2 ± 5.04% in the "Ocean" sound environment, 94 ± 4.58% in "Restaurant", and 86.2 ± 9.94% in "Traffic" (p < 0.001). In the phonetic analysis, the allophones that exhibited statistically significant differences were: [o] (p = 0.002) among the vocalic phonemes, [n] (p < 0.001) among voiced nasal consonants, [r] (p = 0.0016) among voiced fricatives, and [b] (p < 0.001) and [g] (p = 0.045) among voiced stops. CONCLUSION: The dynamic properties of the acoustic environment can impact the ability of a normal-hearing individual to extract information from a voice signal. Our study demonstrates that this ability decreases when the voice signal is masked by one or more simultaneous interfering voices, as in the "Restaurant" environment, and when it is masked by continuous, intense noise such as "Traffic". Regarding the phonetic analysis, when the sound environment consisted of continuous low-frequency noise, nasal consonants were particularly difficult to identify. Furthermore, in situations with distracting verbal signals, vowels and trill consonants showed the worst intelligibility.

9.
Front Neurosci ; 18: 1379988, 2024.
Article in English | MEDLINE | ID: mdl-38784097

ABSTRACT

The prevalence of synthetic talking faces in both commercial and academic environments is increasing as the technology to generate them grows more powerful and available. While it has long been known that seeing the face of the talker improves human perception of speech-in-noise, recent studies have shown that synthetic talking faces generated by deep neural networks (DNNs) can also improve human perception of speech-in-noise. However, in previous studies the benefit provided by DNN synthetic faces was only about half that of real human talkers. We sought to determine whether synthetic talking faces generated by an alternative method would provide a greater perceptual benefit. The facial action coding system (FACS) is a comprehensive system for measuring visually discernible facial movements. Because the action units that comprise FACS are linked to specific muscle groups, synthetic talking faces generated from FACS might have greater verisimilitude than DNN synthetic faces, which do not reference an explicit model of the facial musculature. We tested the ability of human observers to identify speech-in-noise accompanied by a blank screen; the real face of the talker; and synthetic talking faces generated either by DNN or FACS. We replicated previous findings of a large benefit of seeing a real talker's face for speech-in-noise perception and a smaller benefit from DNN synthetic faces. FACS faces also improved perception, but only to the same degree as DNN faces. Analysis at the phoneme level showed that the performance of DNN and FACS faces was particularly poor for phonemes that involve interactions between the teeth and lips, such as /f/, /v/, and /th/. Inspection of single video frames revealed that the characteristic visual features of these phonemes were weak or absent in the synthetic faces. Modeling the real vs. synthetic difference showed that increasing the realism of a few phonemes could substantially increase the overall perceptual benefit of synthetic faces.

10.
Trends Hear ; 28: 23312165241239541, 2024.
Article in English | MEDLINE | ID: mdl-38738337

ABSTRACT

Cochlear synaptopathy, a form of cochlear deafferentation, has been demonstrated in a number of animal species, including non-human primates. Both age and noise exposure contribute to synaptopathy in animal models, indicating that it may be a common type of auditory dysfunction in humans. Temporal bone and auditory physiological data suggest that age and occupational/military noise exposure also lead to synaptopathy in humans. The predicted perceptual consequences of synaptopathy include tinnitus, hyperacusis, and difficulty with speech-in-noise perception. However, confirming the perceptual impacts of this form of cochlear deafferentation presents a particular challenge, because synaptopathy can only be confirmed through post-mortem temporal bone analysis and auditory perception is difficult to evaluate in animals. Animal data suggest that deafferentation leads to increased central gain, signs of tinnitus and abnormal loudness perception, and deficits in temporal processing and signal-in-noise detection. If equivalent changes occur in humans following deafferentation, this would be expected to increase the likelihood of developing tinnitus, hyperacusis, and difficulty with speech-in-noise perception. Physiological data from humans are consistent with the hypothesis that deafferentation is associated with increased central gain and a greater likelihood of tinnitus perception, while human data on the relationship between deafferentation and hyperacusis are extremely limited. Many human studies have investigated the relationship between physiological correlates of deafferentation and difficulty with speech-in-noise perception, with mixed findings. A non-linear relationship between deafferentation and speech perception may have contributed to the mixed results. When differences in sample characteristics and study measurements are considered, the findings may be more consistent.


Subject(s)
Cochlea , Speech Perception , Tinnitus , Humans , Cochlea/physiopathology , Tinnitus/physiopathology , Tinnitus/diagnosis , Animals , Speech Perception/physiology , Hyperacusis/physiopathology , Noise/adverse effects , Auditory Perception/physiology , Synapses/physiology , Hearing Loss, Noise-Induced/physiopathology , Hearing Loss, Noise-Induced/diagnosis , Loudness Perception
11.
Cogn Neurodyn ; 18(2): 371-382, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38699619

ABSTRACT

Comprehending speech in background noise is of great importance in human life. Over the past decades, a large body of psychological, cognitive, and neuroscientific research has explored the neurocognitive mechanisms of speech-in-noise comprehension. However, limited by the low ecological validity of the speech stimuli and experimental paradigms, as well as inadequate attention to higher-order linguistic and extralinguistic processes, much remains unknown about how the brain processes noisy speech in real-life scenarios. A recently emerging approach, the second-person neuroscience approach, provides a novel conceptual framework. It measures both the speaker's and the listener's neural activities and estimates speaker-listener neural coupling, treating the speaker's production-related neural activity as a standardized reference. The second-person approach not only promotes the use of naturalistic speech but also allows free communication between speaker and listener, as in a close-to-life context. In this review, we first briefly review previous findings on how the brain processes speech in noise; we then introduce the principles and advantages of the second-person neuroscience approach and discuss its implications for unraveling the linguistic and extralinguistic processes during speech-in-noise comprehension; finally, we conclude by raising some critical issues and calling for more research interest in the second-person approach, which would further extend present knowledge of how people comprehend speech in noise.
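One simple way speaker-listener neural coupling is often operationalized is by correlating the listener's neural time series against the speaker's production-related time series over a range of temporal lags. The sketch below is a generic illustration with synthetic signals and an assumed sampling rate, not the specific pipeline of the reviewed studies.

```python
# Generic speaker-listener coupling: Pearson r between the speaker's
# signal and the listener's signal at increasing listener lags.
import numpy as np

def neural_coupling(speaker, listener, fs, max_lag_s=6.0):
    """Return (lags in s, Pearson r) for listener lagging the speaker."""
    max_lag = int(max_lag_s * fs)
    lags, rs = [], []
    for lag in range(max_lag + 1):
        s = speaker[: len(speaker) - lag] if lag else speaker
        l = listener[lag:]
        rs.append(np.corrcoef(s, l)[0, 1])
        lags.append(lag / fs)
    return np.array(lags), np.array(rs)

fs = 10  # assumed 10 Hz sampling (e.g., fNIRS-like)
rng = np.random.default_rng(0)
speaker = rng.standard_normal(600)
listener = np.roll(speaker, 20) + rng.standard_normal(600)  # ~2 s lag
lags, rs = neural_coupling(speaker, listener, fs)
print(f"peak coupling r = {rs.max():.2f} at lag {lags[rs.argmax()]:.1f} s")
```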

12.
J Audiol Otol ; 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38685832

ABSTRACT

Background and Objectives: This study aimed to develop and validate a modified version of the Speech in Noise Sentence Test in Kannada appropriate for testing the speech comprehension ability of children aged 8-12 years. Subjects and Methods: A total of 120 sentences were chosen from 200 familiar sentences and split into four lists. Continuous discourse was used as competition (distractor). Using MATLAB, the target stimulus was presented at 0° azimuth while the distractor's location varied (+90° and -90° azimuth). The test was programmed to dynamically adjust the signal-to-noise ratio (SNR) based on participants' responses. After initial validation, a pilot study was conducted with 60 typically hearing children aged 8 to 12 years. Results: SNR50 scores improved significantly when the distractor and target sentences were spatially separated, across all groups. Age had a significant influence on the spatial separation scores. Test-retest reliability was excellent. Conclusions: The developed stimuli effectively measured the benefit of spatial separation, and the normative and psychometric analyses demonstrated reliable outcomes.
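A minimal sketch of the adaptive logic such a test programs: a one-up/one-down staircase that lowers the SNR after a correct response and raises it after an error, converging on the 50% point (SNR50/SRT50). The step size, trial count, and averaging rule here are assumptions, not the test's published parameters.

```python
# One-up/one-down adaptive SNR staircase converging on the 50% point.
def run_staircase(respond, start_snr=4.0, step=2.0, n_trials=20):
    snr, track = start_snr, []
    for _ in range(n_trials):
        correct = respond(snr)        # True if the sentence was repeated correctly
        track.append(snr)
        snr += -step if correct else step
    # Common practice: average SNR over the final trials/reversals.
    return sum(track[-8:]) / 8

# Hypothetical listener who succeeds whenever SNR exceeds -2 dB:
srt50 = run_staircase(lambda snr: snr > -2)
print(f"estimated SNR50 ≈ {srt50:.1f} dB SNR")
```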

13.
Trends Hear ; 28: 23312165241245240, 2024.
Article in English | MEDLINE | ID: mdl-38613337

ABSTRACT

Listening to speech in noise can require substantial mental effort, even among younger normal-hearing adults. The task-evoked pupil response (TEPR) has been shown to track the increased effort exerted to recognize words or sentences in increasing noise. However, few studies have examined the trajectory of listening effort across longer, more natural, stretches of speech, or the extent to which expectations about upcoming listening difficulty modulate the TEPR. Seventeen younger normal-hearing adults listened to 60-s-long audiobook passages, repeated three times in a row, at two different signal-to-noise ratios (SNRs) while pupil size was recorded. There was a significant interaction between SNR, repetition, and baseline pupil size on sustained listening effort. At lower baseline pupil sizes, potentially reflecting lower attention mobilization, TEPRs were more sustained in the harder SNR condition, particularly when attention mobilization remained low by the third presentation. At intermediate baseline pupil sizes, differences between conditions were largely absent, suggesting these listeners had optimally mobilized their attention for both SNRs. Lastly, at higher baseline pupil sizes, potentially reflecting overmobilization of attention, the effect of SNR was initially reversed for the second and third presentations: participants initially appeared to disengage in the harder SNR condition, resulting in reduced TEPRs that recovered in the second half of the story. Together, these findings suggest that the unfolding of listening effort over time depends critically on the extent to which individuals have successfully mobilized their attention in anticipation of difficult listening conditions.


Subject(s)
Listening Effort , Pupil , Adult , Humans , Signal-To-Noise Ratio , Speech
14.
bioRxiv ; 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38617284

ABSTRACT

Our perceptual system bins elements of the speech signal into categories to make speech perception manageable. Here, we aimed to test whether hearing speech in categories (as opposed to a continuous/gradient fashion) affords yet another benefit to speech recognition: parsing noisy speech at the "cocktail party." We measured speech recognition in a simulated 3D cocktail party environment. We manipulated task difficulty by varying the number of additional maskers presented at other spatial locations in the horizontal soundfield (1-4 talkers) and via forward vs. time-reversed maskers, promoting more and less informational masking (IM), respectively. In separate tasks, we measured isolated phoneme categorization using two-alternative forced choice (2AFC) and visual analog scaling (VAS) tasks designed to promote more and less categorical hearing, respectively, to test putative links between categorization and real-world speech-in-noise skills. We first show that listeners can only monitor up to ~3 talkers despite up to five being present in the soundscape, and that streaming is not related to extended high-frequency hearing thresholds (though QuickSIN scores are). We then confirm that speech streaming accuracy and speed decline with additional competing talkers and amidst forward compared to reversed maskers with added IM. Dividing listeners into "discrete" vs. "continuous" categorizers based on their VAS labeling (i.e., whether responses were binary or continuous judgments), we then show that the degree of IM experienced at the cocktail party is predicted by the degree of categoricity in phoneme labeling; more discrete listeners are less susceptible to IM than their gradient-responding peers. Our results establish a link between speech categorization skills and cocktail party processing, with a categorical (rather than gradient) listening strategy benefiting degraded speech perception. These findings imply that the figure-ground deficits common in many disorders might arise through a surprisingly simple mechanism: a failure to properly bin sounds into categories.

15.
Int J Pediatr Otorhinolaryngol ; 180: 111928, 2024 May.
Article in English | MEDLINE | ID: mdl-38593717

ABSTRACT

OBJECTIVES: Communicating in noisy settings can be difficult due to interference and environmental noise, which can impair intelligibility both for those with hearing impairments and for those with normal hearing thresholds. Speech intelligibility is commonly assessed in audiology through speech audiometry in quiet environments. Nevertheless, such testing may not effectively capture hearing challenges in noisy environments, as total silence is rare in daily activities. A recently patented method, the SRT50 FAST, has been developed for conducting speech audiometry in noise. It accelerates and simplifies free-field speech audiometry with competing noise. This study aims to establish normative scores and standardize the SRT50 FAST method as a test for evaluating speech perception in noise in pediatric patients. METHODS: The study included 30 participants with normal hearing, 11 females and 19 males, aged 6 to 11 years. A series of speech audiometry tests was conducted to determine the 50% speech reception threshold (SRT50) in competing conditions, testing both the fast mode under study (SRT50 FAST) and the traditional method (SRT50 CLASSIC). The SRT50, the signal-to-noise ratio (SNR) at which 50% speech recognition occurs, was determined with both methods. RESULTS: The mean SRT50 FAST test score was -2.69 (SD = 3.15). The dataset exhibited a normal distribution, with values ranging from 3.60 to -8.60. Since the scores are expressed as SRTs, higher scores indicate poorer performance. We established 3.60 as the upper limit of the normal range; patients with scores above this threshold are therefore considered to have abnormal results. CONCLUSIONS: This study established normative data for the evaluation of free-field speech-in-noise recognition using the SRT50 FAST method in the pediatric population. The method quickly and accurately determines the signal-to-noise ratio needed to achieve 50% recognition scores with bisyllabic words. The ultimate objective is to employ this test to identify the optimal configuration of hearing rehabilitation devices, particularly for pediatric patients with hearing aids and/or cochlear implants. It can also be used to assess pediatric patients with unilateral hearing loss.
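The reported upper limit of 3.60 is consistent with a mean-plus-two-standard-deviations criterion applied to the normative scores, as this quick check shows (the criterion itself is an inference from the reported numbers, not stated in the abstract):

```python
# Check: upper limit of normal ≈ mean + 2 * SD of the SRT50 FAST scores.
mean, sd = -2.69, 3.15
upper_limit = mean + 2 * sd
print(f"mean + 2 SD = {upper_limit:.2f} dB SNR")  # -> 3.61, matching ~3.60
```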


Subject(s)
Noise , Speech Perception , Humans , Male , Female , Child , Speech Perception/physiology , Reference Values , Speech Reception Threshold Test , Auditory Threshold/physiology , Audiometry, Speech/methods , Signal-To-Noise Ratio
16.
Audiol Res ; 14(2): 342-358, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38666901

ABSTRACT

The aim of this study was to examine the relationship between an American English Digits in Noise (DIN) test and commonly used audiological measures, in order to evaluate the DIN test's ability to detect hearing loss and validate hearing aid fitting. QuickSIN and DIN tests were completed by participants with untreated hearing loss (n = 46), prescription hearing aids (n = 15), and over-the-counter (OTC) hearing aids (n = 12). Performance on the QuickSIN showed moderate positive correlations with the DIN for untreated hearing loss participants and prescription hearing aid users, but not for OTC hearing aid users. For untreated hearing loss participants, both QuickSIN and DIN tests showed moderate to strong positive correlations with high-frequency pure-tone averages. In OTC users, DIN scores did not change significantly over a 6-month period and were better when testing was conducted remotely rather than in person. Our results suggest that the DIN test may be a feasible monitoring option for individuals with hearing loss and those fitted with hearing aids. However, given the small sample size of this pilot study, future research is needed to examine the DIN test's utility for fitting and validating OTC hearing aids.

18.
Trends Hear ; 28: 23312165231222098, 2024.
Article in English | MEDLINE | ID: mdl-38549287

ABSTRACT

This study measured electroencephalographic activity in the alpha band, often associated with task difficulty, to physiologically validate self-reported effort ratings from older hearing-impaired listeners performing the Repeat-Recall Test (RRT)-an integrative multipart assessment of speech-in-noise performance, context use, and auditory working memory. Following a single-blind within-subjects design, 16 older listeners (mean age = 71 years, SD = 13, 9 female) with a moderate-to-severe degree of bilateral sensorineural hearing loss performed the RRT while wearing hearing aids at four fixed signal-to-noise ratios (SNRs) of -5, 0, 5, and 10 dB. Performance and subjective ratings of listening effort were assessed for complementary versions of the RRT materials with high/low availability of semantic context. Listeners were also tested with a version of the RRT that omitted the memory (i.e., recall) component. As expected, results showed alpha power to decrease significantly with increasing SNR from 0 through 10 dB. When tested with high context sentences, alpha was significantly higher in conditions where listeners had to recall the sentence materials compared to conditions where the recall requirement was omitted. When tested with low context sentences, alpha power was relatively high irrespective of the memory component. Within-subjects, alpha power was related to listening effort ratings collected across the different RRT conditions. Overall, these results suggest that the multipart demands of the RRT modulate both neural and behavioral measures of listening effort in directions consistent with the expected/designed difficulty of the RRT conditions.
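For reference, alpha power of the kind analyzed here is typically computed as the mean spectral power in roughly the 8-12 Hz band; a generic sketch with synthetic data follows (the sampling rate, window length, and band edges are common defaults assumed here, not this study's parameters).

```python
# Generic alpha-band (8-12 Hz) power from one EEG channel epoch via Welch.
import numpy as np
from scipy.signal import welch

def alpha_power(eeg, fs, band=(8.0, 12.0)):
    """Mean power spectral density within the alpha band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))  # 2 s windows
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

fs = 250  # assumed sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)  # 10 Hz + noise
print(f"alpha power: {alpha_power(eeg, fs):.3f}")
```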


Subject(s)
Hearing Aids , Hearing Loss, Sensorineural , Speech Perception , Aged , Female , Humans , Hearing Loss, Sensorineural/therapy , Hearing Loss, Sensorineural/rehabilitation , Noise/adverse effects , Single-Blind Method , Male , Middle Aged , Aged, 80 and over
19.
Indian J Otolaryngol Head Neck Surg ; 76(1): 344-350, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38440608

ABSTRACT

Hyperacusis is the perception of certain everyday sounds as too loud or painful. Past research suggests that some individuals with Sensory Processing Disorder (SPD) may also have comorbid hyperacusis. The aim of this preliminary study was to explore whether hyperacusis symptoms in children with SPD change following speech-in-noise training (SPINT). This was a retrospective cross-sectional study. Data were included for 28 children with SPD and sound intolerance (12 of 28 were female; mean age 8.7 ± 1.9 years). Patients were assessed before and after SPINT using the Persian Buffalo Model Questionnaire-Revised (P-BMQ-R), which measures various behavioural aspects of auditory processing disorder, and the word-in-noise test (WINT). After SPINT, the DEC and TFM (with its Noi and Mem subcategories) subscales, as well as the APD, ΣCAP, and Gen scores of the P-BMQ-R, improved significantly (P < 0.05); however, changes in the Var, INT, and ORG subscales were not statistically significant (P > 0.05). In addition, SPINT led to better WINT performance in both ears (P < 0.05). This preliminary study showed promising results for the effect of SPINT on improving behavioural indicators of APD (as measured via the P-BMQ-R and WINT) and decreasing hyperacusis symptoms (as measured via the Noi subcategory).

20.
Trends Hear ; 28: 23312165241229057, 2024.
Article in English | MEDLINE | ID: mdl-38483979

ABSTRACT

The digits-in-noise (DIN) test is a practical speech audiometry tool for hearing screening in populations of varying ages and hearing status. The test is usually conducted by a human supervisor (e.g., a clinician), who scores the responses spoken by the listener, or online, where software scores the responses entered by the listener. The test presents 24 digit triplets in an adaptive staircase procedure, resulting in a speech reception threshold (SRT). We propose an alternative automated DIN test setup that can evaluate spoken responses without a human supervisor, using the open-source automatic speech recognition toolkit Kaldi-NL. Thirty self-reported normal-hearing Dutch adults (19-64 years) each completed one DIN + Kaldi-NL test. Their spoken responses were recorded and used to evaluate the transcript of responses decoded by Kaldi-NL. Study 1 evaluated Kaldi-NL performance through its word error rate (WER): the summed digit decoding errors in the transcript as a percentage of the total number of digits present in the spoken responses. Average WER across participants was 5.0% (range 0-48%, SD = 8.8%), with an average of three triplets containing decoding errors per participant. Study 2 analyzed the effect of triplets with Kaldi-NL decoding errors on the DIN test output (SRT), using bootstrapping simulations. Previous research indicated 0.70 dB as the typical within-subject SRT variability for normal-hearing adults. Study 2 showed that up to four triplets with decoding errors produce SRT variations within this range, suggesting that our proposed setup could be feasible for clinical applications.
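A minimal sketch of the per-participant error metric described, counting digit decoding errors against the total digits spoken; the triplets below are invented for illustration, and the bootstrapping step the study adds on top is omitted for brevity.

```python
# Digit decoding error rate: mismatched digits between what the listener
# spoke and what the ASR decoded, as a percentage of all spoken digits.
def digit_error_rate(spoken, decoded):
    """spoken/decoded: lists of digit triplets, e.g., ['4', '1', '9']."""
    errors = sum(s != d
                 for trip_s, trip_d in zip(spoken, decoded)
                 for s, d in zip(trip_s, trip_d))
    total = sum(len(t) for t in spoken)
    return 100 * errors / total

spoken = [list("419"), list("305"), list("872")]
decoded = [list("419"), list("375"), list("872")]  # one decoding error
print(f"digit error rate = {digit_error_rate(spoken, decoded):.1f}%")  # 11.1%
```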


Subject(s)
Speech Perception , Adult , Humans , Speech Reception Threshold Test , Audiometry, Speech , Noise , Hearing Tests