Results 1 - 20 of 203
1.
SAGE Open Med ; 12: 20503121241279230, 2024.
Article in English | MEDLINE | ID: mdl-39263638

ABSTRACT

Objectives: This scoping review aims to summarize and synthesize research findings on the disparities between audiometrically diagnosed and aided hearing loss and the individual's own experience of hearing loss. Methods: A systematic search strategy was employed across multiple databases to identify studies published between 1990 and October 2023 focusing on the experiences of hearing problems among individuals with aided hearing loss. The selected studies underwent screening based on predetermined inclusion and exclusion criteria. These criteria required papers featuring a population of adult (18+) individuals with audiometrically measured hearing loss who had undergone technological rehabilitation. Data charting was employed to provide an overview of the studies and was additionally utilized to identify key themes. Narrative analysis was used to identify subthemes within the data set. Results: A total of 11 articles met the inclusion criteria. The analysis identified five themes: "disability experience and discrepancy between measured and self-perceived hearing loss"; "listening effort"; "mental burden/psychological consequences"; "factors that alleviate the consequences of HL"; and "sociodemographic factors." Conclusions: The scoping review shows that, despite the proliferation of technological options, there is a pressing need for a more concentrated effort to identify and scrutinize the supplementary facets of hearing loss that remain inadequately addressed by current hearing technology. This includes subjective experiences associated with hearing loss that may not be effectively treated solely with hearing aids.

2.
Hum Brain Mapp ; 45(13): e70023, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39268584

ABSTRACT

The relationship between speech production and perception is a topic of ongoing debate. Some argue that there is little interaction between the two, while others claim they share representations and processes. One perspective suggests increased recruitment of the speech motor system in demanding listening situations to facilitate perception. However, uncertainties persist regarding the specific regions involved and the listening conditions influencing its engagement. This study used activation likelihood estimation in coordinate-based meta-analyses to investigate the neural overlap between speech production and three speech perception conditions: speech-in-noise, spectrally degraded speech and linguistically complex speech. Neural overlap was observed in the left frontal, insular and temporal regions. Key nodes included the left frontal operculum (FOC), left posterior lateral part of the inferior frontal gyrus (IFG), left planum temporale (PT), and left pre-supplementary motor area (pre-SMA). The left IFG activation was consistently observed during linguistic processing, suggesting sensitivity to the linguistic content of speech. In comparison, the left pre-SMA activation was observed when processing degraded and noisy signals, indicating sensitivity to signal quality. Activation of the left PT and FOC was noted in all conditions, with the posterior FOC area overlapping across conditions. Our meta-analysis reveals context-independent (FOC, PT) and context-dependent (pre-SMA, posterior lateral IFG) regions within the speech motor system during challenging speech perception. These regions could contribute to sensorimotor integration and executive cognitive control for perception and production.


Subject(s)
Speech Perception , Speech , Humans , Speech Perception/physiology , Speech/physiology , Brain Mapping , Likelihood Functions , Motor Cortex/physiology , Cerebral Cortex/physiology , Cerebral Cortex/diagnostic imaging
3.
Trends Hear ; 28: 23312165241276435, 2024.
Article in English | MEDLINE | ID: mdl-39311635

ABSTRACT

In speech audiometry, the speech-recognition threshold (SRT) is usually established by adjusting the signal-to-noise ratio (SNR) until 50% of the words or sentences are repeated correctly. However, these conditions are rarely encountered in everyday situations. Therefore, for a group of 15 young participants with normal hearing and a group of 12 older participants with hearing impairment, speech-recognition scores were determined at SRT and at four higher SNRs using several stationary and fluctuating maskers. Participants' verbal responses were recorded, and participants were asked to self-report their listening effort on a categorical scale (self-reported listening effort, SR-LE). The responses were analyzed using an Automatic Speech Recognizer (ASR) and compared to the results of a human examiner. An intraclass correlation coefficient of r = .993 for the agreement between their corresponding speech-recognition scores was observed. As expected, speech-recognition scores increased with increasing SNR and decreased with increasing SR-LE. However, differences between speech-recognition scores for fluctuating and stationary maskers were observed as a function of SNR, but not as a function of SR-LE. The verbal response time (VRT) and the response speech rate (RSR) of the listeners' responses were measured using an ASR. The participants with hearing impairment showed significantly lower RSRs and higher VRTs compared to the participants with normal hearing. These differences may be attributed to differences in age, hearing, or both. With increasing SR-LE, VRT increased and RSR decreased. The results show the possibility of deriving a behavioral measure, VRT, measured directly from participants' verbal responses during speech audiometry, as a proxy for SR-LE.
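As an illustration of the agreement analysis reported above (an intraclass correlation between ASR-derived and examiner-derived speech-recognition scores), the sketch below computes an ICC with the pingouin package. All data, column names, and values are invented for demonstration and do not reproduce the study's materials.

```python
# Minimal sketch: ICC for agreement between ASR and human scoring of speech-recognition
# trials. The scores below are invented; column names are illustrative only.
import pandas as pd
import pingouin as pg

scores = pd.DataFrame({
    "trial": list(range(10)) * 2,
    "rater": ["ASR"] * 10 + ["examiner"] * 10,
    "score": [80, 60, 90, 70, 50, 100, 40, 80, 60, 90,   # ASR percent correct
              80, 60, 90, 70, 50, 100, 40, 70, 60, 90],  # examiner percent correct
})

icc = pg.intraclass_corr(data=scores, targets="trial", raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```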


Subject(s)
Acoustic Stimulation , Auditory Threshold , Perceptual Masking , Reaction Time , Speech Perception , Humans , Male , Female , Aged , Adult , Middle Aged , Young Adult , Case-Control Studies , Persons With Hearing Impairments/psychology , Persons With Hearing Impairments/rehabilitation , Self Report , Noise/adverse effects , Signal-To-Noise Ratio , Speech Reception Threshold Test , Speech Intelligibility , Hearing Loss/diagnosis , Hearing Loss/physiopathology , Age Factors , Time Factors , Hearing/physiology , Automation , Predictive Value of Tests
4.
Int J Pediatr Otorhinolaryngol ; 184: 112058, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39116502

ABSTRACT

OBJECTIVES: The study's main objective was to compare listening effort (LE) in children with central auditory processing disorder [(C)APD] and typically developing children in quiet and at -2 and -6 dB SNR, and to determine the relationship between LE and auditory processing abilities in children with (C)APD. METHODS: The study included 30 children (15 typically developing children and 15 diagnosed with (C)APD) aged 10-12 years. LE was measured using a dual-tasking paradigm. The primary task required the child to repeat the words. The secondary task required the child to click the mouse based on the image displayed on the laptop's screen. The primary task was performed in quiet, at -2 dB SNR, and at -6 dB SNR. LE was correlated with the dichotic CV test, the duration pattern test (DPT), speech perception in noise - Indian English (SPIN-IE), and the gap detection test (GDT) in children with (C)APD. RESULTS: A mixed ANOVA was performed with LE in the various conditions as the within-subject factor and group as the between-subject factor, for both repetition and reaction time. The study found that LE repetition and reaction time showed significant main effects across conditions and groups. The correlation results revealed a significant relationship between LE reaction time and both dichotic scores and GDT thresholds, but only in the -2 dB SNR and -6 dB SNR conditions. There was no significant correlation between LE and the other auditory processing measures under the different conditions, namely quiet, SPIN-IE, and DPT at -2 dB SNR and -6 dB SNR. CONCLUSION: The study emphasizes the importance of cognitive abilities for adequate listening comprehension in challenging situations. As a result, assessing LE in this population may provide additional information for developing therapeutic activities and assisting the child in overcoming listening difficulties.
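For readers unfamiliar with the analysis named above, the following is a minimal sketch of a mixed ANOVA with listening condition as the within-subject factor and group as the between-subject factor, using the pingouin package. The subjects, group labels, and reaction times are simulated and purely hypothetical.

```python
# Sketch of the mixed ANOVA described above: condition (within-subject) x group
# (between-subject) on a hypothetical reaction-time measure. All data are simulated.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
groups = ["typical"] * 15 + ["(C)APD"] * 15
conditions = ["quiet", "-2 dB SNR", "-6 dB SNR"]

rows = []
for i, grp in enumerate(groups):
    for cond in conditions:
        rt = rng.normal(900 if grp == "(C)APD" else 750, 80)  # hypothetical RT in ms
        rows.append({"subject": f"s{i:02d}", "group": grp, "condition": cond, "rt": rt})
df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=df, dv="rt", within="condition", between="group", subject="subject")
print(aov.round(3))
```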


Subject(s)
Auditory Perceptual Disorders , Humans , Child , Female , Male , Auditory Perceptual Disorders/diagnosis , Speech Perception/physiology , Auditory Perception/physiology , Case-Control Studies , Reaction Time/physiology , Dichotic Listening Tests
5.
Auris Nasus Larynx ; 51(5): 885-891, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39197288

ABSTRACT

OBJECTIVE: People with hearing loss often encounter difficulties in hearing under adverse conditions, such as listening in the presence of noise. Listening effort is an indicator used to assess listening difficulties in daily life. Although many studies on listening effort have been conducted in recent years, there is a notable gap in the exploration of how task load influences listening effort in young adults. This study compared the effects of background noise and memory load on task performance and subjective listening effort in young adults with and without hearing loss. METHODS: The study included a group of 8 adults with hearing loss (mean age: 24.1 ± 6.0 years) and a group of 16 individuals with normal hearing (mean age: 27.9 ± 4.9 years). A number memorizing task was conducted, involving two types of auditory digits (either three or seven digits) presented under multi-talker babble noise conditions of signal-to-noise ratio of -5 dB [SN -5 dB] or SN +5 dB. Participants determined whether the number presented in the encoding interval matched the one presented in the retrieval interval. Subsequently, they were asked to complete a questionnaire using a Visual Analog Scale (VAS) to assess their subjective listening effort. Percentage of correct responses, reaction times, and VAS ratings were compared between adults with and without hearing loss. RESULTS: Our results showed significant differences between the two groups in the percentage of correct responses and the reaction time under the SN -5 dB conditions, regardless of the memory load. Under the SN +5 dB conditions, a significant difference was found only in the percentage of correct responses for seven digits. In the normal hearing group, the percentage of correct responses and VAS ratings tended to decrease as the memory load increased, even under the same noise condition. Conversely, in the hearing loss group, a consistent trend could not be identified in the effects of noise and memory load on the percentage of correct responses and VAS ratings. CONCLUSION: These results suggest that in conditions of high noise load, young adults with hearing loss show a higher tendency for listening effort to be affected by other loads. We confirmed that for some participants with hearing loss, the task exceeded a certain level of difficulty in the SN -5 dB and seven digits condition, leading to a change in their motivation and strategy used. Future research should examine ways to control for participants' motivations.


Subject(s)
Noise , Humans , Male , Adult , Female , Young Adult , Case-Control Studies , Memory/physiology , Reaction Time , Hearing Loss/psychology , Hearing Loss/physiopathology , Auditory Perception/physiology , Speech Perception/physiology , Signal-To-Noise Ratio , Task Performance and Analysis
6.
Trends Hear ; 28: 23312165241265199, 2024.
Article in English | MEDLINE | ID: mdl-39095047

ABSTRACT

Participation in complex listening situations such as group conversations in noisy environments sets high demands on the auditory system and on cognitive processing. Reports from hearing-impaired people indicate that strenuous listening situations occurring throughout the day lead to feelings of fatigue at the end of the day. The aim of the present study was to develop a suitable test sequence to evoke and measure listening effort (LE) and listening-related fatigue (LRF), and to evaluate the influence of hearing aid use on both dimensions in mild to moderately hearing-impaired participants. The chosen approach aims to reconstruct a representative acoustic day (Time Compressed Acoustic Day [TCAD]) by means of an eight-part hearing-test sequence with a total duration of approximately 2½ h. For this purpose, the hearing test sequence combined four different listening tasks with five different acoustic scenarios and was presented to the 20 test subjects using virtual acoustics in an open-field measurement in aided and unaided conditions. Besides subjective ratings of LE and LRF, behavioral measures (response accuracy, reaction times) were collected, and an attention test (d2-R) was performed prior to and after the TCAD. Furthermore, stress hormones were evaluated by taking salivary samples. Subjective ratings of LRF increased throughout the test sequence. This effect was observed to be higher when testing unaided. In three of the eight listening tests, the aided condition led to significantly faster reaction times and higher response accuracies than the unaided condition. In the d2-R test, an interaction in processing speed between time (pre- vs. post-TCAD) and provision (unaided vs. aided) was found, suggesting an influence of hearing aid provision on LRF. A comparison of the averaged subjective ratings at the beginning and end of the TCAD shows a significant increase in LRF for both conditions. At the end of the TCAD, subjective fatigue was significantly lower when wearing hearing aids. The analysis of stress hormones did not reveal significant effects.


Subject(s)
Acoustic Stimulation , Hearing Aids , Noise , Humans , Male , Female , Middle Aged , Aged , Noise/adverse effects , Correction of Hearing Impairment/instrumentation , Correction of Hearing Impairment/methods , Attention , Persons With Hearing Impairments/psychology , Persons With Hearing Impairments/rehabilitation , Adult , Auditory Fatigue , Time Factors , Reaction Time , Virtual Reality , Auditory Perception/physiology , Fatigue , Hearing Loss/psychology , Hearing Loss/rehabilitation , Hearing Loss/physiopathology , Hearing Loss/diagnosis , Speech Perception/physiology , Saliva/metabolism , Saliva/chemistry , Hearing , Auditory Threshold
7.
Curr Biol ; 34(18): 4114-4128.e6, 2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39151432

ABSTRACT

Arousal and motivation interact to profoundly influence behavior. For example, experience tells us that we have some capacity to control our arousal when appropriately motivated, such as staying awake while driving a motor vehicle. However, little is known about how arousal and motivation jointly influence decision computations, including if and how animals, such as rodents, adapt their arousal state to their needs. Here, we developed and show results from an auditory, feature-based, sustained-attention task with intermittently shifting task utility. We use pupil size to estimate arousal across a wide range of states and apply tailored signal-detection theoretic, hazard function, and accumulation-to-bound modeling approaches in a large cohort of mice. We find that pupil-linked arousal and task utility both have major impacts on multiple aspects of task performance. Although substantial arousal fluctuations persist across utility conditions, mice partially stabilize their arousal near an intermediate and optimal level when task utility is high. Behavioral analyses show that multiple elements of behavior improve during high task utility and that arousal influences some, but not all, of them. Specifically, arousal influences the likelihood and timescale of sensory evidence accumulation but not the quantity of evidence accumulated per time step while attending. In sum, the results establish specific decision-computational signatures of arousal, motivation, and their interaction in attention. So doing, we provide an experimental and analysis framework for studying arousal self-regulation in neurotypical brains and in diseases such as attention-deficit/hyperactivity disorder.
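As a simplified illustration of the signal-detection-theoretic framing mentioned above (not the authors' tailored models), the sketch below computes sensitivity (d') and criterion from hit and false-alarm counts; the counts are invented.

```python
# Basic signal-detection computation (d-prime and criterion) from hit/false-alarm
# counts. A simplified, generic illustration; the trial counts are invented.
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d', criterion), with a log-linear correction to avoid rates of 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)

d_prime, criterion = sdt_measures(hits=78, misses=22, false_alarms=15, correct_rejections=85)
print(f"d' = {d_prime:.2f}, criterion = {criterion:.2f}")
```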


Subject(s)
Arousal , Attention , Animals , Arousal/physiology , Attention/physiology , Mice , Male , Motivation , Pupil/physiology , Mice, Inbred C57BL , Female , Decision Making/physiology
8.
Front Neurosci ; 18: 1407775, 2024.
Article in English | MEDLINE | ID: mdl-39108313

ABSTRACT

Introduction: Noise reduction (NR) algorithms have been integrated into modern digital hearing aids to reduce noise annoyance and enhance speech intelligibility. This study aimed to evaluate the influences of a novel hearing aid NR algorithm on individuals with severe-to-profound hearing loss. Methods: Twenty-five participants with severe-to-profound bilateral sensorineural hearing loss underwent three tests (speech intelligibility, listening effort, and subjective sound quality in noise) to investigate the influences of NR. All three tests were performed under three NR strength levels (Off, Moderate, and Strong) for both speech in noise program (SpiN) and speech in loud noise program (SpiLN), comprising six different hearing aid conditions. Results: NR activation significantly reduced listening effort. Subjective sound quality assessments also exhibited benefits of activated NR in terms of noise suppression, listening comfort, satisfaction, and speech clarity. Discussion: Individuals with severe-to-profound hearing loss still experienced advantages from NR technology in both listening effort measure and subjective sound quality assessments. Importantly, these benefits did not adversely affect speech intelligibility.

9.
Trends Hear ; 28: 23312165241273346, 2024.
Article in English | MEDLINE | ID: mdl-39195628

ABSTRACT

There is broad consensus that listening effort is an important outcome for measuring hearing performance. However, there remains debate on the best ways to measure listening effort. This study sought to measure neural correlates of listening effort using functional near-infrared spectroscopy (fNIRS) in experienced adult hearing aid users. The study evaluated impacts of amplification and signal-to-noise ratio (SNR) on cerebral blood oxygenation, with the expectation that easier listening conditions would be associated with less oxygenation in the prefrontal cortex. Thirty experienced adult hearing aid users repeated sentence-final words from low-context Revised Speech Perception in Noise Test sentences. Participants repeated words at a hard SNR (individual SNR-50) or easy SNR (individual SNR-50 + 10 dB), while wearing hearing aids fit to prescriptive targets or without wearing hearing aids. In addition to assessing listening accuracy and subjective listening effort, prefrontal blood oxygenation was measured using fNIRS. As expected, easier listening conditions (i.e., easy SNR, with hearing aids) led to better listening accuracy, lower subjective listening effort, and lower oxygenation across the entire prefrontal cortex compared to harder listening conditions. Listening accuracy and subjective listening effort were also significant predictors of oxygenation.


Subject(s)
Hearing Aids , Spectroscopy, Near-Infrared , Speech Perception , Humans , Male , Female , Speech Perception/physiology , Aged , Middle Aged , Signal-To-Noise Ratio , Acoustic Stimulation/methods , Prefrontal Cortex/physiology , Persons With Hearing Impairments/psychology , Persons With Hearing Impairments/rehabilitation , Noise/adverse effects , Correction of Hearing Impairment/instrumentation , Correction of Hearing Impairment/methods , Adult , Aged, 80 and over , Hearing/physiology , Cerebrovascular Circulation/physiology , Auditory Threshold/physiology , Speech Intelligibility/physiology
10.
Int Arch Otorhinolaryngol ; 28(3): e460-e467, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38974628

ABSTRACT

Introduction Permanent education in health aims to ensure that professionals keep learning in the workplace; in recent years, institutions have turned to technology-mediated education, and new teaching possibilities have been explored. In Brazil, between 2017 and 2021, only six articles and five monographs were published on listening effort. Objective The objective of this study was to develop a freely accessible website with scientific content on listening effort for speech-language therapists and audiologists. Methods The study was carried out in five stages: Analysis, comprising the search for scientific material to prepare the content; Design, in which the writing and layout of the website were carried out; Development, in which the material was adapted for online use; Implementation, in which professionals in the area evaluated the quality of the material after giving free and informed consent; and Review, in which the researcher analyzed the evaluators' responses. Results The five stages of website development were completed, and the website was evaluated by professionals in the area. The mean response to all questions rated the website as "superior." Conclusion The website was validated for online availability.

11.
J Audiol Otol ; 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38973325

ABSTRACT

Background and Objectives: Wireless streaming technology (WT), designed to transmit sounds directly from a mobile phone to hearing aids, was developed to enhance the signal-to-noise ratio. However, the advantages of WT during phone use and the specific demographic that can fully benefit from this technology have not been thoroughly evaluated. We aimed to investigate the benefits of, and identify predictive factors associated with, bilateral wireless streaming among hearing aid users. Subjects and Methods: Eighteen adults with symmetrical, bilateral hearing loss participated in the study. To assess the benefits of wireless streaming during phone use, we measured sentence/word recognition and listening effort in two scenarios: a noisy background with WT turned "OFF" or "ON." Listening effort was evaluated through self-reported measurements. Cognitive function was also assessed using the Montreal Cognitive Assessment (MoCA). Results: Participant mean age was 57.3 years (range 27-70), and the mean MoCA score was 27.0 (range 23-30). Activation of WT yielded a significant improvement in sentence/word recognition and reduced listening effort. The MoCA score showed a significant correlation with the WT benefit (ρ=0.59, p=0.01), suggesting a positive association between cognitive function and the benefits of WT. Conclusions: Bilateral wireless streaming may enhance sentence/word recognition and reduce listening effort during phone use in hearing aid users, and these benefits are potentially linked to cognitive function.
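The reported association between cognition and streaming benefit can be illustrated with a rank correlation of the kind quoted above (ρ = 0.59); the sketch below uses scipy on invented MoCA scores and hypothetical benefit values, not the study's data.

```python
# Spearman rank correlation between MoCA scores and a wireless-streaming benefit
# measure, analogous to the association reported above. All values are invented.
from scipy.stats import spearmanr

moca = [23, 25, 26, 27, 27, 28, 28, 29, 29, 30, 24, 26, 27, 28, 29, 30, 25, 27]
streaming_benefit = [2, 5, 4, 8, 6, 9, 7, 10, 12, 14, 3, 6, 9, 8, 11, 13, 4, 10]  # % points

rho, p = spearmanr(moca, streaming_benefit)
print(f"rho = {rho:.2f}, p = {p:.3f}")
```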

12.
Brain Behav ; 14(6): e3571, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38841736

ABSTRACT

OBJECTIVE: This study aims to control for all hearing thresholds, including extended high frequencies (EHFs), present stimuli of varying difficulty levels, and measure electroencephalography (EEG) and pupillometry responses to determine whether listening difficulty in tinnitus patients is effort- or fatigue-related. METHODS: Twenty-one chronic tinnitus patients and 26 matched healthy controls with normal pure-tone averages and symmetrical hearing thresholds were included. Subjects were evaluated with 0.125-20 kHz pure-tone audiometry, the Montreal Cognitive Assessment (MoCA), the Tinnitus Handicap Inventory (THI), EEG, and pupillometry. RESULTS: Pupil dilatation and EEG alpha power during the "encoding" phase of the presented sentence were lower in tinnitus patients in all listening conditions (p < .05). In addition, there was no statistically significant relationship between the EEG and pupillometry components and the THI or MoCA in any listening condition (p > .05). CONCLUSION: EEG and pupillometry results under various listening conditions indicate potential listening effort in tinnitus patients even when all frequencies, including EHFs, are controlled. We also suggest that pupillometry should be interpreted with caution in conditions related to the autonomic nervous system, such as tinnitus.
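EEG alpha power of the kind analyzed above is commonly estimated by integrating the 8-12 Hz band of a power spectral density; the sketch below does this for a synthetic single-channel signal with scipy. The sampling rate, epoch length, and signal are assumptions, not the study's recording parameters.

```python
# Estimating alpha-band (8-12 Hz) power from one EEG channel via Welch's PSD,
# as a generic illustration of the alpha-power measure discussed above.
# The signal is synthetic; sampling rate and epoch length are assumed values.
import numpy as np
from scipy.signal import welch

fs = 250.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 4.0, 1.0 / fs)              # one 4-s epoch
rng = np.random.default_rng(1)
eeg = 10e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * rng.standard_normal(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
alpha_band = (freqs >= 8) & (freqs <= 12)
alpha_power = np.trapz(psd[alpha_band], freqs[alpha_band])  # integrate PSD over 8-12 Hz
print(f"alpha-band power: {alpha_power:.3e} V^2")
```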


Subject(s)
Electroencephalography , Pupil , Tinnitus , Humans , Tinnitus/physiopathology , Tinnitus/diagnosis , Male , Female , Electroencephalography/methods , Adult , Middle Aged , Pupil/physiology , Audiometry, Pure-Tone , Auditory Perception/physiology , Auditory Threshold/physiology
13.
Mem Cognit ; 2024 May 17.
Article in English | MEDLINE | ID: mdl-38758512

ABSTRACT

When speech is presented in noise, listeners must recruit cognitive resources to resolve the mismatch between the noisy input and representations in memory. A consequence of this effortful listening is impaired memory for content presented earlier. In the first study on effortful listening, Rabbitt, The Quarterly Journal of Experimental Psychology, 20, 241-248 (1968; Experiment 2) found that recall for a list of digits was poorer when subsequent digits were presented with masking noise than without. Experiment 3 of that study extended this effect to more naturalistic, passage-length materials. Although the findings of Rabbitt's Experiment 2 have been replicated multiple times, no work has assessed the robustness of Experiment 3. We conducted a replication attempt of Rabbitt's Experiment 3 at three signal-to-noise ratios (SNRs). Results at one of the SNRs (Experiment 1a of the current study) were in the opposite direction from what Rabbitt, The Quarterly Journal of Experimental Psychology, 20, 241-248, (1968) reported - that is, speech was recalled more accurately when it was followed by speech presented in noise rather than in the clear - and results at the other two SNRs showed no effect of noise (Experiments 1b and 1c). In addition, reanalysis of a replication of Rabbitt's seminal finding in his second experiment showed that the effect of effortful listening on previously presented information is transient. Thus, effortful listening caused by noise appears to only impair memory for information presented immediately before the noise, which may account for our finding that noise in the second-half of a long passage did not impair recall of information presented in the first half of the passage.

14.
Hear Res ; 448: 109031, 2024 07.
Article in English | MEDLINE | ID: mdl-38761554

ABSTRACT

In recent studies, psychophysiological measures have been used as markers of listening effort, but there is limited research on the effect of hearing loss on such measures. The aim of the current study was to investigate the effect of hearing acuity on physiological responses and subjective measures acquired during different levels of listening demand, and to investigate the relationship between these measures. A total of 125 participants (37 males and 88 females, age range 37-72 years, pure-tone average hearing thresholds at the best ear between -5.0 and 68.8 dB HL, and asymmetry between ears between 0.0 and 87.5 dB) completed a listening task. A speech reception threshold (SRT) test was used with target sentences spoken by a female voice masked by male speech. Listening demand was manipulated using three levels of intelligibility: 20% correct speech recognition, 50%, and 80% (IL20%, IL50%, and IL80%, respectively). During the task, peak pupil dilation (PPD), heart rate (HR), pre-ejection period (PEP), respiratory sinus arrhythmia (RSA), and skin conductance level (SCL) were measured. For each condition, subjective ratings of effort, performance, difficulty, and tendency to give up were also collected. Linear mixed effects models tested the effect of intelligibility level, hearing acuity, hearing asymmetry, and tinnitus complaints on the physiological reactivity (compared to baseline) and the subjective measures. PPD and PEP reactivity showed a non-monotonic relationship with intelligibility level, but no such effects were found for HR, RSA, or SCL reactivity. Participants with worse hearing acuity had lower PPD at all intelligibility levels and showed lower PEP baseline levels. Additionally, PPD and SCL reactivity were lower for participants who reported tinnitus complaints. For IL80%, but not IL50% or IL20%, participants with worse hearing acuity rated their listening effort as relatively high compared to participants with better hearing. The reactivities of the different physiological measures were uncorrelated or only weakly correlated with each other. Together, the results suggest that hearing acuity may be associated with altered sympathetic nervous system (re)activity. Research using psychophysiological measures as markers of listening effort to study the effect of hearing acuity is best served by the use of PPD and PEP.
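A minimal sketch of a linear mixed-effects model of the kind described above (physiological reactivity predicted by intelligibility level and hearing acuity, with a random intercept per participant) is shown below using statsmodels. The data, variable names, and formula are invented for illustration and do not reflect the study's actual model specification.

```python
# Sketch of a linear mixed-effects model: pupil reactivity ~ intelligibility * hearing
# acuity, with a random intercept per participant. Data and variable names are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for s in range(40):
    pta = rng.uniform(-5, 70)                        # hypothetical pure-tone average (dB HL)
    subj_offset = rng.normal(0, 0.05)                # per-participant random intercept
    for level in ["IL20", "IL50", "IL80"]:
        ppd = 0.3 - 0.002 * pta + subj_offset + rng.normal(0, 0.05)
        rows.append({"subject": f"s{s}", "intelligibility": level, "pta": pta, "ppd": ppd})
df = pd.DataFrame(rows)

model = smf.mixedlm("ppd ~ intelligibility * pta", data=df, groups=df["subject"])
print(model.fit().summary())
```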


Subject(s)
Auditory Threshold , Hearing , Heart Rate , Speech Intelligibility , Speech Perception , Speech Reception Threshold Test , Humans , Male , Female , Middle Aged , Adult , Aged , Audiometry, Pure-Tone , Acoustic Stimulation , Perceptual Masking , Galvanic Skin Response , Pupil/physiology , Persons With Hearing Impairments/psychology
15.
Trends Hear ; 28: 23312165241246597, 2024.
Article in English | MEDLINE | ID: mdl-38629486

ABSTRACT

Hearing aids and other hearing devices should provide the user with a benefit, for example, by compensating for the effects of a hearing loss or cancelling undesired sounds. However, wearing hearing devices can also have negative effects on perception, previously demonstrated mostly for spatial hearing, sound quality and the perception of one's own voice. When hearing devices are set to transparency, that is, provide no gain and resemble open-ear listening as well as possible, these side effects can be studied in isolation. In the present work, we conducted a series of experiments concerned with the effect of transparent hearing devices on speech perception in a collocated speech-in-noise task. In such a situation, listening through a hearing device is not expected to have any negative effect, since both speech and noise undergo identical processing, such that the signal-to-noise ratio at the ear is not altered and spatial effects are irrelevant. However, we found a consistent hearing device disadvantage for speech intelligibility and similar trends for rated listening effort. Several hypotheses for the possible origin of this disadvantage were tested by including several different devices, gain settings and stimulus levels. While effects of self-noise and nonlinear distortions were ruled out, the exact reason for the hearing device disadvantage on speech perception is still unclear. However, a significant relation to auditory model predictions demonstrates that the speech intelligibility disadvantage is related to sound quality, and is most probably caused by insufficient equalization, artifacts of frequency-dependent signal processing, and processing delays.


Subject(s)
Hearing Aids , Hearing Loss , Speech Perception , Humans , Hearing , Noise/adverse effects
16.
Trends Hear ; 28: 23312165241232551, 2024.
Article in English | MEDLINE | ID: mdl-38549351

ABSTRACT

In daily life, both acoustic factors and social context can affect listening effort investment. In laboratory settings, information about listening effort has been deduced from pupil and cardiovascular responses independently. The extent to which these measures can jointly predict listening-related factors is unknown. Here we combined pupil and cardiovascular features to predict acoustic and contextual aspects of speech perception. Data were collected from 29 adults (mean = 64.6 years, SD = 9.2) with hearing loss. Participants performed a speech perception task at two individualized signal-to-noise ratios (corresponding to 50% and 80% of sentences correct) and in two social contexts (the presence and absence of two observers). Seven features were extracted per trial: baseline pupil size, peak pupil dilation, mean pupil dilation, interbeat interval, blood volume pulse amplitude, pre-ejection period and pulse arrival time. These features were used to train k-nearest neighbor classifiers to predict task demand, social context and sentence accuracy. The k-fold cross validation on the group-level data revealed above-chance classification accuracies: task demand, 64.4%; social context, 78.3%; and sentence accuracy, 55.1%. However, classification accuracies diminished when the classifiers were trained and tested on data from different participants. Individually trained classifiers (one per participant) performed better than group-level classifiers: 71.7% (SD = 10.2) for task demand, 88.0% (SD = 7.5) for social context, and 60.0% (SD = 13.1) for sentence accuracy. We demonstrated that classifiers trained on group-level physiological data to predict aspects of speech perception generalized poorly to novel participants. Individually calibrated classifiers hold more promise for future applications.
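The classification approach described above can be sketched as a k-nearest neighbor classifier on the seven per-trial features, evaluated with k-fold cross-validation. The feature matrix below is random noise, the standardization step is an added assumption, and the labels are placeholders; only the pipeline shape is illustrative.

```python
# Sketch of the described pipeline: a k-nearest neighbor classifier on seven
# pupil/cardiovascular features per trial, scored with 5-fold cross-validation.
# Features and labels are random placeholders; scaling is an assumed preprocessing step.
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 7))   # 200 trials x 7 features (e.g., PPD, interbeat interval, ...)
y = rng.integers(0, 2, 200)         # e.g., task demand label: 50% vs. 80% correct SNR

clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
print(f"mean cross-validated accuracy: {scores.mean():.1%}")
```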


Subject(s)
Pupil , Speech Perception , Humans , Pupil/physiology , Speech Intelligibility/physiology , Speech Perception/physiology , Middle Aged , Aged
17.
Trends Hear ; 28: 23312165231222098, 2024.
Article in English | MEDLINE | ID: mdl-38549287

ABSTRACT

This study measured electroencephalographic activity in the alpha band, often associated with task difficulty, to physiologically validate self-reported effort ratings from older hearing-impaired listeners performing the Repeat-Recall Test (RRT)-an integrative multipart assessment of speech-in-noise performance, context use, and auditory working memory. Following a single-blind within-subjects design, 16 older listeners (mean age = 71 years, SD = 13, 9 female) with a moderate-to-severe degree of bilateral sensorineural hearing loss performed the RRT while wearing hearing aids at four fixed signal-to-noise ratios (SNRs) of -5, 0, 5, and 10 dB. Performance and subjective ratings of listening effort were assessed for complementary versions of the RRT materials with high/low availability of semantic context. Listeners were also tested with a version of the RRT that omitted the memory (i.e., recall) component. As expected, results showed alpha power to decrease significantly with increasing SNR from 0 through 10 dB. When tested with high context sentences, alpha was significantly higher in conditions where listeners had to recall the sentence materials compared to conditions where the recall requirement was omitted. When tested with low context sentences, alpha power was relatively high irrespective of the memory component. Within-subjects, alpha power was related to listening effort ratings collected across the different RRT conditions. Overall, these results suggest that the multipart demands of the RRT modulate both neural and behavioral measures of listening effort in directions consistent with the expected/designed difficulty of the RRT conditions.


Subject(s)
Hearing Aids , Hearing Loss, Sensorineural , Speech Perception , Aged , Female , Humans , Hearing Loss, Sensorineural/therapy , Hearing Loss, Sensorineural/rehabilitation , Noise/adverse effects , Single-Blind Method , Male , Middle Aged , Aged, 80 and over
18.
Auris Nasus Larynx ; 51(3): 492-500, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38522352

ABSTRACT

OBJECTIVES: This study investigated the effects of listening effort (LE) on balance in patients with compensated vestibular deficits compared to healthy peers. METHODS: The subjects included two main groups: a control group of 15 healthy subjects and a study group of 19 patients with compensated vestibular pathology. The computerized dynamic posturography test (CDP) was conducted without the speech-in-noise task as a baseline, then the participant was subjected to a dual task in which the auditory task (speech-in-noise sentences) was given as the primary task, and the balance function test was the secondary task. RESULTS: WITHIN-GROUP ANALYSIS: The study group showed statistically significantly worse values of all body balance parameters under dual-task than the baseline in all conditions. These differences were much higher under the compliant platform conditions. However, these findings were not statistically significant in the control group. BETWEEN-GROUP ANALYSIS: The study group showed a statistically significant decline in body balance reactions compared to the control group under dual-task with increased listening effort and the compliant platform. Study subgroup analysis revealed statistically significant differences between patients with unilateral vestibular loss (UVL) and those with bilateral vestibular loss (BVL) in the unstable platform condition. CONCLUSION: Our study regarding implementing a dual-tasking paradigm as a measure of LE during the evaluation of chronic vestibular patients with CDP demonstrated how dual-tasking with increased LE affects postural stability. Because of this, patients will probably be more prone to tripping and falling in multitasking situations, as found in real-world settings. This fact should be taken into consideration while testing patients with chronic vertigo and compensated states at VNG. A dual-task paradigm helps uncover the unrevealed pathology.


Subject(s)
Postural Balance , Vestibular Diseases , Humans , Postural Balance/physiology , Male , Female , Adult , Middle Aged , Vestibular Diseases/physiopathology , Case-Control Studies , Vestibular Function Tests , Speech Perception/physiology , Aged , Bilateral Vestibulopathy/physiopathology
19.
Logoped Phoniatr Vocol ; : 1-8, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38440900

ABSTRACT

Understanding the impact of listening effort (LE) and fatigue has become increasingly crucial in optimizing the learning experience with the growing prevalence of online classrooms as a mode of instruction. The purpose of this study was to investigate the LE, fatigue, and voice quality experienced by students during online and face-to-face class sessions. A total of 110 participants with an average age of 20.76 (range 18-28) comprising first year undergraduate students in Speech and Language Therapy and Audiology programs in Turkey, rated their LE during the 2022-2023 spring semester using the Listening Effort Screening Questionnaire (LESQ) and assessed their fatigue with the Multidimensional Fatigue Inventory (MFI-20). Voice quality of lecturers was assessed using smoothed cepstral peak prominence (CPPS) measurements. Data were collected from both online and face-to-face sessions. The results revealed that participants reported increased LE and fatigue during online sessions compared to face-to-face sessions and the differences were statistically significant. Correlation analysis showed significant relationships (p < 0.05) between audio-video streaming quality and LE-related items in the LESQ, as well as MFI sub-scales and total scores. The findings revealed a relationship between an increased preference for face-to-face classrooms and higher levels of LE and fatigue, emphasizing the significance of these factors in shaping the learning experience. CPPS measurements indicated a dysphonic voice quality during online classroom audio streaming. These findings highlight the challenges of online classes in terms of increased LE, fatigue, and voice quality issues. Understanding these factors is crucial for improving online instruction and student experience.

20.
Int Tinnitus J ; 27(2): 97-103, 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38507621

ABSTRACT

OBJECTIVE: To describe the development and validation process of an application that aims to track hearing difficulties in adverse environments (a listening effort application). DESIGN: 71 subjects were evaluated, divided into two groups: 30 subjects aged between 18 and 30, and 41 subjects aged between 40 and 65. All subjects had European Portuguese as their native language, scored above 24 on the Montreal Cognitive Assessment (MoCA), and could read and write. All subjects performed the intelligibility-in-noise test and the listening effort test. The two tests were administered in random order both in the free field in the audiometric booth and via the application. RESULTS: There were no statistically significant differences between the results of the two methods (p>0.05). For the group aged between 40 and 65 years old, the ROC curve showed that intelligibility below 68.5% and fewer than 1.5 correct answers in the listening effort test are the optimal cut-offs for referral to further management. Both tests showed low sensitivity and specificity for individuals between 18 and 30 years old, indicating that the application is inappropriate for this age group. CONCLUSIONS: The application is valid and can contribute to the screening and self-awareness of listening difficulties in middle age, potentially contributing to a future reduction in the prevalence of dementia.
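As an illustration of how an ROC-based cut-off like the 68.5% intelligibility criterion above can be derived, the sketch below uses scikit-learn and Youden's J statistic on invented scores; Youden's J is one common criterion and is not necessarily the method used by the authors.

```python
# Choosing a screening cut-off from an ROC curve with Youden's J statistic.
# Labels and intelligibility scores are invented; this is not the study's data or method.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(4)
needs_referral = np.r_[np.ones(40), np.zeros(40)].astype(int)
intelligibility = np.r_[rng.normal(60, 8, 40), rng.normal(75, 8, 40)]  # % correct in noise

# Lower intelligibility should indicate referral, so use the negated score.
fpr, tpr, thresholds = roc_curve(needs_referral, -intelligibility)
best = np.argmax(tpr - fpr)                      # Youden's J = TPR - FPR
print(f"AUC = {roc_auc_score(needs_referral, -intelligibility):.2f}")
print(f"suggested cut-off: intelligibility <= {-thresholds[best]:.1f} %")
```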


Subject(s)
Audiology , Mobile Applications , Speech Perception , Middle Aged , Humans , Adolescent , Adult , Aged , Young Adult , Listening Effort , Noise/prevention & control