2.
SoftwareX ; 17, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35465173

ABSTRACT

open Master Hearing Aid (openMHA) was developed and provided to the hearing aid research community as an open-source software platform with the aim of supporting sustainable and reproducible research towards improved and new types of assistive hearing systems not limited by proprietary software. The software offers a flexible framework that allows users to conduct hearing aid research using the tools and signal processing plugins provided with the software, as well as to implement their own methods. The openMHA software is independent of specific hardware and supports Linux, macOS and Windows operating systems as well as 32-bit and 64-bit ARM-based architectures such as those used in small portable integrated systems. www.openmha.org.
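For readers unfamiliar with the plugin-chain concept that openMHA implements, the following Python sketch illustrates the general idea of block-based processing through exchangeable plugins. It is purely illustrative and does not use the openMHA API or its configuration language; all function and parameter names are invented for this example.

    # Illustrative sketch (not the openMHA API): a minimal offline plugin chain,
    # mirroring the idea of block-based processing through exchangeable plugins.
    import numpy as np

    def gain_plugin(block, gain_db=-10.0):
        """Apply a broadband gain to one audio block."""
        return block * 10.0 ** (gain_db / 20.0)

    def clip_plugin(block, limit=1.0):
        """Hard-limit the block to avoid overload."""
        return np.clip(block, -limit, limit)

    def process(signal, plugins, blocksize=64):
        """Run the signal block by block through the plugin chain."""
        out = np.copy(signal)
        for start in range(0, len(signal) - blocksize + 1, blocksize):
            block = out[start:start + blocksize]
            for plugin in plugins:
                block = plugin(block)
            out[start:start + blocksize] = block
        return out

    fs = 16000
    noise = 0.1 * np.random.randn(fs)          # 1 s of test noise
    processed = process(noise, [gain_plugin, clip_plugin])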

3.
Trends Hear ; 26: 23312165221078707, 2022.
Article in English | MEDLINE | ID: mdl-35341403

ABSTRACT

When listening to a sound source in everyday situations, typical movement behavior is highly individual and may not result in the listener directly facing the sound source. Behavioral differences can affect the performance of directional algorithms in hearing aids, as was shown in previous work by using head movement trajectories of normal-hearing (NH) listeners in acoustic simulations for noise-suppression performance predictions. However, the movement behavior of hearing-impaired (HI) listeners with or without hearing aids may differ, and hearing-aid users might adapt their self-motion to improve the performance of directional algorithms. This work investigates the influence of hearing impairment on self-motion, and the interaction of hearing aids with self-motion. To this end, the self-motion of three HI participant groups (aided with an adaptive differential microphone, ADM; aided without ADM; and unaided) was measured and compared to previously measured self-motion data from younger and older NH participants. Self-motion was measured in virtual audiovisual environments (VEs) in the laboratory, and the signal-to-noise ratios (SNRs) and SNR improvement of the ADM resulting from the head movements of the participants were estimated using acoustic simulations. Compared to the NH participants, the HI participants performed almost all of the movement with their head and less with their eyes, which led to a 0.3 dB increase in estimated SNR and to differences in the estimated SNR improvement of the ADM. However, the self-motion of the HI participants aided with the ADM was similar to that of the other HI participants, indicating that the ADM did not cause listeners to adapt their self-motion.
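As a rough illustration of how head-movement trajectories can be turned into estimated SNR benefits of a directional microphone, the sketch below models the ADM as an ideal cardioid and evaluates its gain towards assumed target and noise directions along a yaw trajectory. The geometry, the cardioid approximation, and all parameter values are assumptions made for this example, not the acoustic simulation used in the study.

    # Illustrative sketch (assumed geometry, not the study's simulation):
    # estimate the SNR benefit of a cardioid-like first-order differential
    # microphone from a recorded head-yaw trajectory.
    import numpy as np

    def cardioid_gain(angle_rad):
        """Power gain of an ideal cardioid pointing at 0 rad."""
        return (0.5 * (1.0 + np.cos(angle_rad))) ** 2

    def snr_benefit_db(head_yaw_deg, target_az_deg, noise_az_deg):
        """Mean SNR improvement over an omnidirectional microphone,
        given the head orientation over time (all angles in degrees)."""
        yaw = np.radians(np.asarray(head_yaw_deg))
        g_target = cardioid_gain(np.radians(target_az_deg) - yaw)
        g_noise = cardioid_gain(np.radians(noise_az_deg) - yaw)
        # Average target and noise power over the trajectory, then form the ratio.
        return 10.0 * np.log10(g_target.mean() / g_noise.mean())

    # Example: listener mostly faces the talker at 0 deg, noise from 120 deg.
    trajectory = np.random.normal(loc=5.0, scale=10.0, size=1000)
    print(snr_benefit_db(trajectory, target_az_deg=0.0, noise_az_deg=120.0))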


Subject(s)
Hearing Aids , Hearing Loss , Sound Localization , Speech Perception , Hearing Loss/diagnosis , Humans , Noise/adverse effects
4.
Int J Audiol ; 61(4): 311-321, 2022 04.
Article in English | MEDLINE | ID: mdl-34109902

ABSTRACT

OBJECTIVE: The aim was to create and validate an audiovisual version of the German matrix sentence test (MST), which uses the existing audio-only speech material. DESIGN: Videos were recorded and dubbed with the audio of the existing German MST. The current study evaluates the MST in conditions including audio and visual modalities, speech in quiet and noise, and open and closed-set response formats. SAMPLE: One female talker recorded repetitions of the German MST sentences. Twenty-eight young normal-hearing participants completed the evaluation study. RESULTS: The audiovisual benefit in quiet was 7.0 dB in sound pressure level (SPL). In noise, the audiovisual benefit was 4.9 dB in signal-to-noise ratio (SNR). Speechreading scores ranged from 0% to 84% speech reception in visual-only sentences (mean = 50%). Audiovisual speech reception thresholds (SRTs) had a larger standard deviation than audio-only SRTs. Audiovisual SRTs improved successively with the number of lists performed. The final video recordings are openly available. CONCLUSIONS: The video material achieved results similar to those reported in the literature in terms of gross speech intelligibility, despite the inherent asynchronies of dubbing. Due to ceiling effects, adaptive procedures targeting 80% intelligibility should be used. At least one or two training lists should be performed.
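The recommendation of adaptive procedures targeting 80% intelligibility can be illustrated with a simple SNR-tracking loop. The sketch below is a simplified stand-in for the published matrix-test tracking rule; the step size, track length, and SRT estimate are assumptions made for illustration only.

    # Simplified sketch of an adaptive SNR track for a matrix-type sentence test.
    # Not the exact published tracking rule; step size and the 80% target are
    # assumptions for illustration.
    def adaptive_srt(present_sentence, n_sentences=20, target=0.8,
                     start_snr_db=0.0, step_db=2.0):
        """present_sentence(snr_db) must return the proportion of words
        correct (0..1) for one sentence presented at that SNR."""
        snr = start_snr_db
        track = []
        for _ in range(n_sentences):
            correct = present_sentence(snr)
            track.append(snr)
            # Move towards the target intelligibility: easier if the listener
            # scored below the target, harder if above.
            snr += step_db * (target - correct)
        # Estimate the SRT as the mean SNR over the second half of the track.
        half = len(track) // 2
        return sum(track[half:]) / (len(track) - half)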


Subject(s)
Speech Perception , Female , Humans , Noise/adverse effects , Speech Intelligibility , Speech Reception Threshold Test/methods , Video Recording
5.
Ear Hear ; 41 Suppl 1: 48S-55S, 2020.
Article in English | MEDLINE | ID: mdl-33105259

ABSTRACT

The benefit from directional hearing devices predicted in the lab often differs from reported user experience, suggesting that laboratory findings lack ecological validity. This difference may be partly caused by differences in self-motion between the lab and real-life environments. This literature review aims to provide an overview of the methods used to measure and quantify self-motion, the test environments, and the measurement paradigms. Self-motion is the rotation and translation of the head and torso and movement of the eyes. Studies were considered which explicitly assessed or controlled self-motion within the scope of hearing and hearing device research. The methods and outcomes of the reviewed studies are compared and discussed in relation to ecological validity. The reviewed studies demonstrate interactions between hearing device benefit and self-motion, such as a decreased benefit from directional microphones due to a more natural head movement when the test environment and task include realistic complexity. Identified factors associated with these interactions include the presence of audiovisual cues in the environment, interaction with conversation partners, and the nature of the tasks being performed. This review indicates that although some aspects of the interactions between self-motion and hearing device benefit have been shown and many methods for assessment and analysis of self-motion are available, it is still unclear to what extent individual factors affect the ecological validity of the findings. Further research is required to relate lab-based measures of self-motion to the individual's real-life hearing ability.


Subject(s)
Hearing Aids , Speech Perception , Cues , Hearing , Humans , Motion
6.
Ear Hear ; 41 Suppl 1: 5S-19S, 2020.
Article in English | MEDLINE | ID: mdl-33105255

ABSTRACT

Ecological validity is a relatively new concept in hearing science. It has been cited as relevant with increasing frequency in publications over the past 20 years, but without any formal conceptual basis or clear motive. The sixth Eriksholm Workshop was convened to develop a deeper understanding of the concept for the purpose of applying it in hearing research in a consistent and productive manner. Inspired by relevant debate within the field of psychology, and taking into account the World Health Organization's International Classification of Functioning, Disability, and Health framework, the attendees at the workshop reached a consensus on the following definition: "In hearing science, ecological validity refers to the degree to which research findings reflect real-life hearing-related function, activity, or participation." Four broad purposes for striving for greater ecological validity in hearing research were determined: A (Understanding) better understanding the role of hearing in everyday life; B (Development) supporting the development of improved procedures and interventions; C (Assessment) facilitating improved methods for assessing and predicting ability to accomplish real-world tasks; and D (Integration and Individualization) enabling more integrated and individualized care. Discussions considered the effects of variables and phenomena commonly present in hearing-related research on the level of ecological validity of outcomes, supported by examples from a few selected outcome domains and for different types of studies. Illustrated with examples, potential strategies were offered for promoting a high level of ecological validity in a study and for how to evaluate the level of ecological validity of a study. Areas in particular that could benefit from more research to advance ecological validity in hearing science include: (1) understanding the processes of hearing and communication in everyday listening situations, and specifically the factors that make listening difficult in everyday situations; (2) developing new test paradigms that include more than one person (e.g., to encompass the interactive nature of everyday communication) and that are integrative of other factors that interact with hearing in real-life function; (3) integrating new and emerging technologies (e.g., virtual reality) with established test methods; and (4) identifying the key variables and phenomena affecting the level of ecological validity to develop verifiable ways to increase ecological validity and derive a set of benchmarks to strive for.


Subject(s)
Hearing Aids , Hearing , Auditory Perception , Comprehension , Humans , Research Design
7.
Ear Hear ; 41 Suppl 1: 31S-38S, 2020.
Article in English | MEDLINE | ID: mdl-33105257

ABSTRACT

To assess perception with and performance of modern and future hearing devices with advanced adaptive signal processing capabilities, novel evaluation methods are required that go beyond already established methods. These novel methods will simulate to a certain extent the complexity and variability of acoustic conditions and acoustic communication styles in real life. This article discusses the current state and the perspectives of virtual reality technology use in the lab for designing complex audiovisual communication environments for hearing assessment and hearing device design and evaluation. In an effort to increase the ecological validity of lab experiments, that is, to increase the degree to which lab data reflect real-life hearing-related function, and to support the development of improved hearing-related procedures and interventions, this virtual reality lab marks a transition from conventional (audio-only) lab experiments to the field. The first part of the article introduces and discusses the notion of the communication loop as a theoretical basis for understanding the factors that are relevant for acoustic communication in real life. From this, requirements are derived that allow an assessment of the extent to which a virtual reality lab reflects these factors, and which may be used as a proxy for ecological validity. The most important factor of real-life communication identified is a closed communication loop among the actively behaving participants. The second part of the article gives an overview of the current developments towards a virtual reality lab at Oldenburg University that aims at interactive and reproducible testing of subjects with and without hearing devices in challenging communication conditions. The extent to which the virtual reality lab in its current state meets the requirements defined in the first part is discussed, along with its limitations and potential further developments. Finally, data are presented from a qualitative study that compared subject behavior and performance in two audiovisual environments presented in the virtual reality lab-a street and a cafeteria-with the corresponding field environments. The results show similarities and differences in subject behavior and performance between the lab and the field, indicating that the virtual reality lab in its current state marks a step towards more ecological validity in lab-based hearing and hearing device research, but requires further development towards higher levels of ecological validity.


Subject(s)
Hearing Tests , User-Computer Interface , Virtual Reality , Acoustics , Comprehension , Humans , Sound
9.
Trends Hear ; 23: 2331216519872362, 2019.
Article in English | MEDLINE | ID: mdl-32516060

ABSTRACT

Recent achievements in hearing aid development, such as visually guided hearing aids, make it increasingly important to study movement behavior in everyday situations in order to develop test methods and evaluate hearing aid performance. In this work, audiovisual virtual environments (VEs) were designed for communication conditions in a living room, a lecture hall, a cafeteria, a train station, and a street environment. Movement behavior (head movement, gaze direction, and torso rotation) and electroencephalography signals were measured in these VEs in the laboratory for 22 younger normal-hearing participants and 19 older normal-hearing participants. These data establish a reference for future studies that will investigate the movement behavior of hearing-impaired listeners and hearing aid users for comparison. Questionnaires were used to evaluate the subjective experience in the VEs. A test-retest comparison showed that the measured movement behavior is reproducible and that the measures of movement behavior used in this study are reliable. Moreover, evaluation of the questionnaires indicated that the VEs are sufficiently realistic. The participants rated the experienced acoustic realism of the VEs positively, and although the rating of the experienced visual realism was lower, the participants felt to some extent present and involved in the VEs. Analysis of the movement data showed that movement behavior depends on the VE and the age of the subject and is predictable in multitalker conversations and for moving distractors. The VEs and a database of the collected data are publicly available.
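To make the test-retest comparison concrete, the sketch below computes one simple movement measure (the spread of head yaw within a condition) per participant and correlates it across sessions. The data layout and the choice of measure are assumptions for illustration; the study used a broader set of movement measures.

    # Illustrative sketch (assumed data layout): compare a simple head-movement
    # measure between test and retest to check reproducibility.
    import numpy as np

    def yaw_spread_deg(yaw_trace_deg):
        """Spread of head yaw within one condition, as the standard deviation."""
        return float(np.std(yaw_trace_deg))

    def test_retest_r(test_traces, retest_traces):
        """Pearson correlation of per-participant yaw spread across sessions."""
        a = np.array([yaw_spread_deg(t) for t in test_traces])
        b = np.array([yaw_spread_deg(t) for t in retest_traces])
        return float(np.corrcoef(a, b)[0, 1])

    # Example with synthetic traces for 10 participants.
    rng = np.random.default_rng(0)
    test = [rng.normal(0, s, 500) for s in np.linspace(5, 30, 10)]
    retest = [rng.normal(0, s + 1, 500) for s in np.linspace(5, 30, 10)]
    print(test_retest_r(test, retest))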


Subject(s)
Acoustics , Movement/physiology , Speech Perception/physiology , Virtual Reality , Adult , Aged , Electroencephalography , Female , Healthy Volunteers , Hearing Aids , Hearing Loss/diagnosis , Hearing Tests/methods , Humans , Male , Middle Aged , Surveys and Questionnaires , Young Adult
10.
Audiol Res ; 8(2): 215, 2018 Oct 02.
Article in English | MEDLINE | ID: mdl-30581544

ABSTRACT

Hearing loss can negatively influence the spatial hearing abilities of hearing-impaired listeners, not only in static but also in dynamic auditory environments. Therefore, ways of addressing these deficits with advanced hearing aid algorithms need to be investigated. In a previous study based on virtual acoustics and a computer simulation of different bilateral hearing aid fittings, we investigated auditory source movement detectability in older hearing-impaired (OHI) listeners. We found that two directional processing algorithms could substantially improve the detectability of left-right and near-far source movements in the presence of reverberation and multiple interfering sounds. In the current study, we carried out similar measurements with a loudspeaker-based setup and wearable hearing aids. We fitted a group of 15 OHI listeners with bilateral behind-the-ear devices that were programmed to have three different directional processing settings. Apart from source movement detectability, we assessed two other aspects of spatial awareness perception. Using a street scene with up to five environmental sound sources, the participants had to count the number of presented sources or to indicate the movement direction of a single target signal. The data analyses showed a clear influence of the number of concurrent sound sources and the starting position of the moving target signal on the participants' performance, but no influence of the different hearing aid settings. Complementary artificial head recordings showed that the acoustic differences between the three hearing aid settings were rather small. Another explanation for the lack of effects of the tested hearing aid settings could be that the simulated street scenario was not sufficiently sensitive. Possible ways of improving the sensitivity of the laboratory measures while maintaining high ecological validity and complexity are discussed.

11.
Trends Hear ; 22: 2331216518779719, 2018.
Article in English | MEDLINE | ID: mdl-29900799

ABSTRACT

Hearing-impaired listeners are known to have difficulties not only with understanding speech in noise but also with judging source distance and movement, and these deficits are related to perceived handicap. It is possible that the perception of spatially dynamic sounds can be improved with hearing aids (HAs), but so far this has not been investigated. In a previous study, older hearing-impaired listeners showed poorer detectability for virtual left-right (angular) and near-far (radial) source movements due to lateral interfering sounds and reverberation, respectively. In the current study, potential ways of improving these deficits with HAs were explored. Using stimuli very similar to before, detailed acoustic analyses were carried out to examine the influence of different HA algorithms for suppressing noise and reverberation on the acoustic cues previously shown to be associated with source movement detectability. For an algorithm that combined unilateral directional microphones with binaural coherence-based noise reduction and for a bilateral beamformer with binaural cue preservation, movement-induced changes in spectral coloration, signal-to-noise ratio, and direct-to-reverberant energy ratio were greater compared with no HA processing. To evaluate these two algorithms perceptually, aided measurements of angular and radial source movement detectability were performed with 20 older hearing-impaired listeners. The analyses showed that, in the presence of concurrent interfering sounds and reverberation, the bilateral beamformer could restore source movement detectability in both spatial dimensions, whereas the other algorithm only improved detectability in the near-far dimension. Together, these results provide a basis for improving the detectability of spatially dynamic sounds with HAs.
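One of the acoustic quantities analysed here, the direct-to-reverberant energy ratio, can be computed from a room impulse response as in the sketch below. The 2.5 ms window around the direct-sound peak is a common but assumed choice, not necessarily the one used in the study.

    # Illustrative sketch: direct-to-reverberant energy ratio (DRR) from a room
    # impulse response, using an assumed 2.5 ms window around the direct sound.
    import numpy as np

    def drr_db(impulse_response, fs, direct_window_ms=2.5):
        h = np.asarray(impulse_response, dtype=float)
        onset = int(np.argmax(np.abs(h)))                 # direct-sound peak
        win = int(direct_window_ms * 1e-3 * fs)
        direct = np.sum(h[max(0, onset - win):onset + win + 1] ** 2)
        reverberant = np.sum(h[onset + win + 1:] ** 2)
        return 10.0 * np.log10(direct / reverberant)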


Subject(s)
Acoustics , Algorithms , Hearing Aids , Hearing Loss/physiopathology , Movement , Noise/prevention & control , Sound Localization , Aged , Aged, 80 and over , Equipment Design , Female , Humans , Male , Middle Aged , Speech Perception
12.
Int J Audiol ; 57(sup3): S31-S42, 2018 06.
Article in English | MEDLINE | ID: mdl-29373937

ABSTRACT

OBJECTIVE: Binaural cues such as interaural level differences (ILDs) are used to organise auditory perception and to segregate sound sources in complex acoustical environments. In bilaterally fitted hearing aids, dynamic-range compression operating independently at each ear potentially alters these ILDs, thus distorting binaural perception and sound source segregation. DESIGN: A binaurally linked, model-based, fast-acting dynamic compression algorithm designed to approximate the normal-hearing basilar membrane (BM) input-output function in hearing-impaired listeners is proposed. A multi-center evaluation in comparison with an alternative binaural and two bilateral fittings was performed to assess the effect of binaural synchronisation on (a) speech intelligibility and (b) perceived quality in realistic conditions. STUDY SAMPLE: Thirty and 12 hearing-impaired (HI) listeners were individually aided with the algorithms for the two experimental parts, respectively. RESULTS: A small preference for the proposed model-based algorithm was found in the direct quality comparison. However, no benefit of binaural synchronisation on speech intelligibility was found, suggesting a dominant role of the better ear in all experimental conditions. CONCLUSION: The suggested binaural synchronisation of compression algorithms had a limited effect on the tested outcome measures; however, linking could be situationally beneficial for preserving a natural binaural perception of the acoustical environment.
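The core idea of binaural synchronisation, applying the same compressive gain to both ears so that ILDs are preserved, is illustrated by the sketch below. It uses a generic static input-output rule with assumed threshold and ratio, not the model-based algorithm evaluated in the study.

    # Conceptual sketch of binaurally linked compression (not the study's
    # model-based algorithm): both ears receive the gain computed from the
    # louder ear, so the interaural level difference is preserved.

    def compression_gain_db(level_db, threshold_db=50.0, ratio=3.0):
        """Static compressive gain above an assumed threshold."""
        overshoot = max(0.0, level_db - threshold_db)
        return -overshoot * (1.0 - 1.0 / ratio)

    def linked_gains(level_left_db, level_right_db):
        """One gain for both ears, driven by the higher of the two levels."""
        g = compression_gain_db(max(level_left_db, level_right_db))
        return g, g

    def independent_gains(level_left_db, level_right_db):
        """Per-ear gains, which shrink the ILD of lateral sources."""
        return (compression_gain_db(level_left_db),
                compression_gain_db(level_right_db))

    # A source from the left: 70 dB at the left ear, 60 dB at the right ear.
    print(linked_gains(70, 60))       # same gain for both ears, 10 dB ILD kept
    print(independent_gains(70, 60))  # left attenuated more, ILD reduced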


Subject(s)
Correction of Hearing Impairment/instrumentation , Hearing Aids , Hearing Loss, Sensorineural/rehabilitation , Hearing , Models, Theoretical , Persons With Hearing Impairments/rehabilitation , Signal Processing, Computer-Assisted , Speech Perception , Acoustic Stimulation , Aged , Aged, 80 and over , Auditory Threshold , Case-Control Studies , Cues , Equipment Design , Female , Germany , Hearing Loss, Sensorineural/diagnosis , Hearing Loss, Sensorineural/physiopathology , Hearing Loss, Sensorineural/psychology , Humans , Male , Middle Aged , Noise/adverse effects , Perceptual Masking , Persons With Hearing Impairments/psychology , Psychoacoustics , Sound Localization , Speech Intelligibility , Speech Reception Threshold Test
13.
Int J Audiol ; 57(sup3): S112-S117, 2018 06.
Article in English | MEDLINE | ID: mdl-27813439

ABSTRACT

OBJECTIVE: To create virtual acoustic environments (VAEs) with interactive dynamic rendering for applications in audiology. DESIGN: A toolbox for the creation and rendering of dynamic virtual acoustic environments (TASCAR) that allows direct user interaction was developed for application in hearing aid research and audiology. The software architecture and the simulation methods used to produce the VAEs are outlined. Example environments are described and analysed. CONCLUSION: The proposed software provides a tool for the simulation of VAEs; a set of example environments rendered with it was described.
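The kind of time-varying cue such a dynamic rendering has to reproduce can be illustrated with the minimal sketch below, which computes the distance-dependent level and propagation delay of a source moving past a receiver. This is a generic illustration under a free-field 1/r assumption, not a description of the TASCAR implementation.

    # Illustrative sketch (not the TASCAR implementation): level and delay of a
    # point source moving along a trajectory relative to a fixed receiver.
    import numpy as np

    def moving_source_cues(source_positions, receiver_pos, c=343.0):
        """Per-update distance attenuation (dB re 1 m) and propagation delay (s)
        for a point source following the given trajectory."""
        pos = np.asarray(source_positions, dtype=float)
        rec = np.asarray(receiver_pos, dtype=float)
        dist = np.linalg.norm(pos - rec, axis=1)
        level_db = -20.0 * np.log10(np.maximum(dist, 0.1))  # 1/r spreading law
        delay_s = dist / c
        return level_db, delay_s

    # Example: a source passing from (-10, 2) m to (10, 2) m in 100 updates.
    x = np.linspace(-10, 10, 100)
    trajectory = np.column_stack([x, np.full_like(x, 2.0)])
    levels, delays = moving_source_cues(trajectory, receiver_pos=[0.0, 0.0])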


Subject(s)
Acoustics , Auditory Perception , Correction of Hearing Impairment/instrumentation , Environment, Controlled , Hearing Aids , Hearing Loss/rehabilitation , Hearing , Persons With Hearing Impairments/rehabilitation , Virtual Reality , Acoustic Stimulation , Computer Simulation , Equipment Design , Hearing Loss/diagnosis , Hearing Loss/physiopathology , Hearing Loss/psychology , Hearing Tests , Humans , Materials Testing , Models, Theoretical , Noise/adverse effects , Perceptual Masking , Persons With Hearing Impairments/psychology , Psychoacoustics , Software
14.
Int J Audiol ; 57(sup3): S81-S91, 2018 06.
Article in English | MEDLINE | ID: mdl-28395561

ABSTRACT

OBJECTIVE: To investigate the influence of an algorithm designed to enhance or magnify interaural difference cues on speech signals in noisy, spatially complex conditions using both technical and perceptual measurements. To also investigate the combination of interaural magnification (IM), monaural microphone directionality (DIR), and binaural coherence-based noise reduction (BC). DESIGN: Speech-in-noise stimuli were generated using virtual acoustics. A computational model of binaural hearing was used to analyse the spatial effects of IM. Predicted speech quality changes and signal-to-noise-ratio (SNR) improvements were also considered. Additionally, a listening test was carried out to assess speech intelligibility and quality. STUDY SAMPLE: Listeners aged 65-79 years with and without sensorineural hearing loss (N = 10 each). RESULTS: IM increased the horizontal separation of concurrent directional sound sources without introducing any major artefacts. In situations with diffuse noise, however, the interaural difference cues were distorted. Preprocessing the binaural input signals with DIR reduced distortion. IM influenced neither speech intelligibility nor speech quality. CONCLUSIONS: The IM algorithm tested here failed to improve speech perception in noise, probably because of the dispersion and inconsistent magnification of interaural difference cues in complex environments.
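As a conceptual illustration of interaural magnification, the sketch below estimates the broadband ILD of a stereo block and scales it by a fixed factor, splitting the additional level difference between the two ears. The block-wise broadband formulation and the factor of 2 are assumptions; the evaluated algorithm operated on interaural difference cues in a more sophisticated, frequency-dependent way.

    # Conceptual sketch of interaural magnification (not the evaluated
    # algorithm): the broadband ILD of one block is estimated and scaled by a
    # magnification factor, widening the apparent separation of lateral sources.
    import numpy as np

    def magnify_ild(left, right, factor=2.0, eps=1e-12):
        """Scale the interaural level difference of one signal block."""
        p_l = np.mean(np.asarray(left) ** 2) + eps
        p_r = np.mean(np.asarray(right) ** 2) + eps
        ild_db = 10.0 * np.log10(p_l / p_r)
        extra_db = (factor - 1.0) * ild_db          # additional ILD to impose
        g = 10.0 ** (extra_db / 40.0)               # split between the two ears
        return left * g, right / g

    # Example: a lateral source that is about 6 dB stronger at the left ear.
    rng = np.random.default_rng(1)
    sig = rng.standard_normal(1024)
    left_out, right_out = magnify_ild(2.0 * sig, 1.0 * sig, factor=2.0)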


Subject(s)
Acoustics , Correction of Hearing Impairment/instrumentation , Cues , Hearing Aids , Hearing Loss, Sensorineural/rehabilitation , Hearing , Models, Theoretical , Persons With Hearing Impairments/rehabilitation , Speech Perception , Acoustic Stimulation , Aged , Algorithms , Audiometry, Pure-Tone , Audiometry, Speech , Computer Simulation , Equipment Design , Female , Hearing Loss, Sensorineural/diagnosis , Hearing Loss, Sensorineural/physiopathology , Hearing Loss, Sensorineural/psychology , Humans , Male , Noise/adverse effects , Perceptual Masking , Persons With Hearing Impairments/psychology , Psychoacoustics , Signal Processing, Computer-Assisted , Speech Intelligibility
15.
Trends Hear ; 21: 2331216517717152, 2017.
Article in English | MEDLINE | ID: mdl-28675088

ABSTRACT

In contrast to static sounds, spatially dynamic sounds have received little attention in psychoacoustic research so far. This holds true especially for acoustically complex (reverberant, multisource) conditions and impaired hearing. The current study therefore investigated the influence of reverberation and the number of concurrent sound sources on source movement detection in young normal-hearing (YNH) and elderly hearing-impaired (EHI) listeners. A listening environment based on natural environmental sounds was simulated using virtual acoustics and rendered over headphones. Both near-far ('radial') and left-right ('angular') movements of a frontal target source were considered. The acoustic complexity was varied by adding static lateral distractor sound sources as well as reverberation. Acoustic analyses confirmed the expected changes in stimulus features that are thought to underlie radial and angular source movements under anechoic conditions and suggested a special role of monaural spectral changes under reverberant conditions. Analyses of the detection thresholds showed that, with the exception of the single-source scenarios, the EHI group was less sensitive to source movements than the YNH group, despite adequate stimulus audibility. Adding static sound sources clearly impaired the detectability of angular source movements for the EHI (but not the YNH) group. Reverberation, on the other hand, clearly impaired radial source movement detection for the EHI (but not the YNH) listeners. These results illustrate the feasibility of studying factors related to auditory movement perception with the help of the developed test setup.


Subject(s)
Acoustic Stimulation/methods , Acoustics , Hearing Disorders/psychology , Sound Localization , Adult , Age Factors , Aged , Aging/psychology , Auditory Threshold , Case-Control Studies , Cues , Feasibility Studies , Female , Hearing , Hearing Disorders/diagnosis , Hearing Disorders/physiopathology , Humans , Male , Motion , Signal Detection, Psychological , Signal Processing, Computer-Assisted , Sound , Sound Spectrography , Vibration , Young Adult
16.
J Am Acad Audiol ; 27(7): 557-66, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27406662

ABSTRACT

BACKGROUND: Field tests and guided walks in real environments show that the benefit from hearing aid (HA) signal processing in real-life situations is typically lower than the predicted benefit found in laboratory studies. This suggests that laboratory test outcome measures are poor predictors of real-life HA benefits. However, a systematic evaluation of algorithms in the field is difficult due to the lack of reproducibility and control of the test conditions. Virtual acoustic environments that simulate real-life situations may allow for a systematic and reproducible evaluation of HAs under more realistic conditions, thus providing a better estimate of real-life benefit than established laboratory tests. PURPOSE: To quantify the difference in HA performance between a laboratory condition and more realistic conditions based on technical performance measures using virtual acoustic environments, and to identify the factors affecting HA performance across the tested environments. RESEARCH DESIGN: A set of typical HA beamformer algorithms was evaluated in virtual acoustic environments of different complexity. Performance was assessed based on established technical performance measures, including perceptual model predictions of speech quality and speech intelligibility. Virtual acoustic environments ranged from a simple static reference condition to more realistic complex scenes with dynamically moving sound objects. RESULTS: HA benefit, as predicted by signal-to-noise ratio (SNR) and speech intelligibility measures, differs between the reference condition and more realistic conditions for the tested beamformer algorithms. Other performance measures, such as speech quality or binaural degree of diffusiveness, do not show pronounced differences. However, a decreased speech quality was found in specific conditions. A correlation analysis showed a significant correlation between room acoustic parameters of the sound field and HA performance. The SNR improvement in the reference condition was found to be a poor predictor of HA performance in terms of speech intelligibility improvement in the more realistic conditions. CONCLUSIONS: Using several virtual acoustic environments of different complexity, a systematic difference in HA performance between a simple reference condition and more realistic environments was found, which may be related to the discrepancy between laboratory and real-life HA performance reported previously.
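One established technical measure referred to here, the SNR improvement of a processing scheme, is commonly obtained by passing the clean target and the noise separately through identical, signal-independent ("shadow") filtering and comparing output and input SNR. The sketch below shows this measure in its simplest form; it is a generic illustration, not the evaluation pipeline used in the study.

    # Sketch of a generic SNR-improvement measure (assumed here, not taken from
    # the article): run target and noise through the identical processing
    # separately and compare output SNR with input SNR.
    import numpy as np

    def snr_db(target, noise):
        return 10.0 * np.log10(np.sum(target ** 2) / np.sum(noise ** 2))

    def snr_improvement_db(target, noise, process):
        """process(x) must apply the identical, signal-independent filtering
        to whatever signal it is given."""
        return snr_db(process(target), process(noise)) - snr_db(target, noise)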


Subject(s)
Environment , Hearing Aids/standards , Speech Perception , Acoustics , Equipment Design , Humans , Noise , Reproducibility of Results , Signal-To-Noise Ratio
17.
Ear Hear ; 35(5): e213-27, 2014.
Article in English | MEDLINE | ID: mdl-25010636

ABSTRACT

OBJECTIVES: A previous study investigated whether pure-tone average (PTA) hearing loss and working memory capacity (WMC) modulate benefit from different binaural noise reduction (NR) settings. Results showed that listeners with smaller WMC preferred strong over moderate NR even at the expense of poorer speech recognition due to greater speech distortion (SD), whereas listeners with larger WMC did not. To enable a better understanding of these findings, the main aims of the present study were (1) to explore the perceptual consequences of changes to the signal mixture, target speech, and background noise caused by binaural NR, and (2) to determine whether response to these changes varies with WMC and PTA. DESIGN: As in the previous study, four age-matched groups of elderly listeners (with N = 10 per group) characterized by either mild or moderate PTAs and either better or worse performance on a visual measure of WMC participated. Five processing conditions were tested, which were based on the previously used (binaural coherence-based) NR scheme designed to attenuate diffuse signal components at mid to high frequencies. The five conditions differed in terms of the type of processing that was applied (no NR, strong NR, or strong NR with restoration of the long-term stimulus spectrum) and in terms of whether the target speech and background noise were processed in the same manner or whether one signal was left unprocessed while the other signal was processed with the gains computed for the signal mixture. Comparison across these conditions allowed assessing the effects of changes in high-frequency audibility (HFA), SD, and noise attenuation and distortion (NAD). Outcome measures included a dual-task paradigm combining speech recognition with a visual reaction time (VRT) task as well as ratings of perceived effort and overall preference. All measurements were carried out using headphone simulations of a frontal target speaker in a busy cafeteria. RESULTS: Relative to no NR, strong NR was found to impair speech recognition and VRT performance slightly and to improve perceived effort and overall preference markedly. Relative to strong NR, strong NR with restoration of the long-term stimulus spectrum and thus HFA did not affect speech recognition, restored VRT performance to that achievable with no NR, and increased perceived effort and reduced overall preference markedly. SD had negative effects on speech recognition and perceived effort, particularly when both speech and noise were processed with the gains computed for the signal mixture. NAD had positive effects on speech recognition, perceived effort, and overall preference, particularly when the target speech was left unprocessed. VRT performance was unaffected by SD and NAD. None of the datasets exhibited any clear signs that response to the different signal changes varies with PTA or WMC. CONCLUSIONS: For the outcome measures and stimuli applied here, the present study provides little evidence that PTA or WMC affect response to changes in HFA, SD, and NAD caused by binaural NR. However, statistical power restrictions suggest further research is needed. This research should also investigate whether partial HFA restoration combined with some pre-processing that reduces co-modulation distortion results in a more favorable balance of the effects of binaural NR across outcome dimensions and whether NR strength has any influence on these results.


Subject(s)
Hearing Aids , Hearing Loss, Sensorineural/physiopathology , Memory, Short-Term/physiology , Speech Perception/physiology , Aged , Aged, 80 and over , Audiometry, Pure-Tone , Hearing Loss, Sensorineural/rehabilitation , Humans , Middle Aged , Signal Detection, Psychological , Signal-To-Noise Ratio
18.
Ear Hear ; 35(3): e52-62, 2014.
Article in English | MEDLINE | ID: mdl-24351610

ABSTRACT

OBJECTIVES: Although previous research indicates that cognitive skills influence benefit from different types of hearing aid algorithms, comparatively little is known about the role of, and potential interaction with, hearing loss. This holds true especially for noise reduction (NR) processing. The purpose of the present study was thus to explore whether degree of hearing loss and cognitive function modulate benefit from different binaural NR settings based on measures of speech intelligibility, listening effort, and overall preference. DESIGN: Forty elderly listeners with symmetrical sensorineural hearing losses in the mild to severe range participated. They were stratified into four age-matched groups (with n = 10 per group) based on their pure-tone average hearing losses and their performance on a visual measure of working memory (WM) capacity. The algorithm under consideration was a binaural coherence-based NR scheme that suppressed reverberant signal components as well as diffuse background noise at mid to high frequencies. The strength of the applied processing was varied from inactive to strong, and testing was carried out across a range of fixed signal-to-noise ratios (SNRs). Potential benefit was assessed using a dual-task paradigm combining speech recognition with a visual reaction time (VRT) task indexing listening effort. Pairwise preference judgments were also collected. All measurements were made using headphone simulations of a frontal speech target in a busy cafeteria. Test-retest data were gathered for all outcome measures. RESULTS: Analysis of the test-retest data showed all data sets to be reliable. Analysis of the speech scores showed that, for all groups, speech recognition was unaffected by moderate NR processing, whereas strong NR processing reduced intelligibility by about 5%. Analysis of the VRT scores revealed a similar data pattern. That is, while moderate NR did not affect VRT performance, strong NR impaired the performance of all groups slightly. Analysis of the preference scores collapsed across SNR showed that all groups preferred some over no NR processing. Furthermore, the two groups with smaller WM capacity preferred strong over moderate NR processing; for the two groups with larger WM capacity, preference did not differ significantly between the moderate and strong settings. CONCLUSIONS: The present study demonstrates that, for the algorithm and the measures of speech recognition and listening effort used here, the effects of different NR settings interact with neither degree of hearing loss nor WM capacity. However, preferred NR strength was found to be associated with smaller WM capacity, suggesting that hearing aid users with poorer cognitive function may prefer greater noise attenuation even at the expense of poorer speech intelligibility. Further research is required to enable a more detailed (SNR-dependent) analysis of this effect and to test its wider applicability.
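A minimal sketch of the coherence-based idea, assuming an STFT front end: per-band gains follow the estimated interaural coherence, so diffuse or reverberant energy (low coherence) is attenuated while a directional target (high coherence) is passed, and an exponent stands in for the processing strength. Parameter names and the gain rule are illustrative assumptions, not the scheme evaluated in the study.

    # Conceptual sketch of coherence-based noise reduction (the exact scheme
    # used in the study is not reproduced here).
    import numpy as np

    def coherence_gains(stft_left, stft_right, strength=1.0, floor=0.1):
        """stft_left/right: complex STFT arrays (frames, bins). Coherence is
        estimated by averaging over the frame axis."""
        cross = np.mean(stft_left * np.conj(stft_right), axis=0)
        p_l = np.mean(np.abs(stft_left) ** 2, axis=0)
        p_r = np.mean(np.abs(stft_right) ** 2, axis=0)
        coh = np.abs(cross) / np.sqrt(p_l * p_r + 1e-12)
        return np.maximum(coh ** strength, floor)   # strength = 0 -> all-pass

    # Apply the same gains to both ear signals to avoid distorting the
    # interaural cues, e.g. gains * stft_left and gains * stft_right.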


Subject(s)
Algorithms , Cognition , Hearing Aids , Hearing Loss, Sensorineural/rehabilitation , Memory, Short-Term , Aged , Aged, 80 and over , Audiometry, Pure-Tone , Female , Hearing Loss, Sensorineural/psychology , Humans , Male , Middle Aged , Pattern Recognition, Physiological , Reaction Time , Signal-To-Noise Ratio , Speech Perception , Treatment Outcome
19.
J Acoust Soc Am ; 127(3): 1491-505, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20329849

ABSTRACT

In the framework of the European HearCom project, promising signal enhancement algorithms were developed and evaluated for future use in hearing instruments. To assess the algorithms' performance, five of the algorithms were selected and implemented on a common real-time hardware/software platform. Four test centers in Belgium, The Netherlands, Germany, and Switzerland perceptually evaluated the algorithms. Listening tests were performed with large numbers of normal-hearing and hearing-impaired subjects. Three perceptual measures were used: speech reception threshold (SRT), listening effort scaling, and preference rating. Tests were carried out in two types of rooms. Speech was presented in multitalker babble arriving from one or three loudspeakers. In a pseudo-diffuse noise scenario, only one algorithm, the spatially preprocessed speech-distortion-weighted multi-channel Wiener filtering, provided a SRT improvement relative to the unprocessed condition. Despite the general lack of improvement in SRT, some algorithms were preferred over the unprocessed condition at all tested signal-to-noise ratios (SNRs). These effects were found across different subject groups and test sites. The listening effort scores were less consistent over test sites. For the algorithms that did not affect speech intelligibility, a reduction in listening effort was observed at 0 dB SNR.
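For reference, the core of a speech-distortion-weighted multichannel Wiener filter can be written in a few lines from its textbook formulation, as sketched below. This is not the HearCom implementation; the covariance estimates are assumed to be given per frequency bin, and mu is the speech-distortion weighting (mu = 1 gives the standard MWF).

    # Sketch of the core of a speech-distortion-weighted multichannel Wiener
    # filter (SDW-MWF), written from the textbook formulation.
    import numpy as np

    def sdw_mwf_weights(R_noisy, R_noise, mu=1.0, ref_mic=0):
        """R_noisy: spatial covariance of speech+noise, R_noise: of noise only,
        both (M, M) for one frequency bin. Returns the (M,) filter vector."""
        R_speech = R_noisy - R_noise              # estimated speech covariance
        e_ref = np.zeros(R_noisy.shape[0])
        e_ref[ref_mic] = 1.0
        return np.linalg.solve(R_speech + mu * R_noise, R_speech @ e_ref)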


Subject(s)
Algorithms , Deafness/therapy , Hearing Aids , Models, Theoretical , Phonetics , Acoustic Stimulation , Environment , Hearing , Humans , Noise , Signal Processing, Computer-Assisted , Speech Perception