Results 1-20 of 62
1.
Z Gerontol Geriatr ; 56(4): 283-289, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37103645

ABSTRACT

BACKGROUND: Hearing aid technology has proven to be successful in the rehabilitation of hearing loss, but its performance is still limited in difficult everyday conditions characterized by noise and reverberation. OBJECTIVE: To introduce the current state of hearing aid technology and to present current research and future developments. METHODS: The current literature was analyzed, and several specific new developments are presented. RESULTS: Both objective and subjective data from empirical studies show the limitations of the current technology. Examples of current research show the potential of machine learning-based algorithms and multimodal signal processing for improving speech processing and perception, of virtual reality for improving hearing device fitting, and of mobile health technology for improving hearing health services. CONCLUSION: Hearing device technology will remain a key factor in the rehabilitation of hearing impairments. New technology, such as machine learning and multimodal signal processing, virtual reality, and mobile health technology, will improve speech enhancement, individual fitting, and communication training, thus providing better support for all hearing-impaired patients, including older patients with disabilities or declining cognitive skills.


Subjects
Hearing Aids, Hearing Loss, Speech Perception, Humans, Hearing Aids/psychology, Hearing Loss/diagnosis, Noise
2.
Int J Audiol ; : 1-10, 2022 Dec 13.
Article in English | MEDLINE | ID: mdl-36512479

ABSTRACT

Objective: Distorted loudness perception is one of the main complaints of hearing aid users. Measuring loudness perception in the clinic as it is experienced in everyday listening situations is important for loudness-based hearing aid fitting, yet little research has compared loudness perception in the field and in the laboratory. Design: Participants rated the loudness of 36 driving actions in the field and in the laboratory. The field measurements were recorded with a 360° camera and a tetrahedral microphone. The recorded stimuli, which are openly accessible, were presented in three laboratory conditions: 360° video recordings with a head-mounted display, video recordings with a desktop monitor, and audio-only. Study sample: Thirteen normal-hearing participants and 18 hearing-impaired participants with hearing aids. Results: The driving actions were rated as louder in the laboratory than in the field for the desktop-monitor and audio-only conditions. The less realistic a laboratory condition was, the more likely participants were to rate a driving action as louder, and the field-laboratory loudness differences were larger for louder sounds. Conclusion: The results of this experiment indicate the importance of increasing realism and immersion when measuring loudness in the clinic.

3.
Trends Hear ; 26: 23312165221134378, 2022.
Article in English | MEDLINE | ID: mdl-36437739

ABSTRACT

Unhindered auditory and visual signals are essential for sufficient speech understanding by cochlear implant (CI) users. Face masks are an important hygiene measure against COVID-19 but disrupt these signals. This study determines the extent and the mechanisms of speech intelligibility alteration in CI users caused by different face masks. The audiovisual German matrix sentence test was used to determine speech reception thresholds (SRT) in noise in different conditions (audiovisual, audio-only, speechreading, and masked audiovisual using two different face masks). Thirty-seven CI users and ten normal-hearing (NH) listeners were included. CI users' SRTs deteriorated by 5.0 dB with a surgical mask and by 6.5 dB with an FFP2 mask compared to the audiovisual condition without a mask. The greater part of this deterioration could be accounted for by the loss of the visual signal (up to 4.5 dB). The effect of each mask was significantly larger in CI users who hear exclusively with their CI (surgical: 7.8 dB, p = 0.005; FFP2: 8.7 dB, p = 0.01) than in NH listeners (surgical: 3.8 dB; FFP2: 5.1 dB). This study confirms that CI users who rely exclusively on their CI for hearing are particularly susceptible. Therefore, visual signals should be made accessible for communication whenever possible, especially when communicating with CI users.


Subjects
COVID-19, Cochlear Implants, Speech Perception, Humans, Masks/adverse effects, Pandemics, Speech Intelligibility
4.
Front Neurosci ; 16: 904003, 2022.
Article in English | MEDLINE | ID: mdl-36117630

ABSTRACT

Recent advancements in neuroscientific research and miniaturized ear-electroencephalography (EEG) technologies have led to the idea of employing brain signals as additional input to hearing aid algorithms. The information acquired through EEG could potentially be used to control the audio signal processing of the hearing aid or to monitor communication-related physiological factors. In previous work, we implemented a research platform to develop methods that utilize EEG in combination with a hearing device. The setup combines currently available mobile EEG hardware and the so-called Portable Hearing Laboratory (PHL), which can fully replicate a complete hearing aid. Audio and EEG data are synchronized using the Lab Streaming Layer (LSL) framework. In this study, we evaluated the setup in three scenarios, focusing particularly on the alignment of audio and EEG data. In Scenario I, we measured the latency between software event markers and actual audio playback of the PHL. In Scenario II, we measured the latency between an analog input signal and the sampled data stream of the EEG system. In Scenario III, we measured the latency in the whole setup as it would be used in a real EEG experiment. The results of Scenario I showed a jitter (standard deviation of trial latencies) below 0.1 ms. The jitter in Scenarios II and III was around 3 ms in both cases. The results suggest that the increased jitter compared to Scenario I can be attributed to the EEG system. Overall, the findings show that the measurement setup can present acoustic stimuli with accurate timing while generating LSL data streams over multiple hours of playback. Further, the setup can capture the audio and EEG LSL streams with sufficient temporal accuracy to extract event-related potentials from EEG signals. We conclude that our setup is suitable for studying closed-loop EEG and audio applications for future hearing aids.
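For readers who want to experiment with a comparable setup, the Python sketch below illustrates the general idea of latency and jitter estimation with the Lab Streaming Layer. It is a minimal sketch, not the authors' code: the stream names, the use of pylsl, and the jitter computation are assumptions for illustration, and a real measurement would compare against the actual acoustic onset rather than the next EEG sample.

```python
# Minimal sketch (not the authors' code): estimating marker-to-EEG jitter
# with the Lab Streaming Layer, assuming an EEG stream is already running.
import statistics
from pylsl import StreamInfo, StreamOutlet, StreamInlet, resolve_byprop, local_clock

# Outlet for software event markers (irregular rate, string payload).
marker_info = StreamInfo('Markers', 'Markers', 1, 0, 'string', 'marker_uid')
marker_outlet = StreamOutlet(marker_info)

# Inlet for the EEG stream (assumed to exist on the network).
eeg_streams = resolve_byprop('type', 'EEG', timeout=5.0)
eeg_inlet = StreamInlet(eeg_streams[0])

latencies = []
for trial in range(100):
    onset = local_clock()                 # intended stimulus onset time
    marker_outlet.push_sample(['stim'], onset)
    sample, ts = eeg_inlet.pull_sample()  # next EEG sample after the marker
    latencies.append(ts - onset)

print('mean latency: %.4f s' % statistics.mean(latencies))
print('jitter (SD):  %.4f s' % statistics.stdev(latencies))
```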

6.
Trends Hear ; 26: 23312165221108257, 2022.
Article in English | MEDLINE | ID: mdl-35702051

ABSTRACT

A multi-talker paradigm is introduced that uses different attentional processes to adjust speech-recognition scores with the goal of conducting measurements at high signal-to-noise ratios (SNR). The basic idea is to simulate a group conversation with three talkers. Talkers alternately speak sentences of the German matrix test OLSA. Each time a sentence begins with the name "Kerstin" (call sign), the participant is addressed and instructed to repeat the last words of all sentences from that talker, until another talker begins a sentence with "Kerstin". The alternation of the talkers is implemented with an adjustable overlap time that causes an overlap between the call sign "Kerstin" and the target words to be repeated. Thus, the two tasks of detecting "Kerstin" and repeating the target words must be performed simultaneously. The paradigm was tested with 22 young normal-hearing participants (YNH) for three overlap times (0.6 s, 0.8 s, and 1.0 s). Results for these overlap times show significant differences, with median target word recognition scores of 88%, 82%, and 77%, respectively (including call-sign and dual-task effects). A comparison of the dual task with the corresponding single tasks suggests that the observed effects reflect an increased cognitive load.


Subjects
Speech Perception, Speech, Hearing Tests, Humans, Language, Signal-To-Noise Ratio
7.
SoftwareX ; 17, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35465173

ABSTRACT

The open Master Hearing Aid (openMHA) was developed and provided to the hearing aid research community as an open-source software platform with the aim of supporting sustainable and reproducible research towards improved and new types of assistive hearing systems, unconstrained by proprietary software. The software offers a flexible framework that allows users to conduct hearing aid research using the tools and signal processing plugins provided with the software, as well as to implement their own methods. The openMHA software is hardware-independent and supports Linux, macOS, and Windows operating systems, as well as 32-bit and 64-bit ARM-based architectures such as those used in small portable integrated systems. www.openmha.org.

8.
J Acoust Soc Am ; 151(2): 712, 2022 02.
Article in English | MEDLINE | ID: mdl-35232067

ABSTRACT

Humans are able to follow a speaker even in challenging acoustic conditions. The perceptual mechanisms underlying this ability remain unclear. A computational model of attentive voice tracking is presented, consisting of four blocks: (1) sparse periodicity-based auditory features (sPAF) extraction, (2) foreground-background segregation, (3) state estimation, and (4) top-down knowledge. The model connects the theories of auditory glimpses, foreground-background segregation, and Bayesian inference. It is implemented with the sPAF, sequential Monte Carlo sampling, and probabilistic voice models. The model is evaluated by comparing it with the human data obtained in the study by Woods and McDermott [Curr. Biol. 25(17), 2238-2246 (2015)], which measured the ability to track one of two competing voices with time-varying parameters [fundamental frequency (F0) and formants (F1, F2)]. Three model versions were tested, which differ in the type of information used for the segregation: version (a) uses the oracle F0, version (b) uses the estimated F0, and version (c) uses the spectral shape derived from the estimated F0 and the oracle F1 and F2. Version (a) simulates the optimal human performance in conditions with the largest separation between the voices, version (b) simulates the conditions in which the separation is not sufficient to follow the voices, and version (c) is closest to the human performance for moderate voice separation.
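The state-estimation block relies on sequential Monte Carlo sampling. As a generic illustration of that technique (not the published model), the following sketch uses a bootstrap particle filter to track a slowly varying F0 from noisy frame-wise estimates; the random-walk dynamics, noise levels, and Gaussian likelihood are assumptions for illustration.

```python
# Generic bootstrap particle filter sketch for tracking a slowly varying F0.
# Illustrative only: dynamics, noise levels, and observation model are assumed.
import numpy as np

rng = np.random.default_rng(0)
n_particles = 500
f0_true = 200.0 + 20.0 * np.sin(np.linspace(0, 2 * np.pi, 100))  # hidden F0 track (Hz)
observations = f0_true + rng.normal(0.0, 5.0, size=f0_true.size)  # noisy F0 estimates

particles = rng.uniform(100.0, 300.0, n_particles)  # initial F0 hypotheses
estimates = []
for z in observations:
    particles += rng.normal(0.0, 2.0, n_particles)          # random-walk prediction
    weights = np.exp(-0.5 * ((z - particles) / 5.0) ** 2)   # Gaussian likelihood
    weights /= weights.sum()
    estimates.append(np.sum(weights * particles))           # posterior-mean estimate
    idx = rng.choice(n_particles, n_particles, p=weights)   # multinomial resampling
    particles = particles[idx]

rmse = np.sqrt(np.mean((np.array(estimates) - f0_true) ** 2))
print('RMS tracking error: %.2f Hz' % rmse)
```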


Subjects
Speech Perception, Voice, Acoustics, Bayes Theorem, Computer Simulation, Humans, Periodicity, Speech Acoustics
9.
Trends Hear ; 26: 23312165221078707, 2022.
Article in English | MEDLINE | ID: mdl-35341403

ABSTRACT

When listening to a sound source in everyday situations, typical movement behavior is highly individual and may not result in the listener directly facing the sound source. Behavioral differences can affect the performance of directional algorithms in hearing aids, as was shown in previous work by using head movement trajectories of normal-hearing (NH) listeners in acoustic simulations for noise-suppression performance predictions. However, the movement behavior of hearing-impaired (HI) listeners with or without hearing aids may differ, and hearing-aid users might adapt their self-motion to improve the performance of directional algorithms. This work investigates the influence of hearing impairment on self-motion and the interaction of hearing aids with self-motion. To this end, the self-motion of three HI participant groups (aided with an adaptive differential microphone (ADM), aided without ADM, and unaided) was measured and compared to previously measured self-motion data from younger and older NH participants. Self-motion was measured in virtual audiovisual environments (VEs) in the laboratory, and the signal-to-noise ratios (SNRs) and the SNR improvement of the ADM resulting from the head movements of the participants were estimated using acoustic simulations. HI participants moved mainly their head and used their eyes less than NH participants, which led to a 0.3 dB increase in estimated SNR and to differences in the estimated SNR improvement of the ADM. However, the self-motion of the HI participants aided with the ADM was similar to that of the other HI participants, indicating that the ADM did not cause listeners to adapt their self-motion.
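For orientation, the sketch below shows the textbook first-order adaptive differential microphone idea underlying such directional settings: two closely spaced omnidirectional microphones are combined into forward- and backward-facing cardioids, and a single adaptive weight steers the rear null. The mic spacing, step size, and toy rear-noise scenario are assumptions, not the devices' actual implementation.

```python
# Sketch of a first-order adaptive differential microphone (ADM).
# Two omni mics spaced d apart form forward/backward cardioids via
# delay-and-subtract; the scalar beta adapts to null a rear interferer.
import numpy as np

fs = 16000
d = 0.012                            # mic spacing in metres (assumed)
delay = int(round(fs * d / 343.0))   # acoustic travel time in samples

def adm(front, back, mu=0.01):
    """Process two omni mic signals; returns the adaptively steered output."""
    cf = front[delay:] - back[:-delay]   # forward-facing cardioid
    cb = back[delay:] - front[:-delay]   # backward-facing cardioid
    beta, out = 0.5, np.zeros(cf.size)
    for n in range(cf.size):
        y = cf[n] - beta * cb[n]
        out[n] = y
        beta = np.clip(beta + mu * y * cb[n], 0.0, 1.0)  # LMS-style update
    return out

# Toy usage: a rear noise source reaches the back mic before the front mic.
rng = np.random.default_rng(1)
noise = rng.normal(0.0, 1.0, fs)
front_sig = np.roll(noise, delay)    # delayed arrival at the front mic
out = adm(front_sig, noise)
print('rear-noise power in/out: %.3f / %.3f' % (np.var(noise), np.var(out)))
```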


Subjects
Hearing Aids, Hearing Loss, Sound Localization, Speech Perception, Hearing Loss/diagnosis, Humans, Noise/adverse effects
10.
Otol Neurotol ; 43(3): 282-288, 2022 03 01.
Article in English | MEDLINE | ID: mdl-34999618

ABSTRACT

OBJECTIVE: To investigate the effects of wearing a simulated mask on speech perception of normal-hearing subjects. STUDY DESIGN: Prospective cohort study. SETTING: University hospital. PATIENTS: Fifteen normal-hearing, native German speakers (8 female, 7 male). INTERVENTION: Different experimental conditions with and without simulated face masks using the audiovisual version of the female German Matrix test (Oldenburger Satztest, OLSA). MAIN OUTCOME MEASURES: Signal-to-noise ratio (SNR) at speech intelligibility of 80%. RESULTS: The SNR at which 80% speech intelligibility was achieved deteriorated by a mean of 4.1 dB SNR when simulating a medical mask and by 5.1 dB SNR when simulating a cloth mask in comparison to the audiovisual condition without mask. Interestingly, the contribution of the visual component alone was 2.6 dB SNR and thus had a larger effect than the acoustic component in the medical mask condition. CONCLUSIONS: As expected, speech understanding with face masks was significantly worse than under control conditions. Thus, the speaker's use of face masks leads to a significant deterioration of speech understanding by the normal-hearing listener. The data suggest that these effects may play a role in many everyday situations that typically involve noise.


Subjects
Masks, Speech Perception, Female, Hearing, Humans, Male, Prospective Studies, Speech Intelligibility
11.
Int J Audiol ; 61(4): 311-321, 2022 04.
Article in English | MEDLINE | ID: mdl-34109902

ABSTRACT

OBJECTIVE: The aim was to create and validate an audiovisual version of the German matrix sentence test (MST), which uses the existing audio-only speech material. DESIGN: Video recordings were made and dubbed with the audio of the existing German MST. The current study evaluates the MST in conditions including audio and visual modalities, speech in quiet and in noise, and open- and closed-set response formats. SAMPLE: One female talker recorded repetitions of the German MST sentences. Twenty-eight young normal-hearing participants completed the evaluation study. RESULTS: The audiovisual benefit in quiet was 7.0 dB in sound pressure level (SPL). In noise, the audiovisual benefit was 4.9 dB in signal-to-noise ratio (SNR). Speechreading scores ranged from 0% to 84% speech reception in visual-only sentences (mean = 50%). Audiovisual speech reception thresholds (SRTs) had a larger standard deviation than audio-only SRTs. Audiovisual SRTs improved successively with the number of lists performed. The final video recordings are openly available. CONCLUSIONS: The video material achieved speech intelligibility similar to that reported in the literature, despite the inherent asynchronies of dubbing. Due to ceiling effects, adaptive procedures targeting 80% intelligibility should be used. At least one or two training lists should be performed.
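The recommended adaptive procedure can be sketched generically as a word-scoring track that steps the SNR toward the 80% intelligibility point after each matrix sentence. The step rule and the simulated logistic listener below are illustrative assumptions, not the procedure used in the study.

```python
# Generic sketch of an adaptive sentence-test track converging on 80%
# intelligibility. Step rule and simulated listener are assumptions.
import numpy as np

rng = np.random.default_rng(2)
srt50, slope = -6.0, 1.0          # simulated listener (logistic, per dB)
p = lambda snr: 1.0 / (1.0 + np.exp(-slope * (snr - srt50)))

target, snr, levels = 0.8, 0.0, []
for trial in range(30):
    hits = rng.binomial(5, p(snr))        # 5 words per matrix sentence
    step = 4.0 / (1 + trial // 5)         # shrink step as the track settles
    snr += step * (target - hits / 5.0)   # move toward the 80% point
    levels.append(snr)

print('estimated 80%% SRT: %.1f dB SNR' % np.mean(levels[-10:]))
# True 80% point of this simulated listener: srt50 + ln(4)/slope = -4.6 dB SNR
```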


Subjects
Speech Perception, Female, Humans, Noise/adverse effects, Speech Intelligibility, Speech Reception Threshold Test/methods, Video Recording
12.
Ear Hear ; 41 Suppl 1: 48S-55S, 2020.
Article in English | MEDLINE | ID: mdl-33105259

ABSTRACT

The benefit from directional hearing devices predicted in the lab often differs from reported user experience, suggesting that laboratory findings lack ecological validity. This difference may be partly caused by differences in self-motion between the lab and real-life environments. This literature review aims to provide an overview of the methods used to measure and quantify self-motion, the test environments, and the measurement paradigms. Self-motion is the rotation and translation of the head and torso and movement of the eyes. Studies were considered which explicitly assessed or controlled self-motion within the scope of hearing and hearing device research. The methods and outcomes of the reviewed studies are compared and discussed in relation to ecological validity. The reviewed studies demonstrate interactions between hearing device benefit and self-motion, such as a decreased benefit from directional microphones due to a more natural head movement when the test environment and task include realistic complexity. Identified factors associated with these interactions include the presence of audiovisual cues in the environment, interaction with conversation partners, and the nature of the tasks being performed. This review indicates that although some aspects of the interactions between self-motion and hearing device benefit have been shown and many methods for assessment and analysis of self-motion are available, it is still unclear to what extent individual factors affect the ecological validity of the findings. Further research is required to relate lab-based measures of self-motion to the individual's real-life hearing ability.


Subjects
Hearing Aids, Speech Perception, Cues, Hearing, Humans, Motion
13.
Ear Hear ; 41 Suppl 1: 5S-19S, 2020.
Article in English | MEDLINE | ID: mdl-33105255

ABSTRACT

Ecological validity is a relatively new concept in hearing science. It has been cited as relevant with increasing frequency in publications over the past 20 years, but without any formal conceptual basis or clear motive. The sixth Eriksholm Workshop was convened to develop a deeper understanding of the concept for the purpose of applying it in hearing research in a consistent and productive manner. Inspired by relevant debate within the field of psychology, and taking into account the World Health Organization's International Classification of Functioning, Disability, and Health framework, the attendees at the workshop reached a consensus on the following definition: "In hearing science, ecological validity refers to the degree to which research findings reflect real-life hearing-related function, activity, or participation." Four broad purposes for striving for greater ecological validity in hearing research were determined: A (Understanding) better understanding the role of hearing in everyday life; B (Development) supporting the development of improved procedures and interventions; C (Assessment) facilitating improved methods for assessing and predicting ability to accomplish real-world tasks; and D (Integration and Individualization) enabling more integrated and individualized care. Discussions considered the effects of variables and phenomena commonly present in hearing-related research on the level of ecological validity of outcomes, supported by examples from a few selected outcome domains and for different types of studies. Illustrated with examples, potential strategies were offered for promoting a high level of ecological validity in a study and for how to evaluate the level of ecological validity of a study. Areas in particular that could benefit from more research to advance ecological validity in hearing science include: (1) understanding the processes of hearing and communication in everyday listening situations, and specifically the factors that make listening difficult in everyday situations; (2) developing new test paradigms that include more than one person (e.g., to encompass the interactive nature of everyday communication) and that are integrative of other factors that interact with hearing in real-life function; (3) integrating new and emerging technologies (e.g., virtual reality) with established test methods; and (4) identifying the key variables and phenomena affecting the level of ecological validity to develop verifiable ways to increase ecological validity and derive a set of benchmarks to strive for.


Subjects
Hearing Aids, Hearing, Auditory Perception, Comprehension, Humans, Research Design
14.
Ear Hear ; 41 Suppl 1: 31S-38S, 2020.
Article in English | MEDLINE | ID: mdl-33105257

ABSTRACT

To assess perception with and performance of modern and future hearing devices with advanced adaptive signal processing capabilities, novel evaluation methods are required that go beyond already established methods. These novel methods will simulate to a certain extent the complexity and variability of acoustic conditions and acoustic communication styles in real life. This article discusses the current state and the perspectives of virtual reality technology use in the lab for designing complex audiovisual communication environments for hearing assessment and hearing device design and evaluation. In an effort to increase the ecological validity of lab experiments, that is, to increase the degree to which lab data reflect real-life hearing-related function, and to support the development of improved hearing-related procedures and interventions, this virtual reality lab marks a transition from conventional (audio-only) lab experiments to the field. The first part of the article introduces and discusses the notion of the communication loop as a theoretical basis for understanding the factors that are relevant for acoustic communication in real life. From this, requirements are derived that allow an assessment of the extent to which a virtual reality lab reflects these factors, and which may be used as a proxy for ecological validity. The most important factor of real-life communication identified is a closed communication loop among the actively behaving participants. The second part of the article gives an overview of the current developments towards a virtual reality lab at Oldenburg University that aims at interactive and reproducible testing of subjects with and without hearing devices in challenging communication conditions. The extent to which the virtual reality lab in its current state meets the requirements defined in the first part is discussed, along with its limitations and potential further developments. Finally, data are presented from a qualitative study that compared subject behavior and performance in two audiovisual environments presented in the virtual reality lab (a street and a cafeteria) with the corresponding field environments. The results show similarities and differences in subject behavior and performance between the lab and the field, indicating that the virtual reality lab in its current state marks a step towards more ecological validity in lab-based hearing and hearing device research, but requires further development towards higher levels of ecological validity.


Subjects
Hearing Tests, User-Computer Interface, Virtual Reality, Acoustics, Comprehension, Humans, Sound
15.
Trends Hear ; 24: 2331216520945826, 2020.
Article in English | MEDLINE | ID: mdl-32895034

ABSTRACT

It is well known that hearing loss compromises auditory scene analysis abilities, as is usually manifested in difficulties understanding speech in noise. Remarkably little is known about the auditory scene analysis of hearing-impaired (HI) listeners when it comes to musical sounds. Specifically, it is unclear to what extent HI listeners are able to hear out a melody or an instrument from a musical mixture. Here, we tested the ability of younger normal-hearing (yNH) and older HI (oHI) listeners with moderate hearing loss to match short melodies and instruments presented as part of mixtures. Four-tone sequences were used in conjunction with a simple musical accompaniment that acted as a masker (cello/piano dyads or spectrally matched noise). In each trial, a signal-masker mixture was presented, followed by two different versions of the signal alone. Listeners indicated which signal version was part of the mixture. Signal versions differed either in the sequential order of the pitch sequence or in timbre (flute vs. trumpet). Signal-to-masker thresholds were measured by varying the signal presentation level in an adaptive two-down/one-up procedure. We observed that thresholds of oHI listeners were elevated by 10 dB on average compared with those of yNH listeners. In contrast to yNH listeners, oHI listeners did not show evidence of listening in the dips of the masker. Musical training of participants was associated with lower thresholds. These results may indicate detrimental effects of hearing loss on central aspects of musical scene perception.
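The adaptive two-down/one-up procedure converges on the 70.7% correct point of the psychometric function. Below is a minimal generic sketch of such a staircase with a simulated listener; the starting level, step-size halving, and stopping rule are illustrative assumptions rather than the study's exact settings.

```python
# Minimal two-down/one-up staircase sketch (converges on ~70.7% correct).
# The simulated listener and step schedule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
true_thresh = -10.0                   # simulated threshold (dB)
p = lambda lvl: 1.0 / (1.0 + np.exp(-(lvl - true_thresh)))

level, step, correct_run = 0.0, 4.0, 0
reversals, direction = [], -1
while len(reversals) < 8:
    if rng.random() < p(level):       # correct response
        correct_run += 1
        if correct_run == 2:          # two correct in a row: step down
            correct_run = 0
            if direction == +1:       # upward run just ended: reversal
                reversals.append(level)
                step = max(step / 2.0, 1.0)
            direction, level = -1, level - step
    else:                             # one incorrect: step up
        correct_run = 0
        if direction == -1:           # downward run just ended: reversal
            reversals.append(level)
            step = max(step / 2.0, 1.0)
        direction, level = +1, level + step

print('threshold estimate: %.1f dB' % np.mean(reversals[-4:]))
```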


Subjects
Hearing Loss, Music, Speech Perception, Auditory Perception, Auditory Threshold, Hearing, Hearing Loss/diagnosis, Hearing Tests, Humans
17.
Eur J Neurosci ; 51(5): 1353-1363, 2020 03.
Article in English | MEDLINE | ID: mdl-29855099

ABSTRACT

Human listeners robustly decode speech information from a talker of interest that is embedded in a mixture of spatially distributed interferers. A relevant question is which time-frequency segments of the speech are predominantly used by a listener to solve such a complex Auditory Scene Analysis task. A recent psychoacoustic study investigated the relevance of low signal-to-noise ratio (SNR) components of a target signal on speech intelligibility in a spatial multitalker situation. For this, a three-talker stimulus was manipulated in the spectro-temporal domain such that target speech time-frequency units below a variable SNR threshold (SNRcrit) were discarded while keeping the interferers unchanged. The psychoacoustic data indicate that only target components at and above a local SNR of about 0 dB contribute to intelligibility. This study applies an auditory scene analysis "glimpsing" model to the same manipulated stimuli. Model data are found to be similar to the human data, supporting the notion of "glimpsing," that is, that salient speech-related information is predominantly used by the auditory system to decode speech embedded in a mixture of sounds, at least for the tested conditions of three overlapping speech signals. This implies that perceptually relevant auditory information is sparse and may be processed with low computational effort, which is relevant for neurophysiological research of scene analysis and novelty processing in the auditory system.
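The stimulus manipulation described above amounts to binary spectro-temporal masking: target time-frequency units whose local SNR falls below SNRcrit are discarded while the interferers are left unchanged. The sketch below illustrates that operation with an STFT; the placeholder signals, frame parameters, and use of scipy are assumptions for illustration.

```python
# Sketch of the "glimpsing" manipulation: discard target time-frequency
# units whose local SNR is below SNRcrit, keeping the interferers unchanged.
# Placeholder noise signals stand in for speech; parameters are assumed.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
rng = np.random.default_rng(4)
target = rng.normal(0.0, 1.0, fs)        # placeholder for target speech
interferer = rng.normal(0.0, 1.0, fs)    # placeholder for interferer mixture

_, _, T = stft(target, fs, nperseg=512)
_, _, I = stft(interferer, fs, nperseg=512)

snr_crit_db = 0.0                        # local SNR criterion (dB)
local_snr = 20.0 * np.log10((np.abs(T) + 1e-12) / (np.abs(I) + 1e-12))
mask = local_snr >= snr_crit_db          # keep only the target "glimpses"

_, target_glimpsed = istft(T * mask, fs, nperseg=512)
n = min(target_glimpsed.size, interferer.size)
mixture = target_glimpsed[:n] + interferer[:n]
print('fraction of target units kept: %.2f' % mask.mean())
```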


Subjects
Speech Perception, Acoustic Stimulation, Auditory Threshold, Humans, Perceptual Masking, Psychoacoustics, Signal-To-Noise Ratio, Sound, Speech Intelligibility
18.
Trends Hear ; 23: 2331216519872362, 2019.
Article in English | MEDLINE | ID: mdl-32516060

ABSTRACT

Recent achievements in hearing aid development, such as visually guided hearing aids, make it increasingly important to study movement behavior in everyday situations in order to develop test methods and evaluate hearing aid performance. In this work, audiovisual virtual environments (VEs) were designed for communication conditions in a living room, a lecture hall, a cafeteria, a train station, and a street environment. Movement behavior (head movement, gaze direction, and torso rotation) and electroencephalography signals were measured in these VEs in the laboratory for 22 younger normal-hearing participants and 19 older normal-hearing participants. These data establish a reference for future studies that will investigate the movement behavior of hearing-impaired listeners and hearing aid users for comparison. Questionnaires were used to evaluate the subjective experience in the VEs. A test-retest comparison showed that the measured movement behavior is reproducible and that the measures of movement behavior used in this study are reliable. Moreover, evaluation of the questionnaires indicated that the VEs are sufficiently realistic. The participants rated the experienced acoustic realism of the VEs positively, and although the rating of the experienced visual realism was lower, the participants felt to some extent present and involved in the VEs. Analysis of the movement data showed that movement behavior depends on the VE and the age of the subject and is predictable in multitalker conversations and for moving distractors. The VEs and a database of the collected data are publicly available.


Subjects
Acoustics, Movement/physiology, Speech Perception/physiology, Virtual Reality, Adult, Aged, Electroencephalography, Female, Healthy Volunteers, Hearing Aids, Hearing Loss/diagnosis, Hearing Tests/methods, Humans, Male, Middle Aged, Surveys and Questionnaires, Young Adult
19.
J Speech Lang Hear Res ; 62(1): 177-189, 2019 01 30.
Article in English | MEDLINE | ID: mdl-30534994

ABSTRACT

Purpose: For elderly listeners, it is more challenging to listen to 1 voice surrounded by other voices than for young listeners. This could be caused by a reduced ability to use acoustic cues, such as slight differences in onset time, for the segregation of concurrent speech signals. Here, we study whether the ability to benefit from onset asynchrony differs between young (18-33 years) and elderly (55-74 years) listeners. Method: We investigated young (normal hearing, N = 20) and elderly (mildly hearing impaired, N = 26) listeners' ability to segregate 2 vowels with onset asynchronies ranging from 20 to 100 ms. Behavioral measures were complemented by a specific event-related brain potential component, the object-related negativity, indicating the perception of 2 distinct auditory objects. Results: Elderly listeners' behavioral performance (identification accuracy of the 2 vowels) was considerably poorer than young listeners'. However, both age groups showed the same amount of improvement with increasing onset asynchrony. Object-related negativity amplitude also increased similarly in both age groups. Conclusion: Both age groups benefit to a similar extent from onset asynchrony as a cue for concurrent speech segregation during active (behavioral measurement) and during passive (electroencephalographic measurement) listening.


Subjects
Speech Acoustics, Speech Perception/physiology, Adult, Age Factors, Aged, Analysis of Variance, Audiometry, Auditory Threshold, Cues, Electroencephalography, Female, Humans, Male, Middle Aged, Young Adult
20.
Audiol Res ; 8(2): 215, 2018 Oct 02.
Article in English | MEDLINE | ID: mdl-30581544

ABSTRACT

Hearing loss can negatively influence the spatial hearing abilities of hearing-impaired listeners, not only in static but also in dynamic auditory environments. Therefore, ways of addressing these deficits with advanced hearing aid algorithms need to be investigated. In a previous study based on virtual acoustics and a computer simulation of different bilateral hearing aid fittings, we investigated auditory source movement detectability in older hearing-impaired (OHI) listeners. We found that two directional processing algorithms could substantially improve the detectability of left-right and near-far source movements in the presence of reverberation and multiple interfering sounds. In the current study, we carried out similar measurements with a loudspeaker-based setup and wearable hearing aids. We fitted a group of 15 OHI listeners with bilateral behind-the-ear devices that were programmed to have three different directional processing settings. Apart from source movement detectability, we assessed two other aspects of spatial awareness perception. Using a street scene with up to five environmental sound sources, the participants had to count the number of presented sources or to indicate the movement direction of a single target signal. The data analyses showed a clear influence of the number of concurrent sound sources and the starting position of the moving target signal on the participants' performance, but no influence of the different hearing aid settings. Complementary artificial head recordings showed that the acoustic differences between the three hearing aid settings were rather small. Another explanation for the lack of effects of the tested hearing aid settings could be that the simulated street scenario was not sufficiently sensitive. Possible ways of improving the sensitivity of the laboratory measures while maintaining high ecological validity and complexity are discussed.
