1.
J Speech Lang Hear Res; 66(11): 4575-4589, 2023 Nov 9.
Article in English | MEDLINE | ID: mdl-37850878

ABSTRACT

PURPOSE: There is a need for tools to study real-world communication abilities in people with hearing loss. We outline a potential gaze-analysis method and use it to answer the question of when, and how much, listeners with hearing loss look toward a new talker in a conversation. METHOD: Twenty-two older adults with hearing loss followed a prerecorded two-person audiovisual conversation in the presence of babble noise. We compared their eye-gaze direction to the conversation in two multilevel logistic regression (MLR) analyses. First, we split the conversation into events classified by the number of active talkers within a turn or a transition and tested whether these events predicted the listener's gaze. Second, we mapped the odds that a listener gazed toward a new talker over time during a conversation transition. RESULTS: We found no evidence that our conversation events predicted changes in the listener's gaze, but the listener's gaze toward the new talker during a silence transition was predicted by time: The odds of looking at the new talker increased in an s-shaped curve from at least 0.4 s before to 1 s after the onset of the new talker's speech. A comparison of models with different random effects indicated that more variance was explained by differences between individual conversation events than by differences between individual listeners. CONCLUSIONS: MLR modeling of eye-gaze during talker transitions is a promising approach to studying a listener's perception of realistic conversation. Our experience provides insight to guide future research with this method.
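The s-shaped rise in the odds of gazing at the new talker around speech onset can be illustrated with a deliberately simplified, single-level logistic regression on simulated data. The multilevel (random-effects) structure of the study's actual MLR analysis is omitted, and every parameter value below is a hypothetical stand-in, not taken from the paper:

```python
import numpy as np
from scipy.special import expit
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Simulated observation times around the new talker's speech onset (t = 0 s),
# covering the reported window from 0.4 s before to 1 s after onset.
t = rng.uniform(-0.4, 1.0, size=2000)
# Hypothetical s-shaped ground truth: probability of gazing at the new talker.
p_true = expit(4.0 * (t - 0.3))
gaze = rng.binomial(1, p_true)  # 1 = looking at the new talker

# Single-level logistic regression of gaze direction on time.
model = LogisticRegression().fit(t.reshape(-1, 1), gaze)

# Recovered probabilities at three time points (early, mid, late).
p_hat = model.predict_proba(np.array([[-0.4], [0.3], [1.0]]))[:, 1]
```

The fitted coefficient on time is positive and the recovered probabilities increase monotonically across the transition, reproducing the s-shaped time course in miniature.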


Subject(s)
Deafness; Hearing Loss; Speech Perception; Humans; Aged; Acoustic Stimulation/methods; Speech
2.
J Speech Lang Hear Res; 66(10): 4009-4024, 2023 Oct 4.
Article in English | MEDLINE | ID: mdl-37625145

ABSTRACT

PURPOSE: The purpose of this work was to study the effects of background noise and of the hearing attenuation associated with earplugs on three physiological measures, assumed to be markers of effort investment and arousal, during interactive communication. METHOD: Twelve pairs of older people (average age of 63.2 years) with age-adjusted normal hearing took part in face-to-face communication to solve a Diapix task. Communication took place at different levels of babble noise (0, 60, and 70 dBA) and, in quiet, with two levels of hearing attenuation (0 and 25 dB). The physiological measures obtained included pupil size, heart rate variability, and skin conductance. In addition, subjective ratings of perceived communication success, frustration, and effort were obtained. RESULTS: Ratings of perceived success, frustration, and effort confirmed that communication was more difficult in noise and with approximately 25-dB hearing attenuation, and suggested that the implemented levels of noise and hearing attenuation resulted in comparable communication difficulties. Background noise at 70 dBA and hearing attenuation both led to an initial increase in pupil size (associated with effort), but only the effect of the background noise was sustained throughout the conversation. The 25-dB hearing attenuation led to a significant decrease in the high-frequency power of heart rate variability and a significant increase in skin conductance level, measured as the average z value of the electrodermal activity amplitude. CONCLUSION: This study demonstrated that several physiological measures appear to be viable indicators of changing communication conditions, with pupillometry as well as cardiovascular and electrodermal measures potentially serving as markers of communication difficulty.
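The skin conductance measure above ("the average z value of the electrodermal activity amplitude") admits a very small sketch. Normalizing against a participant's own baseline recording is one plausible reading of that description, not the study's documented pipeline:

```python
import numpy as np

def scl_z(eda, baseline):
    """Average z value of an EDA amplitude trace, standardized against a
    baseline recording's mean and standard deviation.  This per-participant
    baseline normalization is an assumption, not the paper's exact method."""
    return float(((eda - baseline.mean()) / baseline.std()).mean())
```

For example, a condition whose EDA amplitudes sit above the baseline mean yields a positive average z, while scoring the baseline against itself yields exactly zero.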


Subject(s)
Hearing Loss, Conductive; Speech Perception; Humans; Aged; Middle Aged; Noise; Hearing/physiology; Communication; Hearing Tests; Speech Perception/physiology
3.
Front Neurosci; 16: 873201, 2022.
Article in English | MEDLINE | ID: mdl-35844213

ABSTRACT

This study details and evaluates a method for estimating the attended speaker during a two-person conversation by means of in-ear electro-oculography (EOG). Twenty-five hearing-impaired participants were fitted with ear molds equipped with EOG electrodes (in-ear EOG) and wore eye-tracking glasses while watching a video of two life-size people in a dialogue solving a Diapix task. The dialogue was presented directionally, together with background noise in the frontal hemisphere, at 60 dB SPL. During three steering conditions (none, in-ear EOG, conventional eye-tracking), participants' comprehension was measured periodically using multiple-choice questions. Based on eye movement detection by in-ear EOG or conventional eye-tracking, the estimated attended speaker was amplified by 6 dB. In the in-ear EOG condition, the estimate was based on one channel pair selected from 36 possible electrodes. A novel calibration procedure introducing three different metrics was used to select the measurement channel. The in-ear EOG attended-speaker estimates were compared to those of the eye-tracker. Across participants, the mean accuracy of in-ear EOG estimation of the attended speaker was 68%, ranging from 50% to 89%. Based on offline simulation, it was established that higher scoring metrics obtained for a channel with the calibration procedure were significantly associated with better data quality. Results showed a statistically significant improvement in comprehension of about 10% in both steering conditions relative to the no-steering condition. Comprehension in the two steering conditions was not significantly different. Further, better comprehension under the in-ear EOG condition was significantly correlated with more accurate estimation of the attended speaker. In conclusion, this study shows promising results for the use of in-ear EOG for visual attention estimation, with potential applicability in hearing assistive devices.
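The calibration step above, selecting one measurement channel out of many candidate electrode pairs, can be sketched as follows. Scoring each channel by the absolute correlation of its signal with a reference eye-tracker gaze trace is a stand-in metric; the study used three custom calibration metrics that are not reproduced here, and all simulated signal parameters are hypothetical:

```python
import numpy as np

def select_channel(eog_channels, reference_gaze):
    """Score each candidate electrode channel by the absolute correlation of
    its signal with a reference gaze trace recorded during calibration, and
    return the index of the best-scoring channel plus all scores."""
    scores = [abs(np.corrcoef(ch, reference_gaze)[0, 1]) for ch in eog_channels]
    return int(np.argmax(scores)), scores

# Simulated calibration data: channel 2 carries a scaled copy of the gaze
# signal plus noise; the other channels are pure noise.
rng = np.random.default_rng(0)
gaze = np.repeat(rng.choice([-30.0, 0.0, 30.0], size=40), 25)  # gaze angle, deg
channels = rng.normal(0.0, 1.0, size=(5, gaze.size))
channels[2] += 0.05 * gaze

best, scores = select_channel(channels, gaze)
```

The procedure picks the channel that actually encodes gaze, which mirrors the paper's finding that higher calibration scores were associated with better data quality.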

4.
Front Digit Health; 3: 724714, 2021.
Article in English | MEDLINE | ID: mdl-34713193

ABSTRACT

Introduction: By adding more sensor technology, modern hearing aids (HAs) strive to become better, more personalized, self-adaptive devices that can handle environmental changes and cope with the day-to-day fitness of their users. The latest HA technology on the market already combines sound analysis with accelerometer-based motion activity classification to adjust settings. While there is much research on activity tracking with accelerometers in sports applications and consumer electronics, there is not yet much in hearing research. Objective: This study investigates the feasibility of activity tracking with ear-level accelerometers and how it compares to waist-mounted accelerometers, a more common measurement location. Method: The activity classification methods in this study are based on supervised learning. The experimental setup consisted of 21 subjects, equipped with two Xsens MTw Awinda sensors at ear level and one at waist level, performing nine different activities. Results: The highest accuracy on our experimental data was obtained with the combination of bagging and classification tree techniques. The total accuracy over all activities and users was 84% (ear level), 90% (waist level), and 91% (ear level + waist level). Most prominently, the classes standing, jogging, lying (on one side), lying (face down), and walking all had an accuracy above 90%. Furthermore, estimated ear-level step-detection accuracy was 95% in walking and 90% in jogging. Conclusion: Several activities can be classified using ear-level accelerometers with an accuracy on par with waist level, and step-detection accuracy is comparable to that of a high-performance wrist device. These findings are encouraging for the development of activity applications in hearing healthcare.
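The best-performing classifier above, bagging over classification trees, is a standard supervised-learning combination and can be sketched directly with scikit-learn. The synthetic three-feature "accelerometer" data below (e.g., per-window summary statistics) and its class means are hypothetical, not the study's features:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 600  # windows per activity
# Hypothetical windowed accelerometer features for three of the activities.
X_walk = rng.normal([0.2, 1.5, 0.8], 0.3, size=(n, 3))
X_stand = rng.normal([0.0, 0.1, 0.1], 0.3, size=(n, 3))
X_jog = rng.normal([0.5, 3.0, 2.0], 0.3, size=(n, 3))
X = np.vstack([X_walk, X_stand, X_jog])
y = np.repeat(["walking", "standing", "jogging"], n)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
# Bagging: an ensemble of classification trees, each fit on a bootstrap sample.
clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                        random_state=0).fit(Xtr, ytr)
acc = clf.score(Xte, yte)
```

On well-separated classes like these, the ensemble reaches accuracies in the same range as the per-class figures reported above; the real task, with nine activities and raw sensor streams, is of course harder.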

5.
Front Neurosci; 13: 1294, 2019.
Article in English | MEDLINE | ID: mdl-31920477

ABSTRACT

People with hearing impairment typically have difficulties following conversations in multi-talker situations. Previous studies have shown that utilizing eye gaze to steer audio through beamformers could be a solution in those situations. Recent studies have shown that in-ear electrodes that capture electrooculography in the ear (EarEOG) can estimate eye-gaze relative to the head when the head is fixed. Head movement can be estimated using motion sensors around the ear, allowing an estimate of the absolute eye-gaze in the room. In this study, an experiment was designed to mimic a multi-talker situation in order to study and model the EarEOG signal while participants attempted to follow a conversation. Eleven hearing-impaired participants were presented with speech from the DAT speech corpus (Bo Nielsen et al., 2014), with three targets positioned at -30°, 0°, and +30° azimuth. The experiment was run in two setups: one in which the participants had their head fixed in a chinrest, and one in which they were free to move their head. The participants' task was to focus their visual attention on an LED-indicated target that changed regularly. A model was developed for the relative eye-gaze estimation, taking saccades, fixations, head movement, and drift from the electrode-skin half-cell potential into account. This model explained 90.5% of the variance of the EarEOG when the head was fixed, and 82.6% when the head was free. The absolute eye-gaze was also estimated using that model. When the head was fixed, the estimation of the absolute eye-gaze was reliable. However, due to hardware issues, the estimation of the absolute eye-gaze when the head was free had too large a variance to reliably estimate the attended target. Overall, this study demonstrated the potential of estimating absolute eye-gaze using EarEOG and motion sensors around the ear.
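The variance-explained figures above come from modeling the EarEOG signal as a function of gaze plus slow electrode drift. A minimal sketch of that idea fits the EarEOG trace with ordinary least squares on a gaze term and a linear drift term; the signal scaling, drift rate, and noise level below are hypothetical, and the study's full model (saccades, fixations, head movement) is richer than this:

```python
import numpy as np

rng = np.random.default_rng(2)
fs, dur = 100, 60  # sampling rate (Hz) and duration (s), both assumed
t = np.arange(fs * dur) / fs

# Hypothetical step-like gaze angle switching between the -30/0/+30 deg targets.
gaze = np.repeat(rng.choice([-30.0, 0.0, 30.0], size=dur), fs)
# Simulated EarEOG: scaled gaze + slow electrode-skin drift + measurement noise.
eog = 0.8 * gaze + 0.5 * t + rng.normal(0.0, 2.0, t.size)

# Jointly fit a gaze gain, a linear drift slope, and an offset (OLS).
A = np.column_stack([gaze, t, np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, eog, rcond=None)
pred = A @ coef
r2 = 1.0 - np.var(eog - pred) / np.var(eog)  # fraction of EarEOG variance explained
```

With the drift term included, the fit recovers the gaze gain and explains most of the simulated EarEOG variance, analogous to the 90.5% figure reported for the head-fixed condition.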

6.
Am J Audiol; 27(3S): 403-416, 2018 Nov 19.
Article in English | MEDLINE | ID: mdl-30452745

ABSTRACT

PURPOSE: The successful design and innovation of eHealth solutions directly involves end users in the process to seek a better understanding of their needs. This article presents user-innovated eHealth solutions targeting older persons with hearing impairment. Our research question was: What are the key users' needs, expectations, and visions within future hearing rehabilitation service delivery? METHOD: We applied a participatory design approach to facilitate the design of future eHealth solutions via focus groups. We involved older persons with hearing impairment (n = 36), significant others (n = 10), and audiologists (n = 8), following 2 methods: (a) human-centered design for interactive systems and (b) user innovation management. Through 3 rounds of focus groups, we facilitated a process progressing from insights and visions for requirements (Phase 1), to paper-version app wireframes (Phase 2), and to digital prototypes envisioning future eHealth solutions (Phase 3). Each focus group was video-recorded and photographed, resulting in a rich data set that was analyzed through inductive thematic analysis. RESULTS: The results are presented via (a) a storyboard envisioning future client journeys, (b) 3 key themes for future eHealth solutions, (c) 4 levels of interest and willingness to invest time and effort in digital solutions, and (d) 2 technical-savviness types and their different preferences for rehabilitation strategies. CONCLUSIONS: Future eHealth solutions must offer personalized rehabilitation strategies that are appropriate for each person with hearing impairment and their level of technical savviness. Thus, a central requirement is anchoring digital support in the clients' everyday life situations by facilitating easy access to personalized information, communication, and learning milieus. Moreover, the participants' visions for eHealth solutions call for providing both traditional analogue and digital services.
SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.7310729.


Subject(s)
Audiology/methods; Delivery of Health Care/methods; Hearing Loss/rehabilitation; Stakeholder Participation; Telemedicine/methods; Aged; Aged, 80 and over; Audiologists; Community-Based Participatory Research; Female; Focus Groups; Humans; Male; Middle Aged; Organizational Innovation; Qualitative Research