1.
J Clin Med ; 13(11)2024 May 27.
Article in English | MEDLINE | ID: mdl-38892853

ABSTRACT

Background: This study investigated how different hearing profiles influenced melodic contour identification (MCI) in a real-world concert setting with a live band including drums, bass, and a lead instrument. We aimed to determine the impact of various auditory assistive technologies on music perception in an ecologically valid environment. Methods: The study involved 43 participants with varying hearing capabilities: normal hearing, bilateral hearing aids, bimodal hearing, single-sided cochlear implants, and bilateral cochlear implants. Participants were exposed to melodies played on a piano or accordion, with and without an electric bass as a masker, accompanied by a basic drum rhythm. Bayesian logistic mixed-effects models were used to analyze the data. Results: Introducing an electric bass as a masker did not significantly affect MCI performance for any hearing group when melodies were played on the piano, in contrast to its effect on accordion melodies and to findings from previous studies. Greater challenges were observed with accordion melodies, especially when accompanied by an electric bass. Conclusions: MCI performance among hearing aid users was comparable to that of other hearing-impaired profiles, challenging the hypothesis that they would outperform cochlear implant users. A set of short melodies inspired by Western music styles was developed for future contour identification tasks.
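The logistic mixed-effects structure described in the Methods can be sketched by forward simulation. The coefficients, the random-intercept spread, and the helper names below are illustrative assumptions rather than the paper's estimates, and no Bayesian posterior inference is performed here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical fixed effects on the log-odds scale: intercept, accordion
# penalty, bass-masker penalty, and their interaction (illustrative values).
BETA = {"intercept": 1.0, "accordion": -0.6, "bass": -0.1, "accordion_x_bass": -0.5}

def p_correct(accordion, bass, participant_offset=0.0):
    """Probability of a correct MCI response under the sketched model."""
    eta = (BETA["intercept"]
           + BETA["accordion"] * accordion
           + BETA["bass"] * bass
           + BETA["accordion_x_bass"] * accordion * bass
           + participant_offset)  # per-participant random intercept
    return sigmoid(eta)

# Simulate 43 participants with Gaussian random intercepts (sd = 0.8).
offsets = rng.normal(0.0, 0.8, size=43)
piano_bass = np.mean([p_correct(0, 1, u) for u in offsets])
accordion_bass = np.mean([p_correct(1, 1, u) for u in offsets])
```

In this parameterization the bass masker barely moves piano performance but compounds the accordion penalty, mirroring the pattern reported in the Results.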

2.
Cochlear Implants Int ; : 1-7, 2024 May 14.
Article in English | MEDLINE | ID: mdl-38745418

ABSTRACT

OBJECTIVES: To study the level of social well-being of children with hearing loss (HL) using self-completed questionnaires. METHODS: The sample comprised 22 children representing a new group of children with HL. This new group is defined by HL detected through neonatal hearing screening, fitting with hearing technology before 6 months of age when relevant, bilateral cochlear implantation before one year of age, and subsequent educational training using the auditory-verbal practice. The age range was 9 to 12 years. Two self-completed questionnaires were used: the California Bullying Victimisation Scale (CBVS) and the Strengths and Difficulties Questionnaire (SDQ). The project design was a prospective case series. RESULTS: Self-completed assessments revealed levels of social well-being on both questionnaires comparable to populations with normal hearing. CBVS results showed that 52.6% reported being 'not a victim', 36.8% peer victims, and 10.5% bully victims. SDQ results revealed that 94.7% of the children scored within the normal range on both social strengths and difficulties, 5.3% scored slightly raised/lowered, and 0% had high/low or very high/low scores. CONCLUSION: The new group of children with HL presented self-completed scores comparable to peers with normal hearing. It is time to raise expectations for children with HL, not only for outcomes in audition and spoken language but, most importantly, for levels of social well-being. It is further discussed whether this new group can also be defined as a new generation of children with HL.

3.
Int J Pediatr Otorhinolaryngol ; 176: 111825, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38128354

ABSTRACT

The study investigated how the considerable body of knowledge generated through basic research on multisensory experiences can be brought into clinical paediatric audiology, with a specific focus on enhancing understanding of the neurological implications of childhood hearing loss. OBJECTIVES: The overall aim of the project was to investigate how emerging technologies can be used to enhance understanding of the neurological impact of paediatric hearing loss. The specific objectives were to develop an app and to evaluate its ease of use and the understanding of neurology it afforded all types of stakeholders and end-users. METHODS: A collaborative, participatory, human-centred research design was used. This methodological approach brought stakeholders into the design process at an early stage, and workshops mapped the content and interaction of the iterative development of the app. Nine clinicians from the Copenhagen Hearing and Balance Centre and 4 media technologists from the Multisensory Experience Lab participated in the development of the app prototype. Evaluations were made using questionnaires completed by stakeholders and end-users, and focus group interviews. Eight parents of children with hearing loss, 13 internal stakeholders, and 14 external stakeholders participated in the evaluation of the app. RESULTS: The app was evaluated positively overall. End-users/parents of children with hearing loss were slightly more positive than stakeholders/professionals in audiology. CONCLUSIONS: Apps are a promising medium for providing health-care information, and it proved both relevant and applicable to use apps to provide complex information such as the neurological implications of childhood hearing loss.


Subject(s)
Deafness; Hearing Loss; Humans; Child; Digital Technology; Hearing Loss/diagnosis; Hearing; Focus Groups
4.
J Acoust Soc Am ; 149(5): 3502, 2021 May.
Article in English | MEDLINE | ID: mdl-34241147

ABSTRACT

Collision modelling represents an active field of research in musical acoustics. Common examples of collisions include the hammer-string interaction in the piano, the interaction of strings with fretboards and fingers, the membrane-wire interaction in the snare drum, reed-beating effects in wind instruments, and others. At the modelling level, many current approaches make use of conservative potentials in the form of power-laws, and discretisations proposed for such models rely in all cases on iterative root-finding routines. Here, a method based on energy quadratisation of the nonlinear collision potential is proposed. It is shown that there exists a suitable discretisation of such a model that may be resolved in a single iteration, while guaranteeing stability via energy conservation. Applications to the case of lumped as well as fully distributed systems will be given, using both finite-difference and modal methods.
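The quadratisation step can be sketched for the power-law potentials mentioned above; the notation below (stiffness K, exponent α, compression η) is assumed for illustration rather than taken from the paper.

```latex
% Power-law collision potential, with [x]_+ = max(x, 0):
\phi(\eta) = \frac{K}{\alpha + 1}\,[\eta]_+^{\alpha + 1}, \qquad K, \alpha > 0
% Auxiliary quadratised variable, making the stored energy exactly quadratic:
\psi = \sqrt{2\,\phi(\eta)}, \qquad \phi(\eta) = \tfrac{1}{2}\,\psi^2
% Chain rule: the auxiliary variable evolves linearly in the state,
% which is what permits a single-solve, energy-conserving discretisation:
\dot{\psi} = g(\eta)\,\dot{\eta}, \qquad
g(\eta) = \frac{\phi'(\eta)}{\sqrt{2\,\phi(\eta)}}
```

Because ψ enters the energy quadratically, a midpoint-type discretisation of these relations yields an update that is linear in the unknown, avoiding the iterative root-finding routines used by earlier schemes.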

5.
IEEE Trans Vis Comput Graph ; 26(5): 1912-1922, 2020 05.
Article in English | MEDLINE | ID: mdl-32070968

ABSTRACT

Directivity and gain in microphone array systems for hearing aids or hearable devices allow users to acoustically enhance a source of interest, usually positioned directly in front of them. This feature is called acoustic beamforming. The current study aimed to improve users' interactions with beamforming via a virtual prototyping approach in immersive virtual environments (VEs). Eighteen participants took part in experimental sessions composed of a calibration procedure and a selective auditory attention voice-pairing task. Eight concurrent speakers were placed in an anechoic environment in two virtual reality (VR) scenarios: a purely virtual scenario and a realistic 360° audio-visual recording. Participants were asked to find an individual optimal parameterization for three different virtual beamformers: (i) head-guided, (ii) eye-gaze-guided, and (iii) a novel interaction technique called the dual beamformer, where the head-guided beamformer is combined with an additional hand-guided one. None of the participants were able to complete the task without a virtual beamformer (i.e., in the normal-hearing condition) due to the high complexity introduced by the experimental design. However, participants were able to correctly pair all speakers using all three proposed interaction metaphors. Providing superhuman hearing abilities in the form of a dual acoustic beamformer guided by head and hand movements resulted in statistically significant improvements in pairing time, suggesting the task relevance of interacting with multiple points of interest.


Subject(s)
Acoustics/instrumentation; Hearing Aids; Hearing/physiology; Signal Processing, Computer-Assisted/instrumentation; Virtual Reality; Acoustic Stimulation; Adult; Auditory Perception/physiology; Equipment Design; Female; Humans; Male; Young Adult
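The acoustic beamforming the study above builds on can be sketched as a plain delay-and-sum beamformer. The array geometry, sample rate, and function names below are illustrative assumptions; a real hearing-device implementation would use fractional delays and adaptive weighting rather than this integer-sample approximation.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
FS = 16_000             # sample rate in Hz (illustrative)

def delay_and_sum(signals, mic_positions, steer_direction):
    """Steer a microphone array toward a far-field source and sum coherently.

    signals: (n_mics, n_samples) array; mic_positions: (n_mics, 3) in metres;
    steer_direction: vector pointing toward the source of interest.
    """
    d = np.asarray(steer_direction, dtype=float)
    d = d / np.linalg.norm(d)
    # Plane-wave time-of-arrival difference at each microphone.
    delays = mic_positions @ d / SPEED_OF_SOUND  # seconds
    delays -= delays.min()                       # make all delays non-negative
    shifts = np.round(delays * FS).astype(int)   # integer-sample approximation
    n = signals.shape[1]
    out = np.zeros(n)
    for sig, s in zip(signals, shifts):
        out[s:] += sig[:n - s]                   # align, then average below
    return out / len(signals)
```

Signals arriving from the steered direction add coherently, while off-axis sources are attenuated by destructive summation, which is the directivity gain the abstract refers to.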
6.
IEEE Trans Vis Comput Graph ; 25(5): 1876-1886, 2019 05.
Article in English | MEDLINE | ID: mdl-30794514

ABSTRACT

Being able to hear objects in an environment, for example using echolocation, is a challenging task. The main goal of the current work is to use virtual environments (VEs) to train novice users to navigate using echolocation. Previous studies have shown that musicians are able to differentiate sound pulses from reflections. This paper presents design patterns for VE simulators for both training and testing procedures, while classifying users' navigation strategies in the VE. Moreover, the paper presents features that increase users' performance in VEs. We report the findings of two user studies: a pilot test that helped improve the sonic interaction design, and a primary study exposing participants to a spatial orientation task under four conditions: early reflections (RF), late reverberation (RV), early reflections plus reverberation (RR), and visual stimuli (V). The latter study allowed us to identify navigation strategies among the users. Some users (10/26) reported an ability to create spatial cognitive maps during the test with auditory echoes, which may explain why this group performed better than the remaining participants in the RR condition.


Subject(s)
Feedback, Sensory/physiology; Virtual Reality; Acoustic Stimulation; Animals; Auditory Perception/physiology; Chiroptera/physiology; Computer Graphics; Echolocation/physiology; Female; Humans; Male; Orientation, Spatial; Space Perception; User-Computer Interface
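The early-reflection cue central to the study above can be sketched minimally: a direct pulse plus one delayed, attenuated copy whose delay encodes the distance to a reflecting surface. The function names, the reflection gain, and the single-reflection simplification are illustrative assumptions; the study's simulator modelled reflections and reverberation in far more detail.

```python
SPEED_OF_SOUND = 343.0  # m/s

def echo_delay_seconds(distance_m):
    """Round-trip delay of a reflection from a surface distance_m away."""
    return 2.0 * distance_m / SPEED_OF_SOUND

def pulse_with_echo(pulse, distance_m, fs, reflection_gain=0.5):
    """Mix a direct pulse with one delayed, attenuated early reflection."""
    delay = round(echo_delay_seconds(distance_m) * fs)
    out = [0.0] * (len(pulse) + delay)
    for i, x in enumerate(pulse):
        out[i] += x                            # direct sound
        out[i + delay] += reflection_gain * x  # early reflection
    return out
```

A surface 3.43 m away yields a 20 ms round-trip delay, the kind of pulse-echo gap the cited musicians could reportedly resolve.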
7.
IEEE Comput Graph Appl ; 38(2): 31-43, 2018 03.
Article in English | MEDLINE | ID: mdl-29672254

ABSTRACT

A high-fidelity but efficient sound simulation is an essential element of any VR experience. Many of the techniques used in virtual acoustics are graphical rendering techniques suitably modified to account for sound generation and propagation. In recent years, several advances in hardware and software technologies have facilitated the development of immersive interactive sound-rendering experiences. In this article, we present a review of the state of the art of such simulations, with a focus on the different elements that, combined, provide a complete interactive sonic experience. This includes physics-based simulation of sound effects and their propagation in space, together with binaural rendering to simulate the position of sound sources. We present how these different elements of the sound design pipeline have been addressed in the literature, seeking the trade-off between accuracy and plausibility. Recent applications and current challenges are also presented.

8.
IEEE Comput Graph Appl ; 38(2): 44-56, 2018 03.
Article in English | MEDLINE | ID: mdl-29672255

ABSTRACT

Virtual reality users wearing head-mounted displays can experience the illusion of walking in any direction for infinite distance while, in reality, walking a curvilinear path in physical space. This is accomplished by introducing unnoticeable rotations to the virtual environment, a technique called redirected walking. This paper gives an overview of the research performed since redirected walking was first practically demonstrated 15 years ago.
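The unnoticeable rotations can be sketched as two standard ingredients: a rotation gain that amplifies the user's own head turns, and a curvature gain that bends a straight virtual path into a physical arc. The function signature and the gain values in the note below are illustrative assumptions, not thresholds from the paper.

```python
import math

def redirect_yaw(user_yaw_delta, curvature_gain_rad_per_m, distance_walked_m,
                 rotation_gain=1.0):
    """Extra yaw (radians) to apply to the virtual scene this frame.

    user_yaw_delta: the user's own head-yaw change this frame (radians);
    curvature_gain_rad_per_m: injected rotation per metre walked;
    rotation_gain: multiplier on the user's own turns (1.0 = no change).
    """
    amplified_turn = (rotation_gain - 1.0) * user_yaw_delta
    curvature_turn = curvature_gain_rad_per_m * distance_walked_m
    return amplified_turn + curvature_turn
```

With unit rotation gain and zero curvature the scene is untouched; a curvature gain of 1/r radians per metre steers the user around a physical circle of radius r while they perceive themselves walking straight.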

9.
Front Neurol ; 7: 1, 2016.
Article in English | MEDLINE | ID: mdl-26834696

ABSTRACT

In this review article, we summarize systems for gait rehabilitation based on instrumented footwear and present the context of their use in auditory and haptic rehabilitation for Parkinson's disease (PD) patients. We focus on the needs of PD patients, but since only a few systems were built for this purpose, we also review applications used in other scenarios involving gait detection and rehabilitation. We present the evolution of the designs, possible improvements, and software challenges and requirements. We conclude that building successful systems for PD patients' gait rehabilitation requires combining technological solutions from several studies with knowledge of auditory and haptic cueing.

10.
IEEE Trans Vis Comput Graph ; 20(4): 569-78, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24650984

ABSTRACT

Walking-In-Place (WIP) techniques facilitate relatively natural locomotion within immersive virtual environments that are larger than the physical interaction space. However, in order to facilitate natural walking experiences one needs to know how to map steps in place to virtual motion. This paper describes two within-subjects studies performed with the intention of establishing the range of perceptually natural walking speeds for WIP locomotion. In both studies, subjects performed a series of virtual walks while exposed to visual gains (optic flow multipliers) ranging from 1.0 to 3.0. Thus, the slowest speed was equal to an estimate of the subjects' normal walking speed, while the highest speed was three times greater. The perceived naturalness of the visual speed was assessed using self-reports. The first study compared four different types of movement, namely, no leg movement, walking on a treadmill, and two forms of gestural input for WIP locomotion. The results suggest that WIP locomotion is accompanied by a perceptual distortion of the speed of optic flow. The second study used a 4×2 factorial design and compared four different display fields of view (FOVs) and two types of movement, walking on a treadmill and WIP locomotion. The results revealed significant main effects of both movement type and field of view, but no significant interaction between the two variables. In particular, they suggest that the size of the display FOV is inversely proportional to the degree of underestimation of the virtual speeds for both treadmill-mediated virtual walking and WIP locomotion. Combined, the results constitute a first attempt at establishing guidelines specifying what virtual walking speeds WIP gestures should produce in order to facilitate a natural walking experience.


Subject(s)
Computer Graphics; Gait/physiology; Motion Perception/physiology; Task Performance and Analysis; User-Computer Interface; Walking/physiology; Adult; Aged; Female; Humans; Male; Middle Aged; Physical Exertion/physiology; Reproducibility of Results; Sensitivity and Specificity; Young Adult
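The mapping under study, from steps in place to virtual motion scaled by an optic-flow gain, can be sketched in a few lines. The cadence and step-length values below are illustrative assumptions, while the 1.0-3.0 gain range matches the one tested in the paper.

```python
def virtual_speed(step_frequency_hz, step_length_m, visual_gain):
    """Virtual locomotion speed (m/s) for a walking-in-place gesture.

    step_frequency_hz * step_length_m estimates the speed that real walking
    at this cadence would produce; visual_gain scales the rendered optic flow
    (1.0 = estimated natural speed, 3.0 = three times faster).
    """
    return step_frequency_hz * step_length_m * visual_gain
```

The paper's results suggest the gain perceived as natural sits above 1.0 and shrinks as the display FOV grows, so in practice visual_gain would be chosen per display rather than fixed.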
11.
IEEE Trans Haptics ; 6(1): 35-45, 2013.
Article in English | MEDLINE | ID: mdl-24808266

ABSTRACT

In this paper, we describe several experiments whose goal is to evaluate the role of plantar vibrotactile feedback in enhancing the realism of walking experiences in multimodal virtual environments. To achieve this goal we built an interactive and a noninteractive multimodal feedback system. With the interactive system subjects physically walked, whereas with the noninteractive system locomotion was simulated while subjects sat on a chair. In both configurations subjects were exposed to auditory and audio-visual stimuli presented with and without haptic feedback. Results of the experiments show a clear preference for the simulations enhanced with haptic feedback, indicating that the haptic channel can lead to more realistic experiences in both interactive and noninteractive configurations. The majority of subjects clearly appreciated the added feedback, although some found it unpleasant. This might be due, on the one hand, to the limits of the haptic simulation and, on the other, to individual differences in the desire to be involved in the simulations. Our findings can be applied to physical navigation in multimodal virtual environments as well as to enhancing the experience of watching a movie or playing a video game.


Subject(s)
Feedback, Sensory/physiology; Touch Perception/physiology; User-Computer Interface; Walking/physiology; Adult; Computer Simulation; Female; Humans; Male; Young Adult
12.
IEEE Trans Vis Comput Graph ; 17(9): 1234-44, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21737860

ABSTRACT

We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones that detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. This GRF is used to control a sound synthesis engine based on physical models. Two experiments were conducted. In the first experiment, we assessed the ability of subjects to recognize the surface they were exposed to. In the second experiment, the sound synthesis engine was enhanced with environmental sounds. Results show that, in some conditions, adding a soundscape significantly improves recognition of the simulated environment.


Subject(s)
Foot/physiology; Sound; User-Computer Interface; Video Games; Humans
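One hedged way to sketch the GRF estimation step above is an attack/release envelope follower on the rectified microphone signal, whose output can then drive the amplitude of a physical-model footstep synthesiser. The time constants and function name are illustrative assumptions, not the paper's method in detail.

```python
import numpy as np

def grf_envelope(mic_signal, fs, attack_ms=5.0, release_ms=50.0):
    """GRF-like control envelope from a footstep microphone signal.

    One-pole attack/release follower on the rectified signal: the fast
    attack tracks the heel strike, the slow release smooths the roll-off.
    """
    rectified = np.abs(np.asarray(mic_signal, dtype=float))
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.zeros_like(rectified)
    prev = 0.0
    for i, x in enumerate(rectified):
        coeff = a_att if x > prev else a_rel  # attack when rising, else release
        prev = coeff * prev + (1.0 - coeff) * x
        env[i] = prev
    return env
```

The resulting envelope rises sharply at each impact and decays between steps, giving a smooth control signal suitable for driving the excitation of a physics-based model.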