Results 1 - 6 of 6
1.
Trends Hear ; 27: 23312165231188619, 2023.
Article in English | MEDLINE | ID: mdl-37475460

ABSTRACT

Speech intelligibility in cocktail party situations has traditionally been studied with stationary sound sources and stationary participants. Here, speech intelligibility and behavior were investigated during active self-rotation of standing participants in a spatialized speech test. We investigated whether people would rotate to improve speech intelligibility and whether knowing the target location would be further beneficial. Target sentences appeared randomly at one of four possible locations: 0°, ±90°, or 180° relative to the participant's initial orientation on each trial, while speech-shaped noise was presented from the front (0°). Participants responded naturally with self-rotating motion. Target sentences were presented either without (Audio-only) or with a picture of an avatar (Audio-Visual). In a baseline (Static) condition, participants stood still without visual location cues. Participants' self-orientation undershot the target location, and orientations were close to acoustically optimal. Participants oriented more often in an acoustically optimal way, and speech intelligibility was higher, in the Audio-Visual than in the Audio-only condition for the lateral targets. The intelligibility of individual words in the Audio-Visual and Audio-only conditions increased during self-rotation towards the rear target but was reduced for the lateral targets compared to Static, which could be mostly, but not fully, attributed to changes in spatial unmasking. A speech intelligibility prediction based on a model of static spatial unmasking accounting for self-rotations overestimated participants' performance by 1.4 dB. The results suggest that speech intelligibility is reduced during self-rotation and that visual cues of location help to achieve more optimal self-rotations and better speech intelligibility.


Subject(s)
Sound Localization, Speech Perception, Humans, Speech Intelligibility, Noise, Sound
2.
J Acoust Soc Am ; 150(5): 3593, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34852598

ABSTRACT

This study describes data on auditory-visual integration and visually guided adaptation of auditory distance perception using the ventriloquism effect (VE) and ventriloquism aftereffect (VAE). In an experiment, participants judged the egocentric distance of interleaved auditory or auditory-visual stimuli whose auditory component was located 0.7 to 2.04 m in front of listeners in a real reverberant environment. The visual component of the auditory-visual stimuli was displaced 30% closer (V-closer), 30% farther (V-farther), or aligned (V-aligned) with respect to the auditory component. The VE and VAE were measured in auditory-visual and auditory trials, respectively. Both effects were approximately independent of target distance when expressed in logarithmic units. The VE strength, defined as the difference between the V-misaligned and V-aligned response biases, was approximately 72% of the auditory-visual disparity regardless of the visual-displacement direction, while the VAE was stronger in the V-farther (44%) than in the V-closer (31%) condition. The VAE persisted into post-adaptation auditory-only blocks of trials, although diminished. The build-up and break-down rates of the VAE were asymmetrical, with slower adaptation in the V-closer condition. These results suggest that auditory-visual distance integration is independent of the direction of the induced shift, while recalibration is stronger and faster when evoked by more distant visual stimuli.
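The VE-strength definition used in this abstract can be illustrated with a short numerical sketch. The helper function, example distances, and response values below are hypothetical, not the paper's analysis code: biases and disparities are computed in logarithmic distance units, and the strength is the bias shift expressed as a fraction of the audio-visual disparity.

```python
import math

def ve_strength_log(aud_m, vis_m, resp_misaligned_m, resp_aligned_m):
    """Ventriloquism-effect strength as a fraction of the audio-visual
    disparity, computed in logarithmic distance units (hypothetical helper
    illustrating the definition, not the paper's analysis code)."""
    # Audio-visual disparity in log units (negative = visual closer).
    disparity = math.log(vis_m) - math.log(aud_m)
    # Change in response bias between misaligned and aligned trials.
    bias_shift = math.log(resp_misaligned_m) - math.log(resp_aligned_m)
    return bias_shift / disparity

# Hypothetical example: auditory target at 1.0 m, visual component 30%
# closer (0.7 m); mean responses shift from 1.0 m (aligned trials) to
# 0.78 m (misaligned trials), giving a strength of roughly 0.7 (70%),
# comparable to the ~72% reported in the abstract.
strength = ve_strength_log(1.0, 0.7, 0.78, 1.0)
```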


Subject(s)
Distance Perception, Sound Localization, Acoustic Stimulation, Auditory Perception, Humans, Photic Stimulation, Visual Perception
3.
Trends Hear ; 23: 2331216519876795, 2019.
Article in English | MEDLINE | ID: mdl-31547776

ABSTRACT

Superdirectional acoustic beamforming technology provides a high signal-to-noise ratio, but the potential speech intelligibility benefit to hearing aid users is limited by the way users move their heads. Steering the beamformer with eye gaze instead of head orientation could mitigate this problem. This study investigated the intelligibility of target speech with a dynamically changing direction when heard through gaze-controlled (GAZE) or head-controlled (HEAD) superdirectional simulated beamformers. The beamformer provided frequency-independent noise attenuation of either 8 dB (WIDE, moderately directional) or 12 dB (NARROW, highly directional) relative to no beamformer, referred to as the OMNI (omni-directional) condition. Before the main experiment, signal-to-noise ratios were normalized for each participant and each beam-width condition to yield an equal percentage of correct responses in a reference condition; results are therefore presented as normalized speech intelligibility (NSI). In an ongoing presentation, the participants (n = 17), with varying degrees of hearing loss, heard single-word targets every 1.5 s from either the left (-30°) or the right (+30°), presented in continuous, spatially distributed, speech-shaped noise. When the target was static, NSI was better in the GAZE than in the HEAD condition, but only when the beam was NARROW. When the target switched location without warning, NSI performance dropped. In this case, the WIDE HEAD condition provided the best average NSI, because some participants tended to orient their head between the targets, allowing them to hear out the target regardless of its location. The difference in NSI between the GAZE and HEAD conditions for individual participants was related to the observed head-orientation strategy, which varied widely across participants.
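The WIDE/NARROW beamformer conditions can be sketched as a toy gain function. The abstract specifies only frequency-independent 8 dB or 12 dB attenuation; the gain function, beam half-width, and wrap-around handling below are illustrative assumptions, not the study's actual beamformer.

```python
def beam_gain_db(source_az_deg, steer_az_deg, attenuation_db, halfwidth_deg):
    """Toy frequency-independent beamformer: a source inside the beam
    passes unattenuated; a source outside it is attenuated by a fixed
    amount (hypothetical half-width; not the study's implementation)."""
    # Wrap the angular separation into [0, 180] degrees.
    sep = abs((source_az_deg - steer_az_deg + 180.0) % 360.0 - 180.0)
    return 0.0 if sep <= halfwidth_deg else -attenuation_db

# If gaze steers the beam to the left target (-30 deg) but the head keeps
# pointing straight ahead, a NARROW (12 dB) beam with an assumed 20-degree
# half-width passes the target only when gaze-steered:
gaze_gain = beam_gain_db(-30.0, -30.0, 12.0, 20.0)  # target inside beam
head_gain = beam_gain_db(-30.0, 0.0, 12.0, 20.0)    # target outside beam
```

This also illustrates why some participants did well in the WIDE HEAD condition: a head orientation between the two targets keeps both within a wide enough beam.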


Subject(s)
Acoustics/instrumentation, Hearing Aids/standards, Speech Perception, Adult, Hearing, Hearing Loss, Humans, Signal-To-Noise Ratio, Speech Intelligibility
4.
PLoS One ; 13(1): e0190420, 2018.
Article in English | MEDLINE | ID: mdl-29304120

ABSTRACT

The manuscript proposes and evaluates a real-time algorithm for estimating eye gaze angle based solely on single-channel electrooculography (EOG), which can be obtained directly from the ear canal using conductive ear moulds. In contrast to conventional high-pass filtering, the algorithm calculates the absolute eye gaze angle via statistical analysis of detected saccades. The eye positions estimated by the new algorithm were still noisy, but its performance in terms of Pearson product-moment correlation coefficients was significantly better than the conventional approach in some instances. The results suggest that in-ear EOG signals captured with conductive ear moulds could serve as a basis for lightweight, portable horizontal eye gaze angle estimation suitable for a broad range of applications, for instance, hearing aids that steer the directivity of microphones in the direction of the user's eye gaze.
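The saccade-based idea can be sketched as follows. This is a simplified illustration, not the paper's algorithm: the calibration gain, the saccade-detection threshold, and the re-centring assumption (gaze is straight ahead on average) are all hypothetical. The point is that keeping only saccadic jumps discards the slow electrode drift that plagues DC EOG, yielding a drift-free, if noisy, absolute angle estimate.

```python
import numpy as np

def estimate_gaze_from_eog(eog_uv, gain_deg_per_uv=0.05, saccade_thresh_uv=20.0):
    """Illustrative sketch of saccade-based gaze estimation from a
    single-channel EOG trace (microvolts). Hypothetical parameters:
    gain_deg_per_uv is an assumed EOG-to-angle calibration gain, and
    saccade_thresh_uv is the sample-to-sample jump treated as a saccade."""
    eog_uv = np.asarray(eog_uv, dtype=float)
    diffs = np.diff(eog_uv, prepend=eog_uv[0])
    # Keep only saccadic jumps; slow drift (electrode artefacts) is discarded.
    saccadic = np.where(np.abs(diffs) > saccade_thresh_uv, diffs, 0.0)
    gaze_uv = np.cumsum(saccadic)
    # Statistical re-centring: assume gaze is straight ahead on average.
    gaze_uv -= np.median(gaze_uv)
    return gaze_uv * gain_deg_per_uv

# Synthetic check: a slow 100-uV drift plus one 400-uV saccade. The
# reconstructed trace recovers the saccade amplitude and ignores the drift.
drift = np.linspace(0.0, 100.0, 1000)
saccade = np.concatenate([np.zeros(500), 400.0 * np.ones(500)])
gaze = estimate_gaze_from_eog(drift + saccade)
```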


Subject(s)
Electrooculography/methods, Saccades, Algorithms, Humans
5.
J Acoust Soc Am ; 142(5): 3288, 2017 Nov.
Article in English | MEDLINE | ID: mdl-29195483

ABSTRACT

Two experiments examined plasticity induced by context in a simple target localization task. The context was represented by interleaved localization trials in which the target was preceded by a distractor. In a previous study, the context induced large response shifts when the target and distractor stimuli were identical 2-ms noise clicks [Kopco, Best, and Shinn-Cunningham (2007). J. Acoust. Soc. Am. 121, 420-432]. Here, the temporal characteristics of the contextual effect were examined for the same stimuli. Experiment 1 manipulated the context presentation rate and the distractor-target inter-stimulus interval (ISI). Experiment 2 manipulated the temporal structure of the context stimulus, replacing the one-click distractor either with a distractor consisting of eight sequentially presented clicks or with a noise burst whose total energy and duration were identical to those of the eight-click distractor. In Experiment 1, the contextual shift grew with increasing context rate while being largely independent of ISI. In Experiment 2, the eight-click-distractor context induced a stronger shift than the one-click-distractor context, while the noise-distractor context induced a very small shift. These results suggest that contextual plasticity is an adaptation driven both by low-level factors, like the spatiotemporal context distribution, and by higher-level factors, like perceptual similarity between the stimuli, possibly related to precedence buildup.


Subject(s)
Cues, Periodicity, Sound Localization, Acoustic Stimulation/methods, Adaptation, Psychological, Adult, Female, Humans, Male, Noise/adverse effects, Perceptual Masking, Psychoacoustics, Time Factors, Young Adult
6.
J Acoust Soc Am ; 137(4): EL281-7, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25920878

ABSTRACT

Localization of a 2-ms click target was previously shown to be influenced by interleaved localization trials in which the target was preceded by an identical distractor [Kopco, Best, and Shinn-Cunningham (2007). J. Acoust. Soc. Am. 121, 420-432]. Here, two experiments were conducted to explore this contextual effect. Results show that the context-related bias is not eliminated (1) when the response method is changed so that vision is available or no hand-pointing is required, or (2) when the distractor-target order is reversed. Additionally, a keyboard-based localization response method is introduced and shown to be more accurate than traditional pointer-based methods.
