Results 1 - 20 of 66
1.
J Acoust Soc Am ; 155(5): 3118-3131, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38727551

ABSTRACT

In daily life, natural or man-made structures influence sound propagation, causing reflections and diffraction with potential effects on auditory spatial perception. While the effect of isolated reflections on binaural localization has been investigated, consequences of edge diffraction on spatial perception have received less attention. Here, effects of edge diffraction on the horizontal localization of a sound source were assessed when a flat square plate occludes the direct sound or produces a reflection in an otherwise anechoic environment. Binaural recordings were obtained with an artificial head for discrete sound source positions along two horizontal trajectories in the vicinity of the plate, including conditions near the incident and reflection shadow boundary. In a listening test, the apparent source position was matched for conditions with and without the plate, resulting in azimuth offsets between the apparent and physical source of up to 12°. The perceived direction of occluded frontal sound sources was laterally shifted to the visible region near the edge of the plate. Geometrical-acoustics-based simulations with different methods to binaurally render diffracted sound paths were technically and perceptually compared to the measurements. The observed localization offset was reproduced with the acoustic simulations when diffraction was rendered considering the individual ear positions.
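The azimuth offsets reported above can be put into interaural-time-difference terms with the classic Woodworth spherical-head approximation. This is a generic textbook relation sketched only for orientation, not the rendering method used in the study; head radius and speed of sound are assumed values.

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth spherical-head approximation of the interaural time
    difference (in seconds): ITD = (r / c) * (sin(theta) + theta)."""
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (math.sin(theta) + theta)

# ITD change corresponding to the largest reported azimuth offset (12 deg),
# on the order of 0.1 ms:
delta = woodworth_itd(12.0) - woodworth_itd(0.0)
```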

2.
PeerJ ; 12: e17104, 2024.
Article in English | MEDLINE | ID: mdl-38680894

ABSTRACT

Advancements in cochlear implants (CIs) have led to a significant increase in bilateral CI users, especially among children. Yet, most bilateral CI users do not fully achieve the intended binaural benefit due to potential limitations in signal processing and/or surgical implant positioning. One crucial auditory cue that normal hearing (NH) listeners can benefit from is the interaural time difference (ITD), i.e., the time difference between the arrival of a sound at the two ears. ITD sensitivity is thought to rely heavily on the effective utilization of temporal fine structure (very rapid oscillations in sound). Unfortunately, most current CIs do not transmit such true fine structure. Nevertheless, bilateral CI users have demonstrated sensitivity to ITD cues delivered through envelope or interaural pulse time differences, i.e., the time gap between the pulses delivered to the two implants. However, their ITD sensitivity is significantly poorer compared to NH individuals, and it further degrades at higher CI stimulation rates, especially when the rate exceeds 300 pulses per second. The overall purpose of this research thread is to improve spatial hearing abilities in bilateral CI users. This study aims to develop electroencephalography (EEG) paradigms that can be used in clinical settings to assess and optimize the delivery of ITD cues, which are crucial for spatial hearing in everyday life. The research objective of this article was to determine the effect of CI stimulation pulse rate on ITD sensitivity, and to characterize the rate-dependent degradation in ITD perception using EEG measures. To develop protocols for bilateral CI studies, EEG responses were obtained from NH listeners using sinusoidal-amplitude-modulated (SAM) tones and filtered clicks with changes in either fine structure ITD (ITDFS) or envelope ITD (ITDENV).
Multiple EEG responses were analyzed, including the subcortical auditory steady-state responses (ASSRs) and cortical auditory evoked potentials (CAEPs) elicited by stimulus onset, offset, and changes. Results indicated that acoustic change complex (ACC) responses elicited by ITDENV changes were significantly smaller or absent compared to those elicited by ITDFS changes. The ACC morphologies evoked by ITDFS changes were similar to onset and offset CAEPs, although the peak latencies were longest for ACC responses and shortest for offset CAEPs. The high-frequency stimuli elicited clear subcortical ASSRs, although these were smaller than those evoked by SAM tones with lower carrier frequencies. The 40-Hz ASSRs decreased with increasing carrier frequency. Filtered clicks elicited larger ASSRs than high-frequency SAM tones, with the order being 40 > 160 > 80 > 320 Hz ASSR for both stimulus types. Wavelet analysis revealed a clear interaction between detectable transient CAEPs and 40-Hz ASSRs in the time-frequency domain for SAM tones with a low carrier frequency.
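The two stimulus manipulations named above (ITDFS vs. ITDENV) can be sketched for a SAM tone by delaying either the carrier or the modulator at one ear. Carrier frequency, modulation frequency, and the ITD value below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

fs = 48000                        # sample rate in Hz
t = np.arange(int(fs * 0.5)) / fs # 0.5 s time axis
fc, fm = 500.0, 40.0              # carrier / modulation frequency (assumed)

def sam_tone(t, itd_fs=0.0, itd_env=0.0):
    """SAM tone whose fine-structure ITD (carrier delay) and envelope ITD
    (modulator delay) can be set independently for one ear."""
    env = 1.0 + np.cos(2.0 * np.pi * fm * (t - itd_env))
    return 0.5 * env * np.sin(2.0 * np.pi * fc * (t - itd_fs))

itd = 500e-6                          # 500 us, an assumed test value
left = sam_tone(t)                    # reference ear
right_fs = sam_tone(t, itd_fs=itd)    # ITD carried by the fine structure only
right_env = sam_tone(t, itd_env=itd)  # ITD carried by the envelope only
```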


Subject(s)
Cochlear Implants , Cues , Electroencephalography , Humans , Electroencephalography/methods , Acoustic Stimulation/methods , Sound Localization/physiology , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Time Factors
3.
JASA Express Lett ; 2(12): 124403, 2022 12.
Article in English | MEDLINE | ID: mdl-36586958

ABSTRACT

Active echolocation, as habitually used by blind individuals, was investigated in sighted humans using predefined synthetic and self-emitted sounds. Using virtual acoustics, distance estimation and directional localization of a wall in different rooms were assessed. A virtual source was attached to either the head or hand, with realistic or increased source directivity. A control condition was tested with a virtual sound source located at the wall. Untrained echolocation performance comparable to performance in the control condition was achieved on an individual level. On average, however, echolocation performance was considerably lower than in the control condition, although it benefitted from increased directivity.
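For orientation, the distance estimation task above reduces, in the idealized case, to halving the round-trip delay between the emitted sound and its echo. A trivial sketch with an assumed speed of sound of 343 m/s:

```python
def echo_distance(delay_s, c=343.0):
    """Distance to a reflecting surface from the round-trip delay between
    emission and returning echo: d = c * t / 2."""
    return c * delay_s / 2.0

delay = 2 * 2.0 / 343.0   # echo delay produced by a wall 2 m away (~11.7 ms)
```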


Subject(s)
Echolocation , Sound Localization , Animals , Humans , Sound , Acoustics , Hand
4.
Front Psychol ; 13: 994047, 2022.
Article in English | MEDLINE | ID: mdl-36507051

ABSTRACT

Everyday acoustical environments are often complex, typically comprising one attended target sound in the presence of interfering sounds (e.g., disturbing conversations) and reverberation. Here we assessed binaural detection thresholds and (supra-threshold) binaural audio quality ratings of four distortion types (spectral ripples, non-linear saturation, and intensity and spatial modifications) applied to speech, guitar, and noise targets in such complex acoustic environments (CAEs). The target and (up to) two masker sounds were either co-located as if contained in a common audio stream, or spatially separated as if originating from different sound sources. The amount of reverberation was systematically varied. Masker and reverberation had a significant effect on the distortion-detection thresholds of speech signals. Quality ratings were affected by reverberation, whereas the effect of maskers depended on the distortion. The results suggest that detection thresholds and quality ratings for distorted speech in anechoic conditions are also valid for rooms with mild reverberation, but not for moderate reverberation. Furthermore, for spectral ripples, a significant relationship between the listeners' individual detection thresholds and quality ratings was found. The current results provide baseline data for detection thresholds and audio quality ratings of different distortions of a target sound in CAEs, supporting the future development of binaural auditory models.
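One of the distortion types named above, the spectral ripple, can be sketched as a sinusoidal dB-domain modulation of the magnitude spectrum along a log-frequency axis. Ripple density and depth below are assumed values, not those of the experiment.

```python
import numpy as np

def apply_spectral_ripple(x, fs, ripples_per_octave=1.0, depth_db=6.0):
    """Impose a sinusoidal dB-domain ripple on the magnitude spectrum of a
    signal along a log-frequency axis (illustrative parameter values)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    logf = np.log2(np.maximum(f, 1.0))          # avoid log2(0) at DC
    gain_db = 0.5 * depth_db * np.sin(2 * np.pi * ripples_per_octave * logf)
    return np.fft.irfft(X * 10 ** (gain_db / 20), n=len(x))

x = np.random.default_rng(0).standard_normal(2048)   # noise target stand-in
y = apply_spectral_ripple(x, 44100.0)
```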

5.
J Acoust Soc Am ; 152(3): 1767, 2022 09.
Article in English | MEDLINE | ID: mdl-36182293

ABSTRACT

Awareness of space, and subsequent orientation and navigation in rooms, is dominated by the visual system. However, humans are able to extract auditory information about their surroundings from early reflections and reverberation in enclosed spaces. To better understand orientation and navigation based on acoustic cues only, three virtual corridor layouts (I-, U-, and Z-shaped) were presented using real-time virtual acoustics in a three-dimensional 86-channel loudspeaker array. Participants were seated on a rotating chair in the center of the loudspeaker array and navigated using real rotation and virtual locomotion by "teleporting" in steps on a grid in the invisible environment. A head mounted display showed control elements and the environment in a visual reference condition. Acoustical information about the environment originated from a virtual sound source at the collision point of a virtual ray with the boundaries. In different control modes, the ray was cast either in view or hand direction or in a rotating, "radar"-like fashion in 90° steps to all sides. Time to complete, number of collisions, and movement patterns were evaluated. Navigation and orientation were possible based on the direct sound with little effect of room acoustics and control mode. Underlying acoustic cues were analyzed using an auditory model.


Subject(s)
Sound Localization , Acoustics , Cues , Humans , Sound
6.
JASA Express Lett ; 2(9): 092401, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36182340

ABSTRACT

Diffraction of sound occurs at sound barriers and at building and room corners in urban and indoor environments. Here, a unified parametric filter representation of the singly diffracted field at arbitrary wedges is suggested, connecting existing asymptotic and exact solutions in the framework of geometrical acoustics. Depending on the underlying asymptotic (high-frequency) solution, a combination of up to four half-order lowpass filters represents the diffracted field. Compact transfer function and impulse response expressions are proposed, providing errors below ±0.1 dB. To approximate the exact solution, a further asymptotic lowpass filter valid at low frequencies is suggested and combined with the high-frequency filter.
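A possible reading of the half-order lowpass building block mentioned above: a filter that is flat below its cutoff and rolls off at about 3 dB/octave above it. The sketch below shows only this generic magnitude shape; the exact parametrization proposed in the article is not reproduced here.

```python
import numpy as np

def half_order_lowpass_mag(f, fc):
    """Magnitude of a half-order lowpass: flat below the cutoff fc and
    rolling off at roughly 3 dB/octave above it."""
    return (1.0 + (f / fc) ** 2) ** -0.25

f = np.logspace(1, 4, 301)                      # 10 Hz .. 10 kHz
# Cascading two half-order sections gives a first-order (6 dB/octave) slope:
mag = half_order_lowpass_mag(f, 1000.0) ** 2
```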

7.
Front Neurosci ; 16: 931748, 2022.
Article in English | MEDLINE | ID: mdl-36071716

ABSTRACT

Recent studies on loudness perception of binaural broadband signals in hearing-impaired listeners found large individual differences, suggesting the use of such signals in hearing aid fitting. Likewise, clinical cochlear implant (CI) fitting with narrowband/single-electrode signals might cause suboptimal loudness perception in bilateral and bimodal CI listeners. Here, spectral and binaural loudness summation in normal hearing (NH) listeners, bilateral CI (biCI) users, and unilateral CI (uCI) users with normal hearing in the unaided ear was investigated to assess the relevance of binaural/bilateral fitting in CI users. To compare the three groups, categorical loudness scaling was performed for an equal categorical loudness noise (ECLN) consisting of the sum of six spectrally separated third-octave noises at equal loudness. The acoustical ECLN procedure was adapted to an equivalent procedure in the electrical domain using direct stimulation. To ensure the same broadband loudness in binaural measurements with simultaneous electrical and acoustical stimulation, a modified binaural ECLN was introduced and cross-validated with self-adjusted loudness in a loudness balancing experiment. Results showed higher (spectral) loudness summation of the six equally loud narrowband signals in the ECLN in CI users than in NH listeners. Binaural loudness summation was found for all three listener groups (NH, uCI, and biCI). No increased binaural loudness summation could be found for the current uCI and biCI listeners compared to the NH group. In uCI users, loudness balancing between narrowband signals and single electrodes did not automatically result in balanced loudness perception across ears, emphasizing the importance of binaural/bilateral fitting.
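The ECLN construction described above (a sum of six spectrally separated third-octave noises) can be sketched as follows. The centre frequencies are assumed for illustration, and the equal-loudness weighting obtained from the scaling procedure is omitted.

```python
import numpy as np

def third_octave_noise(fc, fs=44100, dur=1.0, rng=None):
    """Unit-RMS noise band-limited to the third-octave band centred at fc,
    generated in the frequency domain."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = int(fs * dur)
    f = np.fft.rfftfreq(n, 1.0 / fs)
    lo, hi = fc * 2 ** (-1 / 6), fc * 2 ** (1 / 6)   # third-octave band edges
    band = (f >= lo) & (f < hi)
    X = np.zeros(f.size, dtype=complex)
    X[band] = rng.standard_normal(band.sum()) + 1j * rng.standard_normal(band.sum())
    x = np.fft.irfft(X, n=n)
    return x / np.sqrt(np.mean(x ** 2))              # normalise to unit RMS

# Six spectrally separated bands summed, as in the ECLN (centres assumed):
ecln = sum(third_octave_noise(fc) for fc in [250, 500, 1000, 2000, 4000, 8000])
```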

8.
J Acoust Soc Am ; 151(6): 3927, 2022 06.
Article in English | MEDLINE | ID: mdl-35778173

ABSTRACT

Differences in interaural phase configuration between a target and a masker can lead to substantial binaural unmasking. This effect is decreased for masking noises with an interaural time difference (ITD). Adding a second noise with an opposing ITD in most cases further reduces binaural unmasking. Thus far, modeling of these detection thresholds required both a mechanism for internal ITD compensation and an increased filter bandwidth. An alternative explanation for the reduction is that unmasking is impaired by the lower interaural coherence in off-frequency regions caused by the second masker [Marquardt and McAlpine (2009). J. Acoust. Soc. Am. 126(6), EL177-EL182]. Based on this hypothesis, the current work proposes a quantitative multi-channel model using monaurally derived peripheral filter bandwidths and an across-channel incoherence interference mechanism. This mechanism differs from wider filters since it has no effect when the masker coherence is constant across frequency bands. Combined with a monaural energy discrimination pathway, the model predicts the differences between a single delayed noise and two opposingly delayed noises as well as four other data sets. It helps resolve the inconsistency that simulating some data requires wide filters while others require narrow filters.
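Interaural coherence, central to the hypothesis above, is commonly quantified as the maximum of the normalized interaural cross-correlation over lags. This is a generic sketch of that measure, not the proposed model's internal implementation:

```python
import numpy as np

def interaural_coherence(left, right):
    """Maximum of the normalized interaural cross-correlation over all lags."""
    a = left - left.mean()
    b = right - right.mean()
    xcorr = np.correlate(a, b, mode="full")
    return np.max(np.abs(xcorr)) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))

rng = np.random.default_rng(1)
ear = rng.standard_normal(4096)          # identical at both ears -> coherence 1
independent = rng.standard_normal(4096)  # uncorrelated -> coherence near 0
```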


Subject(s)
Noise , Perceptual Masking , Auditory Threshold , Noise/adverse effects
9.
Int J Audiol ; 61(11): 965-974, 2022 Nov.
Article in English | MEDLINE | ID: mdl-34612124

ABSTRACT

OBJECTIVE: This study investigated whether individual preferences with respect to the trade-off between a good signal-to-noise ratio and a distortion-free speech target were stable across different masking conditions, and whether simple adjustment methods could be used to identify subjects as either "noise haters" or "distortion haters". DESIGN: In each masking condition, subjects could adjust the target speech level according to their preferences by employing (i) linear gain, or gain at the cost of (ii) clipping distortions or (iii) compression distortions. The comparison of these processing conditions allowed the preferred trade-off between distortions and noise disturbance to be investigated. STUDY SAMPLE: Thirty subjects differing widely in hearing status (normal-hearing to moderately impaired) and age (23-85 years). RESULTS: High test-retest stability of individual preferences was found for all modification schemes. The preference adjustments suggested that subjects could be consistently categorised along a scale from "noise haters" to "distortion haters", and this preference trait remained stable across all maskers, spatial conditions, and types of distortions. CONCLUSIONS: Employing quick self-adjustment to collect listening preferences in complex listening conditions revealed a stable preference trait along the "noise vs. distortions" tolerance dimension. This could potentially help in fitting modern hearing aid algorithms to the individual user.
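The two distortion-based gain schemes named above (clipping and compression) can be caricatured with simple static waveform operations. Threshold and compression ratio here are assumed values, not the study's actual processing.

```python
import numpy as np

def gain_with_clipping(x, gain_db):
    """Linear gain followed by hard clipping to full scale [-1, 1]."""
    return np.clip(x * 10 ** (gain_db / 20), -1.0, 1.0)

def gain_with_compression(x, gain_db, ratio=4.0, thresh=0.5):
    """Linear gain followed by static compression of magnitudes above an
    (assumed) threshold with an (assumed) compression ratio."""
    y = x * 10 ** (gain_db / 20)
    over = np.abs(y) > thresh
    y[over] = np.sign(y[over]) * (thresh + (np.abs(y[over]) - thresh) / ratio)
    return y
```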


Subject(s)
Hearing Aids , Hearing Loss, Sensorineural , Speech Perception , Humans , Young Adult , Adult , Middle Aged , Aged , Aged, 80 and over , Hearing , Hearing Tests , Noise/adverse effects , Hearing Loss, Sensorineural/rehabilitation
11.
Trends Hear ; 25: 23312165211054924, 2021.
Article in English | MEDLINE | ID: mdl-34935544

ABSTRACT

Late reverberation involves the superposition of many sound reflections, approaching the properties of a diffuse sound field. Since the spatially resolved perception of individual late reflections is impossible, simplifications can potentially be made for modelling late reverberation in room acoustics simulations with reduced spatial resolution. Such simplifications are desired for interactive, real-time virtual acoustic environments with applications in hearing research and for the evaluation of hearing supportive devices. In this context, the number and spatial arrangement of loudspeakers used for playback additionally affect spatial resolution. The current study assessed the minimum number of spatially evenly distributed virtual late reverberation sources required to perceptually approximate spatially highly resolved isotropic and anisotropic late reverberation and to technically approximate a spherically isotropic sound field. The spatial resolution of the rendering was systematically reduced by using subsets of the loudspeakers of an 86-channel spherical loudspeaker array in an anechoic chamber, onto which virtual reverberation sources were mapped using vector base amplitude panning. It was tested whether listeners can distinguish lower spatial resolutions of reproduction of late reverberation from the highest achievable spatial resolution in different simulated rooms. The rendering of early reflections remained unchanged. The coherence of the sound field across a pair of microphones at ear and behind-the-ear hearing device distance was assessed to separate the effects of the number of virtual sources from those of the loudspeaker array geometry. Results show that between 12 and 24 reverberation sources are required for the rendering of late reverberation in virtual acoustic environments.
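Vector base amplitude panning, used above to map virtual reverberation sources onto loudspeakers, can be sketched for a single horizontal loudspeaker pair. The study used a three-dimensional array; this two-dimensional version is only illustrative.

```python
import numpy as np

def vbap_2d_gains(src_az, spk_az1, spk_az2):
    """Vector base amplitude panning gains for one virtual source between a
    pair of horizontal-plane loudspeakers (all angles in degrees)."""
    def unit(az_deg):
        a = np.radians(az_deg)
        return np.array([np.cos(a), np.sin(a)])
    # Solve for gains so the weighted loudspeaker vectors point at the source:
    L = np.column_stack([unit(spk_az1), unit(spk_az2)])
    g = np.linalg.solve(L, unit(src_az))
    return g / np.linalg.norm(g)          # power normalisation

g = vbap_2d_gains(0.0, -30.0, 30.0)       # source midway -> equal gains
```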


Subject(s)
Sound Localization , Speech Perception , Acoustic Stimulation , Acoustics , Humans , Sound
12.
Front Psychol ; 12: 634943, 2021.
Article in English | MEDLINE | ID: mdl-34239474

ABSTRACT

The individual loudness perception of a patient plays an important role in hearing aid satisfaction and use in daily life. Hearing aid fitting and development might benefit from individualized loudness models (ILMs), enabling better adaptation of the processing to individual needs. The central question is whether additional parameters are required for ILMs beyond non-linear cochlear gain loss and linear attenuation common to existing loudness models for the hearing impaired (HI). Here, loudness perception in eight normal hearing (NH) and eight HI listeners was measured in conditions ranging from monaural narrowband to binaural broadband, to systematically assess spectral and binaural loudness summation and their interdependence. A binaural summation stage was devised with empirical monaural loudness judgments serving as input. While NH showed binaural inhibition in line with the literature, binaural summation and its inter-subject variability were increased in HI, indicating the necessity for individualized binaural summation. Toward ILMs, a recent monaural loudness model was extended with the suggested binaural stage, and the number and type of additional parameters required to describe and to predict individual loudness were assessed. In addition to one parameter for the individual amount of binaural summation, a bandwidth-dependent monaural parameter was required to successfully account for individual spectral summation.

13.
J Acoust Soc Am ; 149(4): 2255, 2021 04.
Article in English | MEDLINE | ID: mdl-33940902

ABSTRACT

Sound radiation of most natural sources, like human speakers or musical instruments, typically exhibits a spatial directivity pattern. This directivity contributes to the perception of sound sources in rooms, affecting the spatial energy distribution of early reflections and late diffuse reverberation. Thus, for convincing sound field reproduction and acoustics simulation, source directivity has to be considered. Whereas perceptual effects of directivity, such as source-orientation-dependent coloration, appear relevant for the direct sound and individual early reflections, it is unclear how spectral and spatial cues interact for later reflections. Better knowledge of the perceptual relevance of source orientation cues might help to simplify the acoustics simulation. Here, it is assessed as to what extent directivity of a human speaker should be simulated for early reflections and diffuse reverberation. The computationally efficient hybrid approach to simulate and auralize binaural room impulse responses [Wendt et al., J. Audio Eng. Soc. 62, 11 (2014)] was extended to simulate source directivity. Two psychoacoustic experiments assessed the listeners' ability to distinguish between different virtual source orientations when the frequency-dependent spatial directivity pattern of the source was approximated by a direction-independent average filter for different higher reflection orders. The results indicate that it is sufficient to simulate effects of source directivity in the first-order reflections.


Subject(s)
Sound Localization , Sound , Acoustics , Cues , Humans , Perception , Psychoacoustics
14.
Trends Hear ; 25: 23312165211001219, 2021.
Article in English | MEDLINE | ID: mdl-33739186

ABSTRACT

Smart headphones or hearables use different types of algorithms such as noise cancelation, feedback suppression, and sound pressure equalization to eliminate undesired sound sources or to achieve acoustical transparency. Such signal processing strategies might alter the spectral composition or interaural differences of the original sound, which might be perceived by listeners as monaural or binaural distortions and thus degrade audio quality. To evaluate the perceptual impact of these distortions, subjective quality ratings can be used, but these are time consuming and costly. Auditory-inspired instrumental quality measures can be applied with less effort and may also be helpful in identifying whether the distortions impair the auditory representation of monaural or binaural cues. Therefore, the goals of this study were (a) to assess the applicability of various monaural and binaural audio quality models to distortions typically occurring in hearables and (b) to examine the effect of those distortions on the auditory representation of spectral, temporal, and binaural cues. Results showed that the signal processing algorithms considered in this study mainly impaired (monaural) spectral cues. Consequently, monaural audio quality models that capture spectral distortions achieved the best prediction performance. A recent audio quality model that predicts monaural and binaural aspects of quality was revised based on parts of the current data involving binaural audio quality aspects, leading to improved overall performance indicated by a mean Pearson linear correlation of 0.89 between obtained and predicted ratings.


Subject(s)
Cues , Sound Localization , Acoustic Stimulation , Algorithms , Humans , Noise , Technology
15.
J Acoust Soc Am ; 147(3): 1379, 2020 03.
Article in English | MEDLINE | ID: mdl-32237817

ABSTRACT

This study examined how well individual speech recognition thresholds in complex listening scenarios could be predicted by a current binaural speech intelligibility model. Model predictions were compared with experimental data measured for seven normal-hearing and 23 hearing-impaired listeners who differed widely in their degree of hearing loss, age, as well as performance in clinical speech tests. The experimental conditions included two masker types (multi-talker or two-talker maskers), and two spatial conditions (maskers co-located with the frontal target or symmetrically separated from the target). The results showed that interindividual variability could not be well predicted by a model including only individual audiograms. Predictions improved when an additional individual "proficiency factor" was derived from one of the experimental conditions or a standard speech test. Overall, the current model can predict individual performance relatively well (except in conditions high in informational masking), but the inclusion of age-related factors may lead to even further improvements.


Subject(s)
Perceptual Masking , Speech Perception , Auditory Perception , Auditory Threshold , Hearing Tests , Speech Intelligibility
16.
Eur J Neurosci ; 51(5): 1265-1278, 2020 03.
Article in English | MEDLINE | ID: mdl-29368797

ABSTRACT

A model using temporal-envelope cues was previously developed to explain perceptual interference effects between amplitude modulation and frequency modulation (FM). As that model could not accurately predict FM sensitivity and the interference effects, temporal fine structure (TFS) cues were added to the model. Thus, following the initial stage of the model consisting of a linear filter bank simulating cochlear filtering, processing was split into an 'envelope path' based on envelope power cues and a 'TFS path' based on a measure of the distribution of time intervals between successive zero-crossings. This yielded independent detectability indices for envelope and TFS cues, which were optimally combined to produce a single decision statistic. Independent internal noises in the envelope and TFS paths were adjusted to match the data. Simulations indicate that TFS cues are required to account for FM data for young normal-hearing listeners and that TFS processing is impaired for both older normal-hearing and hearing-impaired listeners. The role of TFS was further assessed by relating the monaural FM sensitivity to measures of interaural phase difference, commonly assumed to rely on binaural TFS sensitivity. The model demonstrates that binaural TFS sensitivity is considerably lower than monaural TFS sensitivity. Similar to FM thresholds, interaural phase difference sensitivity declined with age and hearing loss, although greater degradation was observed in binaural than in monaural temporal processing. Overall, this model provides a novel tool to explore the mechanisms involved in FM processing in the normal auditory system and the degradations in FM sensitivity with ageing and hearing loss.
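The 'TFS path' statistic named above is based on the distribution of time intervals between successive zero-crossings. A generic sketch of such a measure, here restricted to positive-going crossings by assumption:

```python
import numpy as np

def zero_crossing_intervals(x, fs):
    """Intervals (in seconds) between successive positive-going
    zero-crossings of a waveform."""
    neg = np.signbit(x)
    idx = np.flatnonzero(neg[:-1] & ~neg[1:]) + 1   # '-' to '+' transitions
    return np.diff(idx) / fs

fs = 16000
t = np.arange(fs) / fs
# For a 200-Hz pure tone the intervals cluster around the 5-ms period:
intervals = zero_crossing_intervals(np.sin(2 * np.pi * 200.0 * t), fs)
```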


Subject(s)
Cues , Hearing Loss , Acoustic Stimulation , Aging , Auditory Threshold , Cochlea , Humans , Noise
17.
J Acoust Soc Am ; 146(4): 2188, 2019 10.
Article in English | MEDLINE | ID: mdl-31671969

ABSTRACT

In daily life, speech intelligibility is affected by masking caused by interferers and by reverberation. For a frontal target speaker and two interfering sources symmetrically placed to either side, spatial release from masking (SRM) is observed in comparison to frontal interferers. In this case, the auditory system can make use of temporally fluctuating interaural time/phase and level differences promoting binaural unmasking (BU) and better-ear glimpsing (BEG). Reverberation affects the waveforms of the target and maskers, and the interaural differences, depending on the spatial configuration and on the room acoustical properties. In this study, the effect of room acoustics, temporal structure of the interferers, and target-masker positions on speech reception thresholds and SRM was assessed. The results were compared to an optimal better-ear glimpsing strategy to help disentangle energetic masking, including effects of BU and BEG, as well as informational masking (IM). In anechoic and moderately reverberant conditions, BU and BEG contributed to SRM of fluctuating speech-like maskers, while BU did not contribute in highly reverberant conditions. In highly reverberant rooms, an SRM of up to 3 dB was observed for speech maskers, including effects of release from IM based on binaural cues.


Subject(s)
Acoustics , Perceptual Masking , Speech Intelligibility , Speech Perception , Speech Reception Threshold Test , Acoustic Stimulation , Adult , Environment , Humans , Young Adult
18.
J Acoust Soc Am ; 146(3): 1732, 2019 09.
Article in English | MEDLINE | ID: mdl-31590539

ABSTRACT

Limited abilities to localize sound sources and other reduced spatial hearing capabilities remain a largely unsolved issue in hearing devices like hearing aids or hear-through headphones. Hence, the impact of the microphone location, signal bandwidth, different equalization approaches, as well as processing delays in superposition with direct sound leaking through a vent was addressed in this study. A localization experiment was performed with normal-hearing subjects using individual binaural synthesis to separately assess the above-mentioned potential limiting issues for localization in the horizontal and vertical plane with linear hearing devices. To this end, listening through hearing devices was simulated utilizing transfer functions for six different microphone locations, measured both individually and on a dummy head. Results show that the microphone location is the governing factor for localization abilities with linear hearing devices: non-optimal microphone locations have a disruptive influence on localization in the vertical domain and also affect lateral sound localization. Processing delays cause additional detrimental effects for lateral sound localization, and diffuse-field equalization to the open-ear response leads to better localization performance than free-field equalization. Stimuli derived from dummy head measurements are unsuited for evaluating individual localization abilities with a hearing device.


Subject(s)
Hearing Aids/standards , Sound Localization , Adult , Female , Hearing Aids/adverse effects , Humans , Male
19.
J Acoust Soc Am ; 145(3): EL229, 2019 03.
Article in English | MEDLINE | ID: mdl-31067971

ABSTRACT

Humans possess mechanisms to suppress distracting early sound reflections, summarized as the precedence effect. Recent work shows that precedence is affected by visual stimulation. This paper investigates possible effects of visual stimulation on the perception of later reflections, i.e., reverberation. In a highly immersive audio-visual virtual reality environment, subjects were asked to quantify reverberation in conditions where simultaneously presented auditory and visual stimuli either match in room identity, sound source azimuth, and sound source distance, or diverge in one of these aspects. While subjects reliably judged reverberation across acoustic environments, the visual room impression did not affect reverberation estimates.

20.
J Acoust Soc Am ; 144(4): 2072, 2018 10.
Article in English | MEDLINE | ID: mdl-30404454

ABSTRACT

Spatial hearing abilities with hearing devices ultimately depend on how well acoustic directional cues are captured by the microphone(s) of the device. A comprehensive objective evaluation of monaural spectral directional cues captured at 9 microphone locations integrated in 5 hearing device styles is presented, utilizing a recent database of head-related transfer functions (HRTFs) that includes data from 16 human and 3 artificial ear pairs. Differences between HRTFs to the eardrum and hearing device microphones were assessed by descriptive analyses and quantitative metrics, and compared to differences between individual ears. Directional information exploited for vertical sound localization was evaluated by means of computational models. Directional information at microphone locations inside the pinna is significantly biased and qualitatively poorer compared to locations in the ear canal; behind-the-ear microphones capture almost no directional cues. These errors are expected to impair vertical sound localization, even if the altered cues were optimally remapped to source locations. Differences between HRTFs to the eardrum and hearing device microphones are qualitatively different from between-subject differences and can be described as a partial destruction rather than an alteration of relevant cues, although spectral difference metrics produce similar results. Dummy heads do not fully reflect the results with individual subjects.
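Spectral differences between eardrum and device-microphone HRTFs, as compared above, can be quantified with a simple RMS log-magnitude metric. The evaluation band and the metric itself are assumptions for illustration, not the paper's exact measure.

```python
import numpy as np

def spectral_difference_db(h_ref, h_test, fs, fmin=200.0, fmax=16000.0):
    """RMS log-magnitude difference (dB) between two impulse responses,
    evaluated over an assumed frequency band."""
    n = max(len(h_ref), len(h_test))
    f = np.fft.rfftfreq(n, 1.0 / fs)
    band = (f >= fmin) & (f <= fmax)
    def mag_db(h):
        # small constant guards against log of zero-magnitude bins
        return 20 * np.log10(np.abs(np.fft.rfft(h, n)[band]) + 1e-12)
    return float(np.sqrt(np.mean((mag_db(h_ref) - mag_db(h_test)) ** 2)))
```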


Subject(s)
Cues , Ear/physiology , Hearing Aids/standards , Sound Localization , Humans , Models, Theoretical