Results 1 - 20 of 37

1.
Front Neurosci ; 16: 958577, 2022.
Article in English | MEDLINE | ID: mdl-36117637

ABSTRACT

Visual capture describes the tendency of a sound to be mislocalized to the location of a plausible visual target. This effect, also known as the ventriloquist effect, has been extensively studied in humans, but primarily for mismatches in the angular direction between auditory and visual targets. Here, visual capture was examined in the distance dimension using a single visual target (an un-energized loudspeaker) and invisible virtual sound sources presented over headphones. The sound sources were synthesized from binaural impulse-response measurements at distances ranging from 1 to 5 m (0.25 m steps) in the semi-reverberant room (7.7 × 4.2 × 2.7 m³) in which the experiment was conducted. Listeners (n = 11) were asked whether or not the auditory target appeared to be at the same distance as the visual target. Within a block of trials, the visual target was placed at a fixed distance of 1.5, 3, or 4.5 m, and the auditory target varied randomly from trial to trial over the sample of measurement distances. The resulting psychometric functions were generally consistent with visual capture in distance, but the capture was asymmetric: Sound sources behind the visual target were more strongly captured than sources in front of the visual target. This asymmetry is consistent with previous reports in the literature, and is shown here to be well predicted by a simple model of sensory integration and decision in which perceived auditory space is compressed logarithmically in distance and has lower resolution than perceived visual space.
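
A minimal sketch of one reading of such a model, assuming Gaussian sensory noise on a log-compressed distance axis and a simple same/different decision criterion (the parameter values, function name, and decision rule below are illustrative, not taken from the paper):

```python
# Illustrative model sketch: log-compressed auditory distance with coarser
# resolution than vision, plus a same/different decision rule.
import numpy as np
from scipy.stats import norm

def p_report_same(aud_dist_m, vis_dist_m, sigma_aud=0.35, sigma_vis=0.05, criterion=0.25):
    """Probability of judging the auditory target to be at the visual target's distance.

    Internal estimates are assumed Gaussian on a log-distance axis, with coarser
    auditory than visual resolution (sigma_aud > sigma_vis). The listener reports
    "same" when the difference between the two estimates falls within +/- criterion.
    """
    mu = np.log(aud_dist_m) - np.log(vis_dist_m)   # mean of the estimate difference
    s = np.hypot(sigma_aud, sigma_vis)             # sd of the difference
    return norm.cdf((criterion - mu) / s) - norm.cdf((-criterion - mu) / s)

# Example: psychometric function for a visual target fixed at 3 m
aud_distances = np.arange(1.0, 5.25, 0.25)
print(np.round(p_report_same(aud_distances, vis_dist_m=3.0), 2))
```

On this reading, the logarithmic axis alone produces the reported asymmetry: a source a fixed linear distance behind the visual target differs from it by fewer log-distance units than a source the same linear distance in front, so it is more often judged to be at the same distance.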

2.
J Acoust Soc Am ; 151(6): 3729, 2022 06.
Article in English | MEDLINE | ID: mdl-35778188

ABSTRACT

Known errors exist in loudspeaker array processing techniques, often degrading source localization and timbre. The goal of the present study was to use virtual loudspeaker arrays to investigate how treatment of the interaural time delay (ITD) cue from each loudspeaker impacts these errors. Virtual loudspeaker arrays rendered over headphones using head-related impulse responses (HRIRs) allow flexible control of array size. Here, three HRIR delay treatment strategies were evaluated using minimum-phase loudspeaker HRIRs: reapplying the original HRIR delays, applying the relative ITD to the contralateral ear, or separately applying the HRIR delays prior to virtual array processing. Seven array sizes were simulated, and panning techniques were used to estimate HRIRs from 3000 directions using higher-order Ambisonics, vector-base amplitude panning, and the closest loudspeaker technique. Compared to a traditional, physical array, the prior HRIR delay treatment strategy produced similar errors with a 95% reduction in the required array size. When compared to direct spherical harmonic (SH) fitting of head-related transfer functions (HRTFs), the prior delays strategy reduced errors in reconstruction accuracy of timbral and directional psychoacoustic cues. This result suggests that delay optimization can greatly reduce the number of virtual loudspeakers required for accurate rendering of acoustic scenes without SH-based HRTF representation.
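
As a rough illustration of the kind of HRIR decomposition that such delay-treatment strategies rely on, the sketch below separates a broadband onset delay from a minimum-phase component so the delay can be handled apart from the panning stage; the 10%-of-peak onset criterion and the function names are assumptions for illustration, not the paper's method:

```python
# Sketch: split an HRIR into onset delay + minimum-phase part (illustrative only).
import numpy as np

def onset_delay_samples(hrir, rel_thresh=0.1):
    """First sample whose magnitude reaches rel_thresh of the peak: a simple
    broadband onset-delay estimate (the 10% criterion is an assumption)."""
    return int(np.argmax(np.abs(hrir) >= rel_thresh * np.max(np.abs(hrir))))

def minimum_phase_version(hrir, n_fft=None):
    """Homomorphic (real-cepstrum) minimum-phase reconstruction: keeps the
    magnitude response of the input but discards its delay/excess phase."""
    n_fft = n_fft or 4 * len(hrir)
    log_mag = np.log(np.abs(np.fft.fft(hrir, n_fft)) + 1e-12)
    cep = np.fft.ifft(log_mag).real
    fold = np.zeros(n_fft)
    fold[0] = 1.0
    fold[1:(n_fft + 1) // 2] = 2.0
    if n_fft % 2 == 0:
        fold[n_fft // 2] = 1.0
    h_min = np.fft.ifft(np.exp(np.fft.fft(fold * cep))).real
    return h_min[:len(hrir)]

def reapply_delay(h, delay_samples):
    """Reinsert a broadband delay (before or after the virtual-array processing,
    depending on the delay-treatment strategy being simulated)."""
    return np.concatenate([np.zeros(delay_samples), h])[:len(h)]
```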


Subject(s)
Acoustics , Cues , Acoustic Stimulation , Psychoacoustics
3.
Ear Hear ; 43(4): 1139-1150, 2022.
Article in English | MEDLINE | ID: mdl-34799495

ABSTRACT

OBJECTIVES: The primary goal of this study was to investigate the effects of reverberation on Mandarin tone and vowel recognition of cochlear implant (CI) users and normal-hearing (NH) listeners. To understand the performance of Mandarin tone recognition, this study also measured participants' pitch perception and the availability of temporal envelope cues in reverberation. DESIGN: Fifteen CI users and nine NH listeners, all Mandarin speakers, were asked to recognize Mandarin single-vowels produced in four lexical tones and rank harmonic complex tones in pitch with different reverberation times (RTs) from 0 to 1 second. Virtual acoustic techniques were used to simulate rooms with different degrees of reverberation. Vowel duration and correlation between amplitude envelope and fundamental frequency (F0) contour were analyzed for different tones as a function of the RT. RESULTS: Vowel durations of different tones significantly increased with longer RTs. Amplitude-F0 correlation remained similar for the falling Tone 4 but greatly decreased for the other tones in reverberation. NH listeners had robust pitch-ranking, tone recognition, and vowel recognition performance as the RT increased. Reverberation significantly degraded CI users' pitch-ranking thresholds but did not significantly affect the overall scores of tone and vowel recognition with CIs. Detailed analyses of tone confusion matrices showed that CI users reduced the flat Tone-1 responses but increased the falling Tone-4 responses in reverberation, possibly due to the falling amplitude envelope of late reflections after the original vowel segment. CI users' tone recognition scores were not correlated with their pitch-ranking thresholds. CONCLUSIONS: NH listeners can reliably recognize Mandarin tones in reverberation using salient pitch cues from spectral and temporal fine structures. However, CI users have poorer pitch perception using F0-related amplitude modulations that are reduced in reverberation. Reverberation distorts speech amplitude envelopes, which affect the distribution of tone responses but not the accuracy of tone recognition with CIs. Recognition of vowels with stationary formant trajectories is not affected by reverberation for both NH listeners and CI users, regardless of the available spectral resolution. Future studies should test how the relatively stable vowel and tone recognition may contribute to sentence recognition in reverberation of Mandarin-speaking CI users.
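
One plausible way to compute the amplitude-envelope/F0 correlation described above is sketched below; the Hilbert-envelope extraction, the 50-Hz smoothing cutoff, and the assumption that an F0 contour (f0_hz) is already available from a pitch tracker are illustrative choices rather than the study's exact procedure:

```python
# Sketch: correlation between the amplitude envelope of a reverberated vowel
# and its F0 contour (illustrative analysis choices).
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, fftconvolve

def envelope_f0_correlation(vowel, brir, f0_hz, fs=16000, lp_cutoff_hz=50.0):
    """Pearson correlation between the smoothed amplitude envelope of the
    reverberated vowel and its F0 contour.

    f0_hz: numpy array of F0 estimates (Hz) per analysis frame, 0 for unvoiced frames.
    """
    reverberant = fftconvolve(vowel, brir)[:len(vowel)]   # restrict to the vowel segment
    env = np.abs(hilbert(reverberant))                    # Hilbert amplitude envelope
    b, a = butter(2, lp_cutoff_hz / (fs / 2))             # smooth the envelope
    env = filtfilt(b, a, env)
    frames = np.linspace(0, len(env) - 1, num=len(f0_hz)).astype(int)
    voiced = f0_hz > 0                                    # ignore unvoiced frames
    return np.corrcoef(env[frames][voiced], f0_hz[voiced])[0, 1]
```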


Subject(s)
Cochlear Implantation , Cochlear Implants , Deafness , Speech Perception , Deafness/rehabilitation , Humans , Pitch Perception/physiology , Speech Perception/physiology
5.
Hear Res ; 409: 108316, 2021 09 15.
Article in English | MEDLINE | ID: mdl-34340021

ABSTRACT

Previous work has explored novel binaural combinations of reverberation and the resulting perceived reverberation strength (reverberance). The present study examines the perceptual effects of additional binaural combinations of reverberation with the goal of explaining reverberance in terms of basic psychoacoustic principles. Stimuli were generated using virtual space techniques simulating a speech source 3 m to the listener's right in a moderately reverberant environment. Reverberant energy at the ears was varied systematically relative to the natural level for the environment (0-dB gain). The method of magnitude estimation was used to estimate reverberance. Four experiments were conducted. Experiment 1 tested monaural listening conditions for both left and right ears at reverberation gains from -21 dB to 0 dB. Experiment 2 tested a binaural listening condition where only reverberant energy at the ear farther from the source was manipulated (-21 dB to 0 dB). Experiment 3 tested two binaural conditions over a wider range of reverberation gains (-18 dB to +24 dB). In one condition, reverberant energy was manipulated for both ears equally. In the other condition, reverberant energy was manipulated only for the ear nearer the source. In Experiment 4, reverberant tails of the stimuli were removed to test whether listeners were able to use ongoing reverberant information to judge reverberance. The results from all experiments were found to be well predicted by a model of time-varying binaural loudness that focused on "glimpses" in time with relatively high reverberant sound energy and low direct sound energy. These findings suggest that the mechanisms underlying reverberance and loudness may be similar.
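
A toy, monaural sketch of the glimpsing idea behind such a model, with short-time RMS level standing in for a full binaural loudness model and an assumed 20-ms frame length:

```python
# Sketch: glimpse-based reverberance predictor (illustrative simplification).
import numpy as np

def glimpse_reverberance(direct, reverberant, fs, frame_ms=20.0):
    """Average short-time level (dB) of the reverberant component in frames where
    it exceeds the direct component, i.e. the 'glimpses' with relatively high
    reverberant and low direct energy."""
    n = int(fs * frame_ms / 1e3)
    n_frames = min(len(direct), len(reverberant)) // n
    d = np.reshape(direct[:n_frames * n], (n_frames, n))
    r = np.reshape(reverberant[:n_frames * n], (n_frames, n))
    lvl_d = 10 * np.log10(np.mean(d ** 2, axis=1) + 1e-12)
    lvl_r = 10 * np.log10(np.mean(r ** 2, axis=1) + 1e-12)
    glimpses = lvl_r > lvl_d
    return float(np.mean(lvl_r[glimpses])) if np.any(glimpses) else float('-inf')
```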


Subject(s)
Hearing Tests , Speech Perception , Auditory Perception , Hearing , Humans , Psychoacoustics , Sound
6.
Hear Res ; 392: 107982, 2020 07.
Article in English | MEDLINE | ID: mdl-32454368

ABSTRACT

It has been hypothesized that noise-induced cochlear synaptopathy in humans may result in functional deficits such as a weakened middle ear muscle reflex (MEMR) and degraded speech perception in complex environments. Although relationships between noise-induced synaptic loss and the MEMR have been demonstrated in animals, effects of noise exposure on the MEMR have not been observed in humans. The hypothesized relationship between noise exposure and speech perception has also been difficult to demonstrate conclusively. Given that the MEMR is engaged at high sound levels, relationships between speech recognition in complex listening environments and noise exposure might be more evident at high speech presentation levels. In this exploratory study with 41 audiometrically normal listeners, a combination of behavioral and physiologic measures thought to be sensitive to synaptopathy were used to determine potential links with speech recognition at high presentation levels. We found decreasing speech recognition as a function of presentation level (from 74 to 104 dBA), which was associated with reduced MEMR magnitude. We also found that reduced MEMR magnitude was associated with higher estimated lifetime noise exposure. Together, these results suggest that the MEMR may be sensitive to noise-induced synaptopathy in humans, and this may underlie functional speech recognition deficits at high sound levels.


Subject(s)
Ear, Middle/innervation , Hearing Loss, Noise-Induced/psychology , Hearing , Noise/adverse effects , Recognition, Psychology , Reflex , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adult , Audiometry, Pure-Tone , Cognition , Comprehension , Female , Hearing Loss, Noise-Induced/etiology , Hearing Loss, Noise-Induced/physiopathology , Humans , Male , Middle Aged , Trail Making Test , Young Adult
7.
J Am Acad Audiol ; 31(1): 17-29, 2020 01.
Article in English | MEDLINE | ID: mdl-31267958

ABSTRACT

BACKGROUND: Digital noise reduction (DNR) processing is used in hearing aids to enhance perception in noise by classifying and suppressing the noise acoustics. However, the efficacy of DNR processing is not known under reverberant conditions where the speech-in-noise acoustics are further degraded by reverberation. PURPOSE: The purpose of this study was to investigate acoustic and perceptual effects of DNR processing across a range of reverberant conditions for individuals with hearing impairment. RESEARCH DESIGN: This study used an experimental design to investigate the effects of varying reverberation on speech-in-noise processed with DNR. STUDY SAMPLE: Twenty-six listeners with mild-to-moderate sensorineural hearing impairment participated in the study. DATA COLLECTION AND ANALYSIS: Speech stimuli were combined with unmodulated broadband noise at several signal-to-noise ratios (SNRs). A range of reverberant conditions with realistic parameters were simulated, as well as an anechoic control condition without reverberation. Reverberant speech-in-noise signals were processed using a spectral subtraction DNR simulation. Signals were acoustically analyzed using a phase inversion technique to quantify improvement in SNR as a result of DNR processing. Sentence intelligibility and subjective ratings of listening effort, speech naturalness, and background noise comfort were examined with and without DNR processing across the conditions. RESULTS: Improvement in SNR was greatest in the anechoic control condition and decreased as the ratio of direct to reverberant energy decreased. There was no significant effect of DNR processing on speech intelligibility in the anechoic control condition, but there was a significant decrease in speech intelligibility with DNR processing in all of the reverberant conditions. Subjectively, listeners reported greater listening effort and lower speech naturalness with DNR processing in some of the reverberant conditions. Listeners reported higher background noise comfort with DNR processing only in the anechoic control condition. CONCLUSIONS: Results suggest that reverberation affects DNR processing using a spectral subtraction algorithm in a way that decreases the ability of DNR to reduce noise without distorting the speech acoustics. Overall, DNR processing may be most beneficial in environments with little reverberation, and its use in highly reverberant environments may actually produce adverse perceptual effects. Further research is warranted using commercial hearing aids in realistic reverberant environments.
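
The phase-inversion SNR analysis is a standard technique (often attributed to Hagerman and Olofsson); a minimal sketch of its core arithmetic, with `process` standing in for any DNR simulation, is:

```python
# Sketch: phase-inversion estimate of the SNR at the output of a processor.
import numpy as np

def output_snr_db(speech, noise, process):
    """Run the processor on speech+noise and speech-noise; summing the outputs
    recovers the processed speech and differencing recovers the processed noise
    (assumes the processor behaves near-identically on the two runs)."""
    out_plus = process(speech + noise)
    out_minus = process(speech - noise)
    speech_out = 0.5 * (out_plus + out_minus)
    noise_out = 0.5 * (out_plus - out_minus)
    return 10 * np.log10(np.sum(speech_out ** 2) / np.sum(noise_out ** 2))
```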


Subject(s)
Hearing Aids , Hearing Loss, Sensorineural/rehabilitation , Noise , Signal-To-Noise Ratio , Speech Perception , Aged , Aged, 80 and over , Analysis of Variance , Female , Humans , Male , Speech Acoustics
8.
Trends Hear ; 23: 2331216519864499, 2019.
Article in English | MEDLINE | ID: mdl-31455167

ABSTRACT

Interaural phase difference (IPD) discrimination upper frequency limits and just-noticeable differences (JNDs), interaural level difference (ILD) JNDs, and diotic intensity JNDs were measured for 20 older hearing-impaired listeners with matched moderate sloping to severe sensorineural hearing losses. The JNDs were measured using tone stimuli at 500 Hz. In addition to these auditory tests, the participants completed a cognitive test (Trail Making Test). Significant performance improvements in IPD discrimination were observed across test sessions. Strong correlations were found between IPD and ILD discrimination performance. Very strong correlations were observed between IPD discrimination and Trail Making performance as well as strong correlations between ILD discrimination and Trail Making performance. These relationships indicate that interindividual variability in IPD discrimination performance did not exclusively reflect deficits specific to any auditory processing, including early auditory processing of temporal information. The observed relationships between spatial audition and cognition may instead be attributable to a modality-general spatial processing deficit and/or individual differences in global processing speed.


Subject(s)
Auditory Perception/physiology , Hearing Loss, Sensorineural/physiopathology , Adult , Age Factors , Aged , Auditory Threshold , Cognition/physiology , Female , Hearing , Hearing Loss , Hearing Tests , Humans , Male
9.
Hear Res ; 379: 52-58, 2019 08.
Article in English | MEDLINE | ID: mdl-31075611

ABSTRACT

As direct-to-reverberant energy ratio (DRR) decreases or decay time increases, speech intelligibility tends to decrease for both normal-hearing and hearing-impaired listeners. Given this relationship, it is easy to assume that perceived reverberation (reverberance) would act as an intermediary: as physical reverberation increases, so does reverberance, and speech intelligibility decreases as a result. This assumption has not been tested explicitly. Two experiments were conducted to test this hypothesis. In Experiment 1, listeners performed a magnitude estimation task, reporting reverberance for speech stimuli that were convolved with impulse responses whose reverberant properties were manipulated. Listeners reported a decrease in reverberance when the DRR was increased at both ears (Natural Room condition), but not when it was increased at only the ear nearest the source (Hybrid condition). In Experiment 2, listeners performed a speech intelligibility task wherein noise-masked speech was convolved with a subset of the impulse responses from Experiment 1. As predicted by the speech transmission index (STI), speech intelligibility was good in cases where at least one ear received non-reverberant speech, including the Hybrid listening condition in Experiment 1. Thus, the Hybrid listening condition resulted simultaneously in high reverberance (Exp. 1) and high speech intelligibility (Exp. 2), demonstrating that reverberance and speech intelligibility can be dissociated.
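
The direct-to-reverberant energy ratio named above is commonly computed from a room impulse response by windowing the direct sound; a minimal sketch, with an assumed 2.5-ms direct-sound window, is:

```python
# Sketch: direct-to-reverberant ratio from an impulse response (window is illustrative).
import numpy as np

def drr_db(rir, fs, direct_window_ms=2.5):
    """Energy within a short window around the direct-sound peak, in dB,
    relative to all energy arriving after that window."""
    peak = int(np.argmax(np.abs(rir)))
    half = int(fs * direct_window_ms / 1e3)
    direct_energy = np.sum(rir[max(0, peak - half):peak + half] ** 2)
    reverberant_energy = np.sum(rir[peak + half:] ** 2) + 1e-12
    return 10 * np.log10(direct_energy / reverberant_energy)
```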


Subject(s)
Speech Intelligibility/physiology , Speech Perception/physiology , Acoustic Stimulation , Adolescent , Female , Functional Laterality/physiology , Healthy Volunteers , Humans , Male , Noise , Perceptual Masking/physiology , Speech Acoustics , Young Adult
10.
Ear Hear ; 40(5): 1098-1105, 2019.
Article in English | MEDLINE | ID: mdl-31025984

ABSTRACT

OBJECTIVES: Previous research has suggested that when listening in modulated noise, individuals benefit from different wide dynamic range compression (WDRC) speeds depending on their working memory ability. Reverberation reduces the modulation depth of signals and may impact the relation between WDRC speed and working memory. The purpose of this study was to examine this relation across a range of reverberant conditions. DESIGN: Twenty-eight older listeners with mild-to-moderate sensorineural hearing impairment were recruited for the present study. Individual working memory was measured using a Reading Span test. Sentences were combined with noise at two signal-to-noise ratios (2 and 5 dB SNR), and reverberation was simulated at a range of reverberation times (0.00, 0.75, 1.50, and 3.00 sec). Speech intelligibility was measured for sentences processed with simulated fast-acting and slow-acting WDRC. RESULTS: There was a significant relation between WDRC speed and working memory with minimal or no reverberation. Consistent with previous research, this relation was such that individuals with high working memory had higher speech intelligibility with fast-acting WDRC, and individuals with low working memory performed better with slow-acting WDRC. However, at longer reverberation times, there was no relation between WDRC speed and working memory. CONCLUSIONS: Consistent with previous studies, results suggest that there is an advantage of tailoring WDRC speed based on an individual's working memory under anechoic conditions. However, the present results further suggest that there may not be such a benefit in reverberant listening environments due to the reduction in signal modulation.
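
The fast- versus slow-acting WDRC manipulation can be illustrated with a basic single-channel compressor in which only the attack and release time constants differ; the threshold, ratio, and time constants below are illustrative, not the study's parameters:

```python
# Sketch: single-channel WDRC with adjustable attack/release (illustrative parameters).
import numpy as np

def wdrc(signal, fs, threshold_dbfs=-40.0, ratio=3.0, attack_ms=5.0, release_ms=50.0):
    """Envelope follower with separate attack/release time constants, then gain
    reduction above threshold at the given compression ratio.  Fast-acting vs
    slow-acting processing differs only in attack_ms/release_ms."""
    att = np.exp(-1.0 / (fs * attack_ms / 1e3))
    rel = np.exp(-1.0 / (fs * release_ms / 1e3))
    env = np.zeros_like(signal)
    level = 0.0
    for i, x in enumerate(np.abs(signal)):
        coeff = att if x > level else rel
        level = coeff * level + (1 - coeff) * x
        env[i] = level
    level_db = 20 * np.log10(env + 1e-9)
    gain_db = np.where(level_db > threshold_dbfs,
                       (threshold_dbfs - level_db) * (1 - 1 / ratio), 0.0)
    return signal * 10 ** (gain_db / 20)

# e.g. fast-acting: wdrc(x, fs, attack_ms=5, release_ms=50)
#      slow-acting: wdrc(x, fs, attack_ms=50, release_ms=1500)
```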


Subject(s)
Hearing Loss, Sensorineural/physiopathology , Memory, Short-Term , Noise , Speech Perception , Aged , Aged, 80 and over , Female , Hearing Loss, Sensorineural/psychology , Humans , Male , Middle Aged , Signal Processing, Computer-Assisted , Signal-To-Noise Ratio
11.
J Acoust Soc Am ; 143(5): 3068, 2018 05.
Article in English | MEDLINE | ID: mdl-29857737

ABSTRACT

It has been demonstrated that prior listening exposure to a reverberant environment can improve speech understanding in that environment. Previous studies have shown that the buildup of this effect is brief (less than 1 s) and seems largely to be elicited by exposure to the temporal modulation characteristics of the room environment. Situations that might be expected to cause a disruption in this process have yet to be demonstrated. This study seeks to address this issue by showing what types of changes in the acoustic environment cause a breakdown of the room exposure phenomenon. Using speech carrier phrases featuring sudden changes in the acoustic environment, breakdown in the room exposure effect was observed when there was a change in the late reverberation characteristics of the room that signaled a different room environment. Changes in patterns of early reflections within the same room environment did not elicit breakdown. Because the environmental situations that resulted in breakdown also resulted in substantial changes to the broadband temporal modulation characteristic of the signal reaching the ears, results from this study provide additional support for the hypothesis that the room exposure phenomenon is linked to the temporal modulation characteristics of the environment.


Subject(s)
Acoustic Stimulation/methods , Auditory Perception/physiology , Environment , Interior Design and Furnishings , Perceptual Masking/physiology , Speech Intelligibility/physiology , Adolescent , Adult , Female , Humans , Male , Young Adult
12.
J Acoust Soc Am ; 142(1): EL130, 2017 07.
Article in English | MEDLINE | ID: mdl-28764441

ABSTRACT

Wide dynamic range compression (WDRC) processing in hearing aids alters the signal-to-noise ratio (SNR) of a speech-in-noise signal. This effect depends on the modulations of the speech and noise, input SNR, and WDRC speed. The purpose of the present experiment was to examine the change in output SNR caused by the interaction between modulation characteristics and WDRC speed. Two modulation manipulations were examined: (1) reverberation and (2) variation in background talker number. Results indicated that fast-acting WDRC altered SNR more than slow-acting WDRC; however, reverberation reduced this difference. Additionally, less modulated maskers led to poorer output SNRs than modulated maskers.


Subject(s)
Correction of Hearing Impairment/instrumentation , Hearing Aids , Noise/adverse effects , Perceptual Masking , Persons With Hearing Impairments/rehabilitation , Speech Perception , Acoustic Stimulation , Audiometry, Speech , Humans , Motion , Persons With Hearing Impairments/psychology , Signal Processing, Computer-Assisted , Signal-To-Noise Ratio , Sound , Time Factors , Vibration
13.
Hear Res ; 341: 168-178, 2016 11.
Article in English | MEDLINE | ID: mdl-27596251

ABSTRACT

When perceiving speech, listeners compensate for reverberation and stable spectral peaks in the speech signal. Despite natural listening conditions usually adding both reverberation and spectral coloration, these processes have only been studied separately. Reverberation smears spectral peaks across time, which is predicted to increase listeners' compensation for these peaks. This prediction was tested using sentences presented with or without a simulated reverberant sound field. All sentences had a stable spectral peak (added by amplifying frequencies matching the second formant frequency [F2] in the target vowel) before a test vowel varying from /i/ to /u/ in F2 and spectral envelope (tilt). In Experiment 1, listeners demonstrated increased compensation (larger decrease in F2 weights and larger increase in spectral tilt weights for identifying the target vowel) in reverberant speech than in nonreverberant speech. In Experiment 2, increased compensation was shown not to be due to reverberation tails. In Experiment 3, adding a pure tone to nonreverberant speech at the target vowel's F2 frequency increased compensation, revealing that these effects are not specific to reverberation. Results suggest that perceptual adjustment to stable spectral peaks in the listening environment is not affected by their source or cause.
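
One plausible way to impose such a stable spectral peak on a precursor signal is to mix in a narrow band-pass-filtered copy centered at the target F2; the filter type, Q, and gain below are illustrative assumptions, not the study's implementation:

```python
# Sketch: add a spectral peak at a vowel's F2 frequency (illustrative filter settings).
import numpy as np
from scipy.signal import iirpeak, lfilter

def add_spectral_peak(signal, fs, peak_hz, gain_db=15.0, q=5.0):
    """Boost a narrow band around peak_hz by adding a scaled band-pass copy.
    iirpeak gives a unity-gain second-order peak filter, so adding
    (10**(gain_db/20) - 1) times its output yields roughly gain_db of boost."""
    b, a = iirpeak(peak_hz, Q=q, fs=fs)
    band = lfilter(b, a, signal)
    return signal + (10 ** (gain_db / 20) - 1.0) * band
```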


Subject(s)
Auditory Perception , Phonetics , Speech Acoustics , Speech Perception , Calibration , Environment , Humans , Language , Noise , Psychometrics , Regression Analysis , Sound Spectrography , Time Factors
14.
J Acoust Soc Am ; 140(1): 74, 2016 07.
Article in English | MEDLINE | ID: mdl-27475133

ABSTRACT

There is now converging evidence that a brief period of prior listening exposure to a reverberant room can influence speech understanding in that environment. Although the effect appears to depend critically on the amplitude modulation characteristic of the speech signal reaching the ear, the extent to which the effect may be influenced by room acoustics has not been thoroughly evaluated. This study seeks to fill this gap in knowledge by testing the effect of prior listening exposure or listening context on speech understanding in five different simulated sound fields, ranging from anechoic space to a room with broadband reverberation time (T60) of approximately 3 s. Although substantial individual variability in the effect was observed and quantified, the context effect was, on average, strongly room dependent. At threshold, the effect was minimal in anechoic space, increased to a maximum of 3 dB on average in moderate reverberation (T60 = 1 s), and returned to minimal levels again in high reverberation. This interaction suggests that the functional effects of prior listening exposure may be limited to sound fields with moderate reverberation (0.4 ≤ T60 ≤ 1 s).


Subject(s)
Acoustics , Speech Acoustics , Speech Intelligibility , Acoustic Stimulation , Adult , Auditory Threshold , Environment Design , Female , Humans , Male , Perceptual Masking , Psychometrics , Speech Perception , Young Adult
15.
Atten Percept Psychophys ; 78(2): 373-95, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26590050

ABSTRACT

Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.


Subject(s)
Auditory Diseases, Central/physiopathology , Auditory Pathways/physiology , Auditory Perception/physiology , Blindness/physiopathology , Cues , Distance Perception/physiology , Hearing Loss/physiopathology , Acoustic Stimulation , Hearing Aids , Humans
16.
J Neurosci ; 35(13): 5360-72, 2015 Apr 01.
Article in English | MEDLINE | ID: mdl-25834060

ABSTRACT

Mechanisms underlying sound source distance localization are not well understood. Here we tested the hypothesis that a novel mechanism can create monaural distance sensitivity: a combination of auditory midbrain neurons' sensitivity to amplitude modulation (AM) depth and distance-dependent loss of AM in reverberation. We used virtual auditory space (VAS) methods for sounds at various distances in anechoic and reverberant environments. Stimulus level was constant across distance. With increasing modulation depth, some rabbit inferior colliculus neurons increased firing rates whereas others decreased. These neurons exhibited monotonic relationships between firing rates and distance for monaurally presented noise when two conditions were met: (1) the sound had AM, and (2) the environment was reverberant. The firing rates as a function of distance remained approximately constant without AM in either environment and, in an anechoic condition, even with AM. We corroborated this finding by reproducing the distance sensitivity using a neural model. We also conducted a human psychophysical study using similar methods. Normal-hearing listeners reported perceived distance in response to monaural 1-octave, 4-kHz noise source sounds presented at distances of 35-200 cm. We found parallels between the rabbit neural and human responses. In both, sound distance could be discriminated only if the monaural sound in reverberation had AM. These observations support the hypothesis. When other cues are available (e.g., in binaural hearing), how much the auditory system actually uses the AM as a distance cue remains to be determined.
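
The distance-dependent loss of AM in reverberation can be illustrated by measuring the residual modulation depth of an AM noise after convolution with impulse responses measured at different distances; the broadband carrier, the 32-Hz rate, and the Hilbert-envelope depth estimate below are illustrative simplifications of the stimuli and analysis described above:

```python
# Sketch: residual AM depth after passing an AM noise through a room impulse response.
import numpy as np
from scipy.signal import hilbert, fftconvolve

def residual_am_depth(rir, fs, fm_hz=32.0, dur_s=1.0, depth=1.0, seed=0):
    """Modulation depth (m) remaining at fm_hz, estimated from the Hilbert
    envelope of the reverberant signal relative to its mean."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur_s * fs)) / fs
    am_noise = (1 + depth * np.sin(2 * np.pi * fm_hz * t)) * rng.standard_normal(t.size)
    reverberant = fftconvolve(am_noise, rir)[:t.size]
    env = np.abs(hilbert(reverberant))
    k = int(round(fm_hz * dur_s))                    # FFT bin of the modulation rate
    component = 2 * np.abs(np.fft.rfft(env - env.mean()))[k] / env.size
    return component / env.mean()
```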


Subject(s)
Cues , Distance Perception/physiology , Inferior Colliculi/cytology , Inferior Colliculi/physiology , Neurons/physiology , Sound Localization/physiology , Acoustic Stimulation , Action Potentials/physiology , Adolescent , Animals , Female , Humans , Male , Models, Neurological , Rabbits , Young Adult
17.
J Assoc Res Otolaryngol ; 16(2): 255-62, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25595542

ABSTRACT

The acoustical cues and physiological processing mechanisms underlying the perception of the distance of sound sources are not well understood. To understand the relation between physiology and behavior, a first step is to use an animal model to study distance sensitivity. The goal of these experiments was to establish the capacity of the Dutch-belted rabbit to discriminate between sound sources at two distances. Trains of noise bursts were presented from speakers that were located either directly in front of the rabbit or at a 45° angle in azimuth. The reference speaker was positioned at distances of 20, 40, and 60 cm from the subject, and the more distant test speaker was systematically moved to determine the smallest difference in distance that could be reliably discriminated by the subject. Noise stimuli had one of three bandwidths: wideband (0.1-10 kHz), low-pass (0.1-3 kHz), or high-pass (3-10 kHz). The mean stimulus level was 60 dB sound pressure level (SPL) at the location of the rabbit's head, and the level was roved over a 12-dB range from trial to trial to reduce the availability of level cues. An operant one-interval two-alternative non-forced choice task was used, with a blocked two-down-one-up tracking procedure to determine the distance discriminability. Rabbits were consistently able to discriminate two distances when they were sufficiently separated. Sensitivity was better when the reference distance was 60 cm at either azimuth (distance ratio = 1.5) and was worse when the reference distance was 20 cm (distance ratio = 2.4 at 0° and 1.75 at 45°).
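
The two-down-one-up rule referenced above is a standard adaptive procedure that converges near the 70.7%-correct point; a generic sketch of the tracking logic, not specific to the animal testing setup, is:

```python
# Sketch: generic 2-down-1-up adaptive staircase (not the study's exact procedure).
def two_down_one_up(start, step, respond, n_reversals=8, floor=0.0):
    """`respond(value)` returns True on a correct trial.  Two consecutive correct
    trials step the tracked value down, any error steps it up; the threshold is
    the mean of the last six reversal values."""
    value, n_correct, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(value):
            n_correct += 1
            if n_correct == 2:                  # two correct in a row -> step down
                n_correct = 0
                if direction == +1:
                    reversals.append(value)     # direction change: count a reversal
                direction = -1
                value = max(floor, value - step)
        else:                                   # any error -> step up
            n_correct = 0
            if direction == -1:
                reversals.append(value)
            direction = +1
            value += step
    last = reversals[-6:]
    return sum(last) / len(last)
```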


Subject(s)
Sound Localization/physiology , Animals , Auditory Threshold , Discrimination, Psychological , Female , Noise , Rabbits
18.
Front Psychol ; 5: 1097, 2014.
Article in English | MEDLINE | ID: mdl-25339924

ABSTRACT

Past research has shown that auditory distance estimation improves when listeners are given the opportunity to see all possible sound sources when compared to no visual input. It has also been established that distance estimation is more accurate in vision than in audition. The present study investigates the degree to which auditory distance estimation is improved when matched with a congruent visual stimulus. Virtual sound sources based on binaural room impulse response (BRIR) measurements made from distances ranging from approximately 0.3 to 9.8 m in a concert hall were used as auditory stimuli. Visual stimuli were photographs taken from the participant's perspective at each distance in the impulse response measurement setup presented on a large HDTV monitor. Participants were asked to estimate egocentric distance to the sound source in each of three conditions: auditory only (A), visual only (V), and congruent auditory/visual stimuli (A+V). Each condition was presented within its own block. Sixty-two participants were tested in order to quantify the response variability inherent in auditory distance perception. Distance estimates from both the V and A+V conditions were found to be considerably more accurate and less variable than estimates from the A condition.

19.
J Acoust Soc Am ; 135(6): EL239-45, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24907828

ABSTRACT

The temporal envelope and fine structure of speech make distinct contributions to the perception of speech in normal-hearing listeners, and are differentially affected by room reverberation. Previous work has demonstrated enhanced speech intelligibility in reverberant rooms when prior exposure to the room was provided. Here, the relative contributions of envelope and fine structure cues to this intelligibility enhancement were tested using an open-set speech corpus and virtual auditory space techniques to independently manipulate the speech cues within a simulated room. Intelligibility enhancement was observed only when the envelope was reverberant, indicating that the enhancement is envelope-based.
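
A broadband, single-band sketch of the Hilbert-transform envelope/fine-structure exchange that underlies this kind of manipulation (the study presumably used a multi-band analysis; the single-band version here is only illustrative):

```python
# Sketch: impose one signal's envelope on another signal's temporal fine structure.
import numpy as np
from scipy.signal import hilbert

def swap_envelope_onto_tfs(envelope_source, tfs_source):
    """Multiply the Hilbert envelope of one signal by the fine structure
    (cosine of instantaneous phase) of another.  A filterbank version would
    apply this per band and sum the bands."""
    n = min(len(envelope_source), len(tfs_source))
    env = np.abs(hilbert(envelope_source[:n]))
    tfs = np.cos(np.angle(hilbert(tfs_source[:n])))
    return env * tfs
```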


Subject(s)
Speech Acoustics , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Acoustics , Audiometry, Pure-Tone , Audiometry, Speech , Auditory Threshold , Facility Design and Construction , Humans , Motion , Perceptual Masking , Sound , Time Factors , Vibration
20.
Proc Meet Acoust ; 19(1)2013.
Article in English | MEDLINE | ID: mdl-24163718

ABSTRACT

Previous work [Zahorik et al., POMA, 15, 050002 (2012)] has reported that for both broadband and narrowband noise carrier signals in a simulated reverberant sound field, human sensitivity to amplitude modulation (AM) is higher than would be predicted based on the acoustical modulation transfer function (MTF) of the listening environment. These results may be suggestive of mechanisms that functionally enhance modulation in reverberant listening, although many details of this enhancement effect are unknown. Given recent findings that demonstrate improvements in speech understanding with prior exposure to reverberant listening environments, it is of interest to determine whether listening exposure to a reverberant room might also influence AM detection in the room, and perhaps contribute to the AM enhancement effect. Here, AM detection thresholds were estimated (using an adaptive 2-alternative forced-choice procedure) in each of two listening conditions: one in which consistent listening exposure to a particular room was provided, and a second that intentionally disrupted listening exposure by varying the room from trial to trial. Results suggest that consistent prior listening exposure contributes to enhanced AM sensitivity in rooms. [Work supported by the NIH/NIDCD.]
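
The acoustical MTF mentioned above can be computed from a room impulse response with the standard Schroeder relation, MTF(fm) = |FT{h²(t)}(fm)| / ∫h²(t)dt; a minimal sketch:

```python
# Sketch: Schroeder acoustical modulation transfer function of an impulse response.
import numpy as np

def acoustic_mtf(rir, fs, mod_freqs_hz=(2, 4, 8, 16, 32, 64)):
    """Magnitude of the Fourier transform of the squared impulse response at each
    modulation frequency, normalized by the total energy of the response."""
    t = np.arange(len(rir)) / fs
    energy = np.sum(rir ** 2)
    return {fm: np.abs(np.sum(rir ** 2 * np.exp(-2j * np.pi * fm * t))) / energy
            for fm in mod_freqs_hz}
```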
