Results 1 - 20 of 50
1.
Ear Hear ; 45(4): 969-984, 2024.
Article in English | MEDLINE | ID: mdl-38472134

ABSTRACT

OBJECTIVES: The independence of left and right automatic gain controls (AGCs) used in cochlear implants can distort interaural level differences and thereby compromise dynamic sound source localization. We assessed the degree to which synchronizing left and right AGCs mitigates those difficulties as indicated by listeners' ability to use the changes in interaural level differences that come with head movements to avoid front-back reversals (FBRs). DESIGN: Broadband noise stimuli were presented from one of six equally spaced loudspeakers surrounding the listener. Sound source identification was tested for stimuli presented at 70 dBA (above AGC threshold) for 10 bilateral cochlear implant patients, under conditions where (1) patients remained stationary and (2) free head movements within ±30° were encouraged. These conditions were repeated for both synchronized and independent AGCs. The same conditions were run at 50 dBA, below the AGC threshold, to assess listeners' baseline performance when AGCs were not engaged. In this way, the expected high variability in listener performance could be separated from effects of independent AGCs to reveal the degree to which synchronizing AGCs could restore localization performance to what it was without AGC compression. RESULTS: The mean rate of FBRs was higher for sound stimuli presented at 70 dBA with independent AGCs, both with and without head movements, than at 50 dBA, suggesting that when AGCs were independently engaged they contributed to poorer front-back localization. When listeners remained stationary, synchronizing AGCs did not significantly reduce the rate of FBRs. When AGCs were independent at 70 dBA, head movements did not have a significant effect on the rate of FBRs. Head movements did have a significant group effect on the rate of FBRs at 50 dBA when AGCs were not engaged and at 70 dBA when AGCs were synchronized. Synchronization of AGCs, together with head movements, reduced the rate of FBRs to approximately what it was in the 50-dBA baseline condition. Synchronizing AGCs also had a significant group effect on listeners' overall percent correct localization. CONCLUSIONS: Synchronizing AGCs allowed listeners to mitigate front-back confusions introduced by unsynchronized AGCs when head motion was permitted, returning individual listener performance to roughly what it was in the 50-dBA baseline condition when AGCs were not engaged. Synchronization of AGCs did not overcome localization deficiencies that were observed when AGCs were not engaged and that are therefore unrelated to AGC compression.
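To illustrate why independent AGCs can distort interaural level differences (ILDs), here is a minimal sketch, not the actual clinical processing: a static compressor applied separately to each ear shrinks the level difference between ears, whereas a linked (synchronized) compressor driven by the louder ear applies the same gain to both sides and preserves the ILD. The threshold and compression ratio below are illustrative assumptions.

```python
def compress_dB(level_dB, threshold_dB=60.0, ratio=3.0):
    """Static compression in dB: levels above threshold are scaled down by the ratio."""
    if level_dB <= threshold_dB:
        return level_dB
    return threshold_dB + (level_dB - threshold_dB) / ratio

def ild_after_agc(left_dB, right_dB, synchronized):
    """Interaural level difference (left minus right) after the AGC stage."""
    if synchronized:
        # Linked AGCs: one gain, computed from the louder ear, is applied to
        # both sides, so the ILD is preserved.
        louder = max(left_dB, right_dB)
        gain = compress_dB(louder) - louder
        return (left_dB + gain) - (right_dB + gain)
    # Independent AGCs: each ear is compressed on its own, shrinking the ILD.
    return compress_dB(left_dB) - compress_dB(right_dB)

# A source off to the left: 70 dB at the left ear, 64 dB at the right ear.
print(ild_after_agc(70.0, 64.0, synchronized=True))   # 6.0 dB (ILD preserved)
print(ild_after_agc(70.0, 64.0, synchronized=False))  # 2.0 dB (ILD compressed)
```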


Subject(s)
Cochlear Implants , Sound Localization , Humans , Middle Aged , Male , Female , Aged , Adult , Cochlear Implantation , Head Movements/physiology , Noise , Aged, 80 and over
2.
J Acoust Soc Am ; 154(5): 2769-2771, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37916865

ABSTRACT

Is there evidence that listeners are "confused" about sound-source location when sound sources lie on cones-of-confusion? Two experiments determined whether response times and confidence ratings, as possible indices of "confusion," varied as a function of the frequency of occurrence of cones-of-confusion errors in azimuthal sound-source localization tasks. The results suggest that for sound-source localization judgments on an azimuth plane, there is little evidence that response times or confidence ratings vary with the frequency of occurrence of cones-of-confusion errors, consistent with the assumption that listeners are not "confused" in making sound-source location judgments when sound sources are on an azimuthal cone-of-confusion.

3.
J Acoust Soc Am ; 154(2): 661-670, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37540095

ABSTRACT

Front-back reversals (FBRs) in sound-source localization tasks due to cone-of-confusion errors on the azimuth plane occur with some regularity, and their occurrence is listener-dependent. There are fewer FBRs for wideband, high-frequency sounds than for low-frequency sounds, presumably because the sources of low-frequency sounds are localized on the basis of interaural differences (interaural time and level differences), which can lead to ambiguous responses. Spectral cues can aid in determining sound-source locations for wideband, high-frequency sounds, and such spectral cues do not lead to ambiguous responses. However, to what extent spectral features might aid sound-source localization is still not known. This paper explores conditions in which the spectral profiles of two-octave-wide noise bands, whose sources were localized on the azimuth plane, were randomly varied. The experiment demonstrated that such spectral profile randomization increased FBRs for high-frequency noise bands, presumably because whatever spectral features are used for sound-source localization were no longer as useful for resolving FBRs, and listeners relied on interaural differences for sound-source localization, which led to response ambiguities. Additionally, head rotation decreased FBRs in all cases, even when FBRs increased due to spectral profile randomization. In all cases, the occurrence of FBRs was listener-dependent.
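One way to picture the spectral-profile randomization is a noise band spanning two octaves whose subband levels are independently perturbed on every trial. The sketch below follows that idea; the band edges, number of subbands, and perturbation range are illustrative assumptions rather than the study's actual parameters.

```python
import numpy as np

def random_profile_noise(fs=44100, dur=0.5, f_lo=3000.0, n_subbands=8,
                         perturb_dB=10.0, rng=np.random.default_rng(0)):
    """Two-octave noise band (f_lo to 4*f_lo) with a randomized spectral profile."""
    n = int(fs * dur)
    # Start from a flat-spectrum Gaussian noise defined in the frequency domain.
    spectrum = (rng.standard_normal(n // 2 + 1)
                + 1j * rng.standard_normal(n // 2 + 1))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    f_hi = 4.0 * f_lo                       # two octaves above the lower edge
    edges = np.geomspace(f_lo, f_hi, n_subbands + 1)
    gain = np.zeros_like(freqs)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Each subband gets its own random level within +/- perturb_dB.
        level = rng.uniform(-perturb_dB, perturb_dB)
        gain[(freqs >= lo) & (freqs < hi)] = 10 ** (level / 20.0)
    x = np.fft.irfft(spectrum * gain, n)
    return x / np.max(np.abs(x))            # normalize to avoid clipping

stimulus = random_profile_noise()           # a new profile on every call
```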


Subject(s)
Cues , Sound Localization , Acoustic Stimulation , Noise/adverse effects , Sound , Sound Localization/physiology , Humans
4.
J Acoust Soc Am ; 152(3): 1804, 2022 09.
Article in English | MEDLINE | ID: mdl-36182280

ABSTRACT

A molecular (trial-by-trial) analysis of data from a cocktail-party, target-talker search task was used to test two general classes of explanations accounting for individual differences in listener performance: cue weighting models for which errors are tied to the speech features talkers have in common with the target and internal noise models for which errors are largely independent of these features. The speech of eight different talkers was played simultaneously over eight different loudspeakers surrounding the listener. The locations of the eight talkers varied at random from trial to trial. The listener's task was to identify the location of a target talker with which they had previously been familiarized. An analysis of the response counts to individual talkers showed predominant confusion with one talker sharing the same fundamental frequency and timbre as the target and, secondarily, other talkers sharing the same timbre. The confusions occurred for a roughly constant 31% of all trials for all listeners. The remaining errors were uniformly distributed across the remaining talkers and were responsible for the large individual differences in performance observed. The results are consistent with a model in which largely stimulus-independent factors (internal noise) are responsible for the wide variation in performance across listeners.
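A minimal simulation of the response model these results point to: a fixed proportion of cue-driven confusions shared by all listeners, plus a listener-specific, stimulus-independent error rate (internal noise) that accounts for individual differences. The probabilities below are illustrative assumptions, not fitted values.

```python
import numpy as np

def simulate_listener(n_trials, p_cue_confusion=0.31, p_internal_noise=0.2,
                      rng=np.random.default_rng(1)):
    """Count response types for one simulated listener.

    On each trial the response is: the target (correct), the talker sharing
    the target's F0 and timbre (cue-driven confusion), or some other talker
    (internal noise). Only p_internal_noise differs across listeners.
    """
    counts = {"correct": 0, "cue_confusion": 0, "other": 0}
    for _ in range(n_trials):
        r = rng.uniform()
        if r < p_cue_confusion:
            counts["cue_confusion"] += 1
        elif r < p_cue_confusion + p_internal_noise:
            counts["other"] += 1   # errors spread uniformly over remaining talkers
        else:
            counts["correct"] += 1
    return counts

# Two listeners who differ only in internal noise show very different accuracy,
# while their cue-driven confusion counts stay roughly constant.
print(simulate_listener(1000, p_internal_noise=0.05))
print(simulate_listener(1000, p_internal_noise=0.40))
```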


Subject(s)
Speech Perception , Individuality , Noise , Speech , Speech Perception/physiology
5.
Audiol Res ; 12(2): 99-112, 2022 Feb 23.
Article in English | MEDLINE | ID: mdl-35314608

ABSTRACT

Visual targets often become far more salient when they move against an otherwise static background, the so-called "pop-out" effect. In two experiments conducted over loudspeakers, we tested for a similar pop-out effect in the auditory domain. Tone-in-noise and noise-in-noise detection thresholds were measured using a 2-up, 1-down adaptive procedure under conditions where target and masker(s) were presented from the same or different locations and when the target was stationary or moved via amplitude-panning. In the first experiment, target tones of 0.5 kHz and 4 kHz were tested, maskers (2-4, depending on the condition) were independent Gaussian noises, and all stimuli were 500-ms duration. In the second experiment, a single pink noise masker (0.3-12 kHz) was presented with a single target at one of four bandwidths (0.3-0.6 kHz, 3-6 kHz, 6-12 kHz, 0.3-12 kHz) under conditions where target and masker were presented from the same or different locations and where the target moved or not. The results of both experiments failed to show a decrease in detection thresholds resulting from movement of the target.
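As a sketch of how a transformed up-down track of the general kind described here estimates a threshold, assume the signal is made harder after two consecutive correct responses and easier after each incorrect response, which converges near the 70.7%-correct point. The starting level, step size, stopping rule, and the toy psychometric function below are all illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def staircase_threshold(prob_correct_at, start_dB=70.0, step_dB=2.0,
                        n_reversals=8, rng=np.random.default_rng(2)):
    """Adaptive track: the signal level drops after two consecutive correct
    trials and rises after each incorrect one."""
    level, correct_run, last_dir = start_dB, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        correct = rng.uniform() < prob_correct_at(level)
        if correct:
            correct_run += 1
            if correct_run < 2:
                continue                    # need two in a row before stepping down
            correct_run, direction = 0, -1  # two correct: make the task harder
        else:
            correct_run, direction = 0, +1  # one miss: make the task easier
        if last_dir != 0 and direction != last_dir:
            reversals.append(level)         # track direction changed: a reversal
        last_dir = direction
        level += direction * step_dB
    return float(np.mean(reversals[2:]))    # average, discarding early reversals

# Toy psychometric function for a two-interval task: chance is 0.5 and the
# "true" threshold sits near 50 dB.
def p_correct(level_dB):
    return 0.5 + 0.5 / (1.0 + np.exp(-(level_dB - 50.0) / 3.0))

print(staircase_threshold(p_correct))
```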

6.
J Acoust Soc Am ; 150(4): R7, 2021 10.
Article in English | MEDLINE | ID: mdl-34717479

ABSTRACT

The Reflections series takes a look back on historical articles from The Journal of the Acoustical Society of America that have had a significant impact on the science and practice of acoustics.


Subject(s)
Acoustics , Hearing Loss , Humans
7.
J Speech Lang Hear Res ; 64(7): 2811-2824, 2021 07 16.
Article in English | MEDLINE | ID: mdl-34100627

ABSTRACT

Purpose: For bilaterally implanted patients, the automatic gain control (AGC) in both left and right cochlear implant (CI) processors is usually neither linked nor synchronized. At high AGC compression ratios, this lack of coordination between the two processors can distort interaural level differences, the only useful interaural difference cue available to CI patients. This study assessed the improvement, if any, in the utility of interaural level differences for sound source localization in the frontal hemifield when AGCs were synchronized versus independent and when listeners were stationary versus allowed to move their heads. Method: Sound source identification of broadband noise stimuli was tested for seven bilateral CI patients using 13 loudspeakers in the frontal hemifield, under conditions where AGCs were linked and unlinked. For half the conditions, patients remained stationary; in the other half, they were encouraged to rotate or reorient their heads within a range of approximately ±30° during sound presentation. Results: In general, those listeners who already localized reasonably well with independent AGCs gained the least from AGC synchronization, perhaps because there was less room for improvement. Those listeners who performed worst with independent AGCs gained the most from synchronization. All listeners performed as well or better with synchronization than without; however, intersubject variability was high. Head movements had little impact on the effectiveness of synchronization of AGCs. Conclusion: Synchronization of AGCs offers one promising strategy for improving localization performance in the frontal hemifield for bilaterally implanted CI patients. Supplemental Material: https://doi.org/10.23641/asha.14681412


Subject(s)
Cochlear Implantation , Cochlear Implants , Sound Localization , Speech Perception , Head Movements , Humans
8.
Ear Hear ; 41(6): 1660-1674, 2020.
Article in English | MEDLINE | ID: mdl-33136640

ABSTRACT

OBJECTIVES: We investigated the ability of single-sided deaf listeners implanted with a cochlear implant (SSD-CI) to (1) determine the front-back and left-right location of sound sources presented from loudspeakers surrounding the listener and (2) use small head rotations to further improve their localization performance. The resulting behavioral data were used for further analyses investigating the value of so-called "monaural" spectral shape cues for front-back sound source localization. DESIGN: Eight SSD-CI patients were tested with their cochlear implant (CI) on and off. Eight normal-hearing (NH) listeners, with one ear plugged during the experiment, and another group of eight NH listeners, with neither ear plugged, were also tested. Gaussian noises of 3-sec duration were band-pass filtered to 2-8 kHz and presented from 1 of 6 loudspeakers surrounding the listener, spaced 60° apart. Perceived sound source localization was tested under conditions where the patients faced forward with the head stationary, and under conditions where they rotated their heads within a restricted range (the exact limits are given in the full-text article). RESULTS: (1) Under stationary listener conditions, unilaterally-plugged NH listeners and SSD-CI listeners (with their CIs both on and off) were nearly at chance in determining the front-back location of high-frequency sound sources. (2) Allowing rotational head movements improved performance in both the front-back and left-right dimensions for all listeners. (3) For SSD-CI patients with their CI turned off, head rotations substantially reduced front-back reversals, and the combination of turning on the CI with head rotations led to near-perfect resolution of front-back sound source location. (4) Turning on the CI also improved left-right localization performance. (5) As expected, NH listeners with both ears unplugged localized to the correct front-back and left-right hemifields both with and without head movements. CONCLUSIONS: Although SSD-CI listeners demonstrate a relatively poor ability to distinguish the front-back location of sound sources when their head is stationary, their performance is substantially improved with head movements. Most of this improvement occurs when the CI is off, suggesting that the NH ear does most of the "work" in this regard, though some additional gain is introduced with turning the CI on. During head turns, these listeners appear to primarily rely on comparing changes in head position to changes in monaural level cues produced by the direction-dependent attenuation of high-frequency sounds that result from acoustic head shadowing. In this way, SSD-CI listeners overcome limitations to the reliability of monaural spectral and level cues under stationary conditions. SSD-CI listeners may have learned, through chronic monaural experience before CI implantation, or with the relatively impoverished spatial cues provided by their CI-implanted ear, to exploit the monaural level cue. Unilaterally-plugged NH listeners were also able to use this cue during the experiment to realize approximately the same magnitude of benefit from head turns just minutes after plugging, though their performance was less accurate than that of the SSD-CI listeners, both with and without their CI turned on.
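The monaural level cue invoked here can be pictured with a toy head-shadow model (not a measured head-related transfer function): the high-frequency level reaching the open ear depends on the angle between that ear and the source, so the sign of the level change during a head turn differs for front versus back sources. The maximum shadow depth and the cosine attenuation law below are illustrative assumptions.

```python
import numpy as np

def level_at_left_ear(source_az_deg, head_az_deg, max_shadow_dB=15.0):
    """Toy head-shadow model: level (dB re: unshadowed) at the left ear.

    The left ear points toward head_az - 90 deg; attenuation grows as the
    source moves toward the opposite side of the head.
    """
    ear_axis = np.deg2rad(head_az_deg - 90.0)
    src = np.deg2rad(source_az_deg)
    c = np.cos(src - ear_axis)            # cosine of ear-to-source angle, in [-1, 1]
    return -max_shadow_dB * (1.0 - c) / 2.0

# Source at +60 deg (front-right) vs. +120 deg (back-right): a head turn toward
# +30 deg changes the level at the left ear in opposite directions, which is
# the information a listener can use to resolve front from back.
for src in (60.0, 120.0):
    change = level_at_left_ear(src, 30.0) - level_at_left_ear(src, 0.0)
    print(src, round(change, 2))
```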


Subject(s)
Cochlear Implantation , Cochlear Implants , Sound Localization , Head Movements , Humans , Reproducibility of Results
9.
J Acoust Soc Am ; 148(2): R3, 2020 08.
Article in English | MEDLINE | ID: mdl-32873046

ABSTRACT

The Reflections series takes a look back on historical articles from The Journal of the Acoustical Society of America that have had a significant impact on the science and practice of acoustics.


Subject(s)
Acoustics
10.
Acoust Sci Technol ; 41(1): 113-120, 2020 Jan.
Article in English | MEDLINE | ID: mdl-34305431

ABSTRACT

A review is provided of data published or presented by the authors from two populations of subjects (normal-hearing listeners and patients fit with cochlear implants, CIs) concerning sound source localization when listeners move. The overall theme of the review is that sound source localization requires an integration of auditory-spatial and head-position cues and is, therefore, a multisystem process. Research with normal-hearing listeners includes work related to the Wallach Azimuth Illusion and additional aspects of sound source localization perception when listeners and sound sources rotate. Research with CI patients involves investigations of sound source localization performance by patients fit with a single CI, bilateral CIs, a CI and a hearing aid (bimodal patients), and single-sided deaf patients with one normally functioning ear and the other ear fit with a CI. Past research involving stationary CI patients and more recent data based on CI patients' use of head rotation to localize sound sources are summarized.

11.
J Acoust Soc Am ; 146(4): 2709, 2019 10.
Article in English | MEDLINE | ID: mdl-31671982

ABSTRACT

Thirty-two listeners participated in experiments involving five filtered noises when listeners kept their eyes open or closed, for stimuli of short or long duration, and for stimuli that were presented at random locations or in a largely rotational procession. Individual differences in the proportion of front-back reversals (FBRs) were measured. For any one filtered noise, listeners' proportions of FBRs were strongly positively correlated across conditions, but the correlations were weak when FBR proportions were compared across different filtered noises. The results suggest that, for each individual listener, the rate of FBRs is stable for any one filtered noise, but not across filtered noises.
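A small sketch of the statistic behind this pattern: correlate listeners' FBR proportions between two measurements with the same noise and between two different noises. The numbers below are made up purely to illustrate the computation; they are not data from the study.

```python
import numpy as np

# Hypothetical FBR proportions for six listeners: two measurements with
# noise A and one with noise B (illustrative values only).
noise_A_first  = np.array([0.05, 0.12, 0.30, 0.22, 0.08, 0.40])
noise_A_second = np.array([0.06, 0.10, 0.33, 0.20, 0.09, 0.38])
noise_B        = np.array([0.25, 0.07, 0.10, 0.35, 0.30, 0.05])

within = np.corrcoef(noise_A_first, noise_A_second)[0, 1]  # high: stable within a noise
across = np.corrcoef(noise_A_first, noise_B)[0, 1]         # low: not stable across noises
print(round(within, 2), round(across, 2))
```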


Subject(s)
Acoustic Stimulation , Noise , Sound Localization , Adult , Female , Humans , Individuality , Male , Signal Processing, Computer-Assisted , Sound , Young Adult
12.
J Acoust Soc Am ; 146(3): EL219, 2019 09.
Article in English | MEDLINE | ID: mdl-31590525

ABSTRACT

Normal hearing listeners discriminated a change in the number of talkers speaking consonant-vowel pairs between two auditory scenes. The number of talkers (n = 2, 4, 6, or 8) in one scene was incremented by Δn talkers (Δn = 1-8 talkers, depending on n) in the other scene. The perceptual size of the auditory scene seems to be small, as discrimination performance reached an approximate 0.75 proportion correct asymptote for n > 4. Overall level differences affected performance, but spatial configuration and talker similarity had very little effect.


Subject(s)
Speech Perception , Adult , Auditory Threshold , Female , Humans , Male , Sound Localization
13.
J Acoust Soc Am ; 146(1): 382, 2019 07.
Article in English | MEDLINE | ID: mdl-31370595

ABSTRACT

Wallach [J. Exp. Psychol. 27, 339-368 (1940)] described a "2-1" rotation scenario in which a sound source rotates on an azimuth circle around a rotating listener at twice the listener's rate of rotation. In this scenario, listeners often perceive an illusory stationary sound source, even though the actual sound source is rotating. This Wallach Azimuth Illusion (WAI) was studied to explore Wallach's description of sound-source localization as a required interaction of binaural and head-position cues (i.e., sound-source localization is a multisystem process). The WAI requires front-back reversed sound-source localization. To extend and consolidate the current understanding of the WAI, listeners and sound sources were rotated over large distances and long time periods, which had not been done before. The data demonstrate a strong correlation between measures of the predicted WAI locations and front-back reversals (FBRs). When sounds are unlikely to elicit FBRs, sound sources are perceived veridically as rotating, but the results are listener dependent. Listeners' eyes were always open, and there was little evidence under these conditions that changes in vestibular function affected the occurrence of the WAI. The results show that the WAI is a robust phenomenon that should be useful for further exploration of sound-source localization as a multisystem process.
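The geometry of the "2-1" scenario can be sketched with a toy interaural cue that depends only on the sine of the source azimuth relative to the head (ignoring spectral cues and distance): a source rotating at twice the head's rate produces exactly the same cue trajectory as a stationary source located at the front-back mirror of the moving source's starting position, which is why the illusion requires a front-back reversal. The angle values below are arbitrary illustrations.

```python
import numpy as np

head_az = np.linspace(0.0, 90.0, 7)             # head rotates from 0 to 90 deg
rotating_src = 2.0 * head_az                    # "2-1" source: twice the head's rotation
stationary_src = np.full_like(head_az, 180.0)   # a fixed source directly behind

def lateral_cue(source_az_deg, head_az_deg):
    """Toy interaural cue: sine of the source azimuth relative to the head.
    Positions mirrored across the interaural axis give the same value, which
    is why front-back reversals can arise from this cue alone."""
    return np.sin(np.deg2rad(source_az_deg - head_az_deg))

# The rotating source and the stationary rear source produce identical cue
# trajectories, so a listener relying on interaural cues (plus a front-back
# reversal) hears the rotating source as stationary.
print(np.allclose(lateral_cue(rotating_src, head_az),
                  lateral_cue(stationary_src, head_az)))   # True
```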


Subject(s)
Auditory Perception/physiology , Illusions/physiology , Sound Localization/physiology , Sound , Acoustic Stimulation/methods , Adult , Female , Humans , Male , Middle Aged , Rotation , Vestibule, Labyrinth/physiology
14.
J Acoust Soc Am ; 145(4): EL310, 2019 04.
Article in English | MEDLINE | ID: mdl-31046327

ABSTRACT

Listeners discriminated changes in the spatial configuration of two-to-eight consonant-vowel (CV) stimuli spoken by different talkers, all simultaneously presented from different loudspeakers in various azimuthal spatial configurations. The number of CVs, spatial configuration of the sound sources, and similarity of the talkers speaking the CVs were varied. Experiment I used a same-different procedure to determine the discriminability of different spatial configurations of multiple sound sources. In experiment II, listeners determined the direction (clockwise or counterclockwise) of sound source rotation over eight rotational steps. In both experiments, performance declined as the number of sound sources increased beyond two.


Subject(s)
Discrimination, Psychological , Sound Localization , Adult , Female , Humans , Male , Phonetics , Speech Acoustics
15.
J Acoust Soc Am ; 144(3): EL236, 2018 09.
Article in English | MEDLINE | ID: mdl-30424669

ABSTRACT

Normal hearing listeners judged loudness differences between two complex speech sounds, one consisting of "n" consonant-vowel (CV) pairs each spoken by a different talker and one consisting of "2n" CV pairs. When n was less than four, listeners' judgments of loudness differences between the two sounds were based on the level of the individual CVs within each sound, not the overall level of the sounds. When n was four or more, listeners' judgments of loudness differences between the two sounds were based on the overall level of the two sounds consisting of n or 2n CVs.
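A brief sketch of the level arithmetic behind the two candidate cues, under the simplifying assumption that the CVs are independent, equal-level sounds whose powers add: doubling the number of CVs raises the overall level by about 3 dB while leaving each individual CV's level unchanged. The per-talker level is an arbitrary illustration.

```python
import numpy as np

def overall_level_dB(n_talkers, per_talker_level_dB=60.0):
    """Overall level of n independent, equal-level sounds: powers add, so the
    total rises by 10*log10(n) dB over a single sound."""
    return per_talker_level_dB + 10.0 * np.log10(n_talkers)

for n in (2, 4, 8):
    print(n, "CVs:", round(overall_level_dB(n), 1), "dB;",
          2 * n, "CVs:", round(overall_level_dB(2 * n), 1), "dB")
# Doubling the number of equal-level CVs raises the overall level by ~3 dB,
# while each individual CV's level is unchanged -- the two cues on which
# listeners could base their loudness judgments.
```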


Subject(s)
Acoustic Stimulation/methods , Loudness Perception/physiology , Speech Intelligibility/physiology , Speech Perception/physiology , Adult , Auditory Perception/physiology , Female , Humans , Male , Phonetics , Young Adult
16.
Proc Natl Acad Sci U S A ; 115(16): 3998-4000, 2018 04 17.
Article in English | MEDLINE | ID: mdl-29622682
17.
Ear Hear ; 39(6): 1224-1231, 2018.
Article in English | MEDLINE | ID: mdl-29664750

ABSTRACT

OBJECTIVES: We report on the ability of patients fit with bilateral cochlear implants (CIs) to distinguish the front-back location of sound sources both with and without head movements. At issue was (i) whether CI patients are more prone to front-back confusions than normal hearing listeners for wideband, high-frequency stimuli; and (ii) whether CI patients can utilize dynamic binaural difference cues, in tandem with their own head rotation, to resolve these front-back confusions. Front-back confusions offer a binary metric to gain insight into CI patients' ability to localize sound sources under dynamic conditions not generally measured in laboratory settings where both the sound source and patient are static. DESIGN: Three-second duration Gaussian noise samples were bandpass filtered to 2 to 8 kHz and presented from one of six loudspeaker locations located 60° apart, surrounding the listener. Perceived sound source localization for seven listeners bilaterally implanted with CIs was tested under conditions where the patient faced forward and did not move their head and under conditions where they were encouraged to moderately rotate their head. The same conditions were repeated for five of the patients with one implant turned off (the implant at the better ear remained on). A control group of normal hearing listeners was also tested for a baseline of comparison. RESULTS: All seven CI patients demonstrated a high rate of front-back confusions when their head was stationary (41.9%). The proportion of front-back confusions was reduced to 6.7% when these patients were allowed to rotate their head within a range of approximately ±30°. When only one implant was turned on, listeners' localization acuity suffered greatly. In these conditions, head movement or the lack thereof made little difference to listeners' performance. CONCLUSIONS: Bilateral implantation can offer CI listeners the ability to track dynamic auditory spatial difference cues and compare these changes to changes in their own head position, resulting in a reduced rate of front-back confusions. This suggests that, for these patients, estimates of auditory acuity based solely on static laboratory settings may underestimate their real-world localization abilities.
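Scoring front-back confusions amounts to checking whether a response lands near the source's mirror image across the interaural (left-right) axis rather than near the source itself. The sketch below assumes azimuths in degrees with 0° straight ahead and a tolerance window; with six loudspeakers spaced 60° apart, such a rule flags a response at the source's mirror-image loudspeaker.

```python
import numpy as np

def is_front_back_reversal(source_az_deg, response_az_deg, tol_deg=30.0):
    """True if the response lies near the source's front-back mirror image
    (azimuth 180 - source) but not near the source itself."""
    src = np.deg2rad(source_az_deg)
    resp = np.deg2rad(response_az_deg)
    mirror = np.pi - src                    # mirror across the interaural axis

    def angdist(a, b):
        # Absolute angular distance, wrapped to [0, pi].
        return np.abs(np.angle(np.exp(1j * (a - b))))

    return (angdist(resp, mirror) <= np.deg2rad(tol_deg)
            and angdist(resp, src) > np.deg2rad(tol_deg))

# (source, response) pairs in degrees: a reversal, a correct response, and a
# reversal on the other side of the head.
trials = [(60, 120), (60, 60), (300, 240)]
print([is_front_back_reversal(s, r) for s, r in trials])   # [True, False, True]
```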


Subject(s)
Auditory Perception , Cochlear Implants , Head Movements , Sound Localization , Aged , Cues , Female , Hearing , Humans , Male , Middle Aged
18.
J Acoust Soc Am ; 142(1): 173, 2017 07.
Article in English | MEDLINE | ID: mdl-28764438

ABSTRACT

Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.
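For concreteness, here is a minimal sketch of the two modulation types named here, a sinusoidally amplitude-modulated (SAM) tone and a transposed tone. The carrier and modulation frequencies, sampling rate, and duration are illustrative, and the transposed modulator is left unfiltered for brevity (the standard method low-pass filters the half-wave rectified modulator before multiplication).

```python
import numpy as np

def sam_tone(fc=4000.0, fm=128.0, fs=44100, dur=0.5):
    """Sinusoidally amplitude-modulated (SAM) tone."""
    t = np.arange(int(fs * dur)) / fs
    return (1.0 + np.cos(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t) / 2.0

def transposed_tone(fc=4000.0, fm=128.0, fs=44100, dur=0.5):
    """Transposed stimulus: a half-wave rectified low-frequency sinusoid used
    as the envelope of a high-frequency carrier (rectified modulator left
    unfiltered here for brevity)."""
    t = np.arange(int(fs * dur)) / fs
    modulator = np.maximum(np.sin(2 * np.pi * fm * t), 0.0)
    return modulator * np.sin(2 * np.pi * fc * t)

stim_sam = sam_tone()
stim_transposed = transposed_tone()
```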


Subject(s)
Discrimination, Psychological , Sound Localization , Acoustic Stimulation , Adult , Audiometry , Female , Humans , Judgment , Male , Time Factors , Young Adult
20.
J Acoust Soc Am ; 141(4): 2882, 2017 04.
Article in English | MEDLINE | ID: mdl-28464690

ABSTRACT

If an auditory scene consists of many spatially separated sound sources, how many sound sources can be processed by the auditory system? Experiment I determined how many speech sources could be localized simultaneously on the azimuth plane. Different words were played from multiple loudspeakers, and listeners reported the total number of sound sources and their individual locations. In experiment II the accuracy of localizing one speech source in a mixture of multiple speech sources was determined. An extra sound source was added to an existing set of sound sources, and the task was to localize that extra source. In experiment III the setup and task were the same as in experiment I, except that the sounds were tones. The results showed that the maximum number of sound sources that listeners could perceive was limited to approximately four spatially separated speech signals and three for tonal signals. Localization errors increased as the total number of sound sources increased. When four or more speech sources already existed, the accuracy in localizing an additional source was near chance.


Subject(s)
Environment , Environmental Exposure/adverse effects , Noise/adverse effects , Perceptual Masking , Sound Localization , Speech Perception , Acoustic Stimulation , Audiometry, Speech , Auditory Threshold , Female , Humans , Male , Signal Detection, Psychological