Results 1 - 20 of 23
1.
Sci Rep ; 14(1): 4457, 2024 02 23.
Article in English | MEDLINE | ID: mdl-38396044

ABSTRACT

Everyday environments often contain multiple concurrent sound sources that fluctuate over time. Normal-hearing listeners can benefit from high signal-to-noise ratios (SNRs) in the energetic dips of temporally fluctuating background sound, a phenomenon called dip-listening. Specialized mechanisms of dip-listening exist across the entire auditory pathway. Both the instantaneous fluctuating SNR and the long-term overall SNR shape dip-listening. An unresolved issue regarding cortical mechanisms of dip-listening is how target perception remains invariant to overall SNR, specifically across different tone levels in an ongoing fluctuating masker. Equivalent target detection over both positive and negative overall SNRs (SNR invariance) is reliably achieved in highly trained listeners. Dip-listening is correlated with the ability to resolve temporal fine structure, which involves temporally varying spike patterns. Thus, the current work tests the hypothesis that at negative SNRs, neuronal readout mechanisms need to rely increasingly on decoding strategies based on temporal spike patterns, as opposed to spike count. Recordings from chronically implanted electrode arrays in the core auditory cortex of trained, awake Mongolian gerbils engaged in a tone detection task in 10-Hz amplitude-modulated background sound reveal that rate-based decoding is not SNR-invariant, whereas temporal coding is informative at both negative and positive SNRs.
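To make the rate-versus-temporal contrast concrete, here is a minimal, hypothetical Python sketch (not the study's analysis code): two simulated response conditions share the same mean spike count but differ in spike timing, so a count-based readout sits at chance while a template match on the binned temporal pattern separates them. All rates, bin widths, and trial counts are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bins = 200, 20                 # e.g., 20 bins of 10 ms each

# Simulated per-bin Poisson rates: masker-alone vs. tone-in-masker responses
# share the same mean total count (20), but the tone shifts spikes early.
rate_masker = np.full(n_bins, 1.0)
rate_tone = np.concatenate([np.full(5, 2.5), np.full(15, 0.5)])

a = rng.poisson(rate_masker, size=(n_trials, n_bins))
b = rng.poisson(rate_tone, size=(n_trials, n_bins))
labels = np.r_[np.zeros(n_trials), np.ones(n_trials)]

# Rate-code readout: classify each trial by its total spike count.
counts = np.r_[a.sum(axis=1), b.sum(axis=1)]
rate_acc = np.mean((counts > counts.mean()) == labels)

# Temporal-code readout: nearest mean-pattern template on binned spike trains.
ta, tb = a.mean(axis=0), b.mean(axis=0)
X = np.vstack([a, b]).astype(float)
closer_to_b = ((X - tb) ** 2).sum(axis=1) < ((X - ta) ** 2).sum(axis=1)
temp_acc = np.mean(closer_to_b == labels)

print(f"rate-code accuracy:     {rate_acc:.2f}")   # near chance (counts match)
print(f"temporal-code accuracy: {temp_acc:.2f}")   # well above chance
```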


Subject(s)
Speech Perception , Perceptual Masking , Hearing , Sound , Hearing Tests
2.
J Acoust Soc Am ; 151(5): 3116, 2022 05.
Article in English | MEDLINE | ID: mdl-35649891

ABSTRACT

Acoustics research involving human participants typically takes place in specialized laboratory settings. Listening studies, for example, may present controlled sounds using calibrated transducers in sound-attenuating or anechoic chambers. In contrast, remote testing takes place outside of the laboratory in everyday settings (e.g., participants' homes). Remote testing could provide greater access to participants, larger sample sizes, and opportunities to characterize performance in typical listening environments, at the cost of reduced control over environmental conditions, less precise calibration, and inconsistency in attentional state and/or response behaviors, particularly with unintuitive experimental tasks. The Acoustical Society of America Technical Committee on Psychological and Physiological Acoustics launched the Task Force on Remote Testing (https://tcppasa.org/remotetesting/) in May 2020 with the goals of surveying approaches and platforms available to support remote testing and identifying challenges and considerations for prospective investigators. The results of this task force survey were made available online as a set of Wiki pages and are summarized in this report. This report outlines the state of the art of remote testing in auditory-related research as of August 2021, based on the Wiki and a literature search of papers published in this area since 2020, and provides three case studies demonstrating feasibility in practice.


Subject(s)
Acoustics , Auditory Perception , Attention/physiology , Humans , Prospective Studies , Sound
3.
Sci Rep ; 11(1): 23540, 2021 12 07.
Article in English | MEDLINE | ID: mdl-34876580

ABSTRACT

Sensory cortical mechanisms combine auditory or visual features into perceived objects. This is difficult in noisy or cluttered environments. Knowing that individuals vary greatly in their susceptibility to clutter, we wondered whether there might be a relation between an individual's auditory and visual susceptibilities to clutter. In auditory masking, background sound makes spoken words unrecognizable. When masking arises from interference at central auditory processing stages, beyond the cochlea, it is called informational masking. A strikingly similar phenomenon in vision, called visual crowding, occurs when nearby clutter makes a target object unrecognizable, despite its being resolved at the retina. Here, we compare susceptibilities to auditory informational masking and visual crowding in the same participants. Surprisingly, across participants, we find a negative correlation (R = -0.7) between susceptibility to informational masking and crowding: participants with low susceptibility to auditory clutter tend to have high susceptibility to visual clutter, and vice versa. This reveals a tradeoff in the brain between auditory and visual processing.


Subject(s)
Auditory Perception/physiology , Vision, Ocular/physiology , Visual Perception/physiology , Adult , Attention/physiology , Brain/physiology , Crowding , Female , Humans , Male , Noise , Perceptual Masking/physiology , Young Adult
4.
Front Neurosci ; 15: 675326, 2021.
Article in English | MEDLINE | ID: mdl-34366772

ABSTRACT

Suppressing unwanted background sound is crucial for aural communication. A particularly disruptive type of background sound, informational masking (IM), often interferes in social settings. However, IM mechanisms are incompletely understood. At present, IM is identified operationally: when a target should be audible based on suprathreshold target/masker energy ratios, yet cannot be heard because target-like background sound interferes. Here, we confirm that speech identification thresholds differ dramatically between low- vs. high-IM background sound. However, speech detection thresholds are comparable across the two conditions. Moreover, functional near infrared spectroscopy recordings show that task-evoked blood oxygenation changes near the superior temporal gyrus (STG) covary with behavioral speech detection performance for high-IM but not low-IM background sound, suggesting that the STG is part of an IM-dependent network. Furthermore, listeners who are more vulnerable to IM show increased hemodynamic recruitment near the STG, an effect that cannot be explained by differences in task difficulty across low- vs. high-IM conditions. In contrast, task-evoked responses near another auditory region of cortex, the caudal inferior frontal sulcus (cIFS), do not predict behavioral sensitivity, suggesting that the cIFS belongs to an IM-independent network. Results are consistent with the idea that cortical gating shapes individual vulnerability to IM.

6.
Sci Rep ; 11(1): 3955, 2021 02 17.
Article in English | MEDLINE | ID: mdl-33597563

ABSTRACT

An increasing number of studies show that listeners often have difficulty hearing in situations with background noise, despite normal tuning curves in quiet. One potential source of this difficulty could be sensorineural changes in the auditory periphery (the ear). Signal-in-noise detection deficits also arise in animals raised with developmental conductive hearing loss (CHL), a manipulation that induces acoustic attenuation to model how sound deprivation changes the central auditory system. This model attributes perceptual deficits to central changes by assuming that CHL does not affect sensorineural elements in the periphery that could raise masked thresholds. However, because of efferent feedback, altering the auditory system could affect cochlear elements. Indeed, recent studies show that adult-onset CHL can cause cochlear synapse loss, potentially calling into question the assumption of an intact periphery in early-onset CHL. To resolve this issue, we tested the long-term peripheral effects of CHL via developmental bilateral malleus displacement. Using forward masking tuning curves, we compared peripheral tuning in animals raised with CHL vs. age-matched controls. Using compound action potential measurements from the round window, we assessed inner hair cell synapse integrity. Results indicate that developmental CHL can cause minor synaptopathy. However, developmental CHL does not appreciably alter peripheral frequency tuning.


Subject(s)
Cochlea/physiology , Hearing Loss, Conductive/physiopathology , Hearing/physiology , Acoustic Stimulation , Animals , Auditory Perception/physiology , Auditory Threshold/physiology , Cochlea/metabolism , Evoked Potentials, Auditory, Brain Stem/physiology , Female , Gerbillinae , Hair Cells, Auditory, Inner/physiology , Male , Models, Animal , Noise
7.
Elife ; 8, 2019 10 21.
Article in English | MEDLINE | ID: mdl-31633481

ABSTRACT

Human sound localization is an important computation performed by the brain. Models of sound localization commonly assume that sound lateralization from interaural time differences is level invariant. Here, we observe that two prevalent theories of sound localization make opposing predictions. The labelled-line model encodes location through tuned representations of spatial location and predicts that perceived direction is level invariant. In contrast, the hemispheric-difference model encodes location through spike rate and predicts that perceived direction becomes medially biased at low sound levels. Behavioral experiments find that softer sounds are perceived closer to the midline than louder sounds, favoring rate-coding models of human sound localization. Analogously, visual depth perception, which is based on interocular disparity, depends on the contrast of the target. The similar results in hearing and vision suggest that the brain may use a canonical computation of location: encoding perceived location through population spike rate relative to baseline.
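A toy Python sketch of the hemispheric-difference prediction described above, assuming sigmoidal opponent-channel tuning and a level-dependent gain (both invented here, not the paper's fitted model): with a fixed mapping from rate difference to perceived angle, lower gain shrinks the difference and pulls decoded locations toward the midline.

```python
import numpy as np

itds = np.linspace(-500, 500, 11)            # interaural time differences (µs)

def hemispheric_rates(itd, gain):
    # Opponent channels: each hemisphere's rate grows sigmoidally toward
    # the contralateral side; 'gain' stands in for sound level.
    left = gain / (1.0 + np.exp(-itd / 150.0))
    right = gain / (1.0 + np.exp(itd / 150.0))
    return left, right

def rate_difference(itd, gain):
    left, right = hemispheric_rates(itd, gain)
    return left - right                      # readout: rate relative to baseline

# With a fixed difference-to-angle mapping, the soft condition yields smaller
# differences, i.e., percepts biased toward the midline.
print("loud:", np.round(rate_difference(itds, 100.0), 1))
print("soft:", np.round(rate_difference(itds, 30.0), 1))
```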


Subject(s)
Brain/physiology , Physical Phenomena , Sound Localization , Sound , Adolescent , Adult , Female , Humans , Male , Models, Neurological , Young Adult
8.
Trends Hear ; 22: 2331216518817464, 2018.
Article in English | MEDLINE | ID: mdl-30558491

ABSTRACT

Informational masking (IM) can greatly reduce speech intelligibility, but the neural mechanisms underlying IM are not understood. Binaural differences between target and masker can improve speech perception. In general, the improvement in masked speech intelligibility due to the provision of spatial cues is called spatial release from masking. Here, we focused on one aspect of spatial release from masking, specifically the role of spatial attention. We hypothesized that in a situation with IM background sound (a) attention to speech recruits lateral frontal cortex (LFCx) and (b) LFCx activity varies with the direction of spatial attention. Using functional near infrared spectroscopy, we assessed LFCx activity bilaterally in normal-hearing listeners. In Experiment 1, two talkers were presented simultaneously. Listeners either attended to the target talker (speech task) or listened passively to an unintelligible, scrambled version of the acoustic mixture (control task). Target and masker differed in pitch and interaural time difference (ITD). Relative to the passive control, LFCx activity increased during attentive listening. Experiment 2 measured how LFCx activity varied with ITD, by testing listeners on the speech task of Experiment 1, except that the talkers were either spatially separated by ITD or colocated. Results show that directing auditory attention activates LFCx bilaterally. Moreover, right LFCx is recruited more strongly in the spatially separated as compared with colocated configurations. Findings hint that LFCx function contributes to spatial release from masking in situations with IM.


Subject(s)
Auditory Threshold , Perceptual Masking , Speech Perception , Adult , Attention , Auditory Perception/physiology , Auditory Threshold/physiology , Comprehension , Cues , Female , Hearing Tests , Humans , Male , Perceptual Masking/physiology , Sound , Spectroscopy, Near-Infrared , Speech Intelligibility , Speech Perception/physiology , Young Adult
9.
Article in English | MEDLINE | ID: mdl-31595139

ABSTRACT

Studies increasingly show that behavioral relevance alters the population representation of sensory stimuli in the sensory cortices. However, the underlying mechanisms are incompletely understood. Here, we record neuronal responses in the auditory cortex while a highly trained, awake, normal-hearing gerbil listens passively to target tones of high versus low behavioral relevance. Using an information theoretic framework, we model the overall transmission chain from acoustic input stimulus to recorded cortical response as a communication channel. To quantify how much information core auditory cortex carries about high- versus low-relevance sound, we then compute the mutual information of the multi-unit neuronal responses. Results show that the output of the stimulus-to-response channel can be modeled as a Poisson mixture. We derive a closed-form fast approximation for the entropy of a mixture of univariate Poisson random variables. A purely rate-based model reveals reduced information transfer for high-relevance compared to low-relevance tones, hinting that changes in temporal discharge pattern may encode behavioral relevance.
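As a hedged illustration of this channel framing, the following snippet computes the mutual information between a binary stimulus and a Poisson spike count whose marginal is a two-component Poisson mixture. It evaluates the entropies by truncated summation; the paper's closed-form fast approximation is not reproduced here, and the rates are invented.

```python
import numpy as np
from scipy.stats import poisson

lam = {"high": 4.0, "low": 9.0}    # assumed mean spike counts per condition
p_s = {"high": 0.5, "low": 0.5}    # equiprobable stimulus classes
ks = np.arange(200)                # truncation well beyond both means

def entropy_bits(pmf):
    pmf = pmf[pmf > 0]
    return -np.sum(pmf * np.log2(pmf))

# H(R|S): average entropy of the per-stimulus Poisson count distributions.
h_cond = sum(p_s[s] * entropy_bits(poisson.pmf(ks, lam[s])) for s in lam)

# H(R): entropy of the marginal response, a two-component Poisson mixture.
mix_pmf = sum(p_s[s] * poisson.pmf(ks, lam[s]) for s in lam)
h_marg = entropy_bits(mix_pmf)

print(f"I(S;R) = H(R) - H(R|S) = {h_marg - h_cond:.3f} bits")
```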

10.
Trends Hear ; 20: 2331216516676255, 2016.
Article in English | MEDLINE | ID: mdl-28215119

ABSTRACT

Hearing-impaired individuals experience difficulties in detecting or understanding speech, especially when background sounds fall within the same frequency range as the target. However, normal-hearing (NH) human listeners experience less difficulty detecting a target tone in background noise when the envelope of that noise is temporally gated (modulated) than when that envelope is flat across time (unmodulated). This perceptual benefit is called modulation masking release (MMR). When flanking masker energy is added well outside the frequency band of the target and comodulated with the original modulated masker, detection thresholds improve further (MMR+). In contrast, if the flanking masker is antimodulated with the original masker, thresholds worsen (MMR-). These interactions across disparate frequency ranges are thought to require central nervous system (CNS) processing. Therefore, we explored the effect of developmental conductive hearing loss (CHL) in gerbils on MMR characteristics, as a test for putative CNS mechanisms. The detection thresholds of NH gerbils were lower in modulated noise when compared with unmodulated noise. The addition of a comodulated flanker further improved performance, whereas an antimodulated flanker worsened performance. However, for CHL-reared gerbils, all three forms of masking release were reduced when compared with NH animals. These results suggest that developmental CHL impairs both within- and across-frequency processing and provide behavioral evidence that CNS mechanisms are affected by a peripheral hearing impairment.
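A minimal stimulus sketch of the three flanker conditions named above (modulated on-target masker, comodulated flanker for MMR+, antimodulated flanker for MMR-). The band edges, 16-Hz gate rate, and duration are illustrative assumptions, not the study's parameters.

```python
import numpy as np

fs = 44100
t = np.arange(int(0.5 * fs)) / fs                 # 500-ms maskers
rng = np.random.default_rng(1)

def band_noise(lo_hz, hi_hz):
    # Crude band-limited noise: spectrally mask white noise via the FFT.
    spectrum = np.fft.rfft(rng.standard_normal(t.size))
    freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
    spectrum[(freqs < lo_hz) | (freqs > hi_hz)] = 0.0
    return np.fft.irfft(spectrum, n=t.size)

gate = (np.sin(2 * np.pi * 16 * t) > 0).astype(float)  # 16-Hz square-wave gate

on_target = band_noise(900, 1100)        # masker around a 1-kHz target tone
flanker = band_noise(2900, 3100)         # energy well outside the target band

masker_unmod = on_target                 # baseline (unmodulated) condition
masker_mod = on_target * gate            # modulated masker          -> MMR
flanker_co = flanker * gate              # comodulated flanker       -> MMR+
flanker_anti = flanker * (1.0 - gate)    # antimodulated flanker     -> MMR-
```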


Subject(s)
Auditory Threshold , Hearing Loss, Conductive , Perceptual Masking , Hearing , Humans , Noise
11.
J Assoc Res Otolaryngol ; 16(5): 641-52, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26105749

ABSTRACT

Monaural rate discrimination and binaural interaural time difference (ITD) discrimination were studied as functions of pulse rate in a group of bilaterally implanted cochlear implant users. Stimuli for the rate discrimination task were pulse trains presented to one electrode, which could be in the apical, middle, or basal part of the array, in either the left or the right ear. In each two-interval trial, the standard stimulus had a rate of 100, 200, 300, or 500 pulses per second, and the signal stimulus had a rate 35% higher. ITD discrimination between pitch-matched electrode pairs was measured for the same standard rates as in the rate discrimination task, with an ITD of ±500 µs. Sensitivity (d') on both tasks decreased with increasing rate, as has been reported previously. This study tested the hypothesis that the deterioration in performance at high rates on the two tasks arises from a common neural basis specific to the stimulation of each electrode. Results show that ITD scores for each pair of electrodes correlated with the lower of the two rate discrimination scores for those electrodes. Statistical analysis, which partialed out overall differences between listeners, electrodes, and rates, supports the hypothesis that monaural and binaural temporal processing limitations are at least partly due to a common mechanism.
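Two pieces of this analysis lend themselves to a short sketch: converting two-interval percent correct to d', and a partial correlation that removes a shared covariate before correlating rate and ITD scores. The helper names and the toy data are hypothetical, not the study's code.

```python
import numpy as np
from scipy.stats import norm, pearsonr

def dprime_2afc(prop_correct):
    # Standard conversion for a two-interval, two-alternative task.
    return np.sqrt(2.0) * norm.ppf(prop_correct)

def partial_corr(x, y, z):
    # Correlate the residuals of x and y after linearly regressing out z.
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return pearsonr(rx, ry)

rng = np.random.default_rng(6)
z = rng.standard_normal(40)                          # shared covariate (toy)
rate_score = 0.6 * z + 0.4 * rng.standard_normal(40)
itd_score = 0.6 * z + 0.4 * rng.standard_normal(40)

print(f"d' at 80% correct: {dprime_2afc(0.80):.2f}")   # about 1.19
r, p = partial_corr(rate_score, itd_score, z)
print(f"partial r = {r:.2f} (p = {p:.3f})")
```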


Subject(s)
Cochlear Implants , Loudness Perception , Pitch Discrimination , Adult , Aged , Humans , Middle Aged
12.
Front Syst Neurosci ; 8: 22, 2014.
Article in English | MEDLINE | ID: mdl-24653681

ABSTRACT

The current study examined how cochlear implant (CI) listeners combine temporally interleaved envelope-ITD information across two sites of stimulation. When two cochlear sites jointly transmit ITD information, one possibility is that CI listeners can extract the most reliable ITD cues available. As a result, ITD sensitivity would be sustained or enhanced compared to single-site stimulation. Alternatively, mutual interference across multiple sites of ITD stimulation could worsen dual-site performance compared to listening to the better of two electrode pairs. Two experiments used direct stimulation to examine how CI users integrate ITDs across two pairs of electrodes. Experiment 1 tested ITD discrimination for two stimulation sites using 100-Hz sinusoidally modulated 1000-pps-carrier pulse trains. Experiment 2 used the same stimuli ramped with 100-ms windows, as a control condition with minimized onset cues. For all stimuli, performance improved monotonically with increasing modulation depth. Results show that when CI listeners were stimulated with electrode pairs at two cochlear sites, sensitivity to ITDs was similar to that seen when only the electrode pair with better sensitivity was activated. None of the listeners showed a decrement in performance caused by the worse electrode pair. This could be achieved either by listening to the better electrode pair or by truly integrating the information across cochlear sites.

13.
J Assoc Res Otolaryngol ; 15(2): 265-78, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24448721

ABSTRACT

A cochlear implant (CI) presents band-pass-filtered acoustic envelope information by modulating current pulse train levels. Similarly, a vocoder presents envelope information by modulating an acoustic carrier. By studying how normal hearing (NH) listeners are able to understand degraded speech signals with a vocoder, the parameters that best simulate electric hearing and factors that might contribute to the NH-CI performance difference may be better understood. A vocoder with harmonic complex carriers (fundamental frequency, f0 = 100 Hz) was used to study the effect of carrier phase dispersion on speech envelopes and intelligibility. The starting phases of the harmonic components were randomly dispersed to varying degrees prior to carrier filtering and modulation. NH listeners were tested on recognition of a closed set of vocoded words in background noise. Two sets of synthesis filters simulated different amounts of current spread in CIs. Results showed that the speech vocoded with carriers whose starting phases were maximally dispersed was the most intelligible. Superior speech understanding may have been a result of the flattening of the dispersed-phase carrier's intrinsic temporal envelopes produced by the large number of interacting components in the high-frequency channels. Cross-correlogram analyses of auditory nerve model simulations confirmed that randomly dispersing the carrier's component starting phases resulted in better neural envelope representation. However, neural metrics extracted from these analyses were not found to accurately predict speech recognition scores for all vocoded speech conditions. It is possible that central speech understanding mechanisms are insensitive to the envelope-fine structure dichotomy exploited by vocoders.
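A stripped-down sketch of the phase-dispersion manipulation described above: a harmonic complex carrier (f0 = 100 Hz) whose component starting phases are randomized by a controllable amount. Channel filtering and envelope modulation are omitted for brevity, and the sampling parameters are assumptions. Comparing the Hilbert envelopes of the two outputs illustrates the envelope flattening the abstract links to better intelligibility.

```python
import numpy as np

fs, f0, dur = 16000, 100, 0.2
t = np.arange(int(dur * fs)) / fs
rng = np.random.default_rng(2)

def harmonic_carrier(dispersion):
    """dispersion in [0, 1]: 0 = cosine phase, 1 = fully random starting phases."""
    carrier = np.zeros_like(t)
    for k in range(1, fs // (2 * f0)):            # harmonics up to Nyquist
        phi = dispersion * rng.uniform(0.0, 2.0 * np.pi)
        carrier += np.cos(2.0 * np.pi * k * f0 * t + phi)
    return carrier / np.max(np.abs(carrier))

peaky = harmonic_carrier(0.0)   # cosine phase: strongly modulated envelope
flat = harmonic_carrier(1.0)    # dispersed phase: flatter intrinsic envelope
```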


Subject(s)
Cochlear Implants , Noise , Speech Perception , Cochlear Nerve/physiology , Humans , Psychoacoustics
14.
PLoS One ; 7(9): e45296, 2012.
Article in English | MEDLINE | ID: mdl-23028914

ABSTRACT

Spatial release from masking refers to a benefit for speech understanding that occurs when a target talker and a masker talker are spatially separated: speech intelligibility for the target is typically higher than when both talkers are at the same location. In cochlear implant listeners, spatial release from masking is much reduced or absent compared with normal-hearing listeners. Perhaps this reduced spatial release occurs because cochlear implant listeners cannot effectively attend to spatial cues. Three experiments examined factors that may interfere with deploying spatial attention to a target talker masked by another talker. To simulate cochlear implant listening, stimuli were vocoded with two unique features. First, we used 50-Hz low-pass filtered speech envelopes and noise carriers, strongly reducing the possibility of temporal pitch cues; second, co-modulation was imposed on target and masker utterances to enhance perceptual fusion between the two sources. Stimuli were presented over headphones. Experiments 1 and 2 presented high-fidelity spatial cues with unprocessed and vocoded speech. Experiment 3 maintained faithful long-term average interaural level differences but presented scrambled interaural time differences with vocoded speech. Results show a robust spatial release from masking in Experiments 1 and 2, and a greatly reduced spatial release in Experiment 3. Faithful long-term average interaural level differences were insufficient for producing spatial release from masking. This suggests that appropriate interaural time differences are necessary for restoring spatial release from masking, at least in situations with few viable alternative segregation cues.
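For concreteness, a minimal sketch of the vocoding described above: per-channel Hilbert envelopes low-pass filtered at 50 Hz and imposed on band-matched noise carriers. Filter orders, the channel count, and the stand-in input signal are assumptions; the study's exact processing chain may differ, and the comodulation step is omitted.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def vocode_50hz(speech, fs, band_edges):
    rng = np.random.default_rng(3)
    env_sos = butter(4, 50, btype="lowpass", fs=fs, output="sos")
    out = np.zeros_like(speech)
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, speech)
        env = sosfiltfilt(env_sos, np.abs(hilbert(band)))   # 50-Hz LP envelope
        env = np.clip(env, 0.0, None)                       # no negative gains
        carrier = sosfiltfilt(band_sos, rng.standard_normal(speech.size))
        out += env * carrier                                # band-matched noise
    return out

fs = 16000
speech = np.random.default_rng(5).standard_normal(fs)  # stand-in for 1 s of speech
sim = vocode_50hz(speech, fs, band_edges=[100, 400, 1000, 2500, 6000])
```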


Subject(s)
Auditory Threshold/physiology , Perceptual Masking/physiology , Speech Intelligibility/physiology , Speech Perception/physiology , Acoustic Stimulation , Adolescent , Attention , Cochlear Implantation , Cochlear Implants , Cues , Female , Humans , Models, Theoretical , Sound Localization , Speech , Young Adult
15.
J Acoust Soc Am ; 131(2): 1315-24, 2012 Feb.
Article in English | MEDLINE | ID: mdl-22352505

ABSTRACT

For normal-hearing (NH) listeners, masker energy outside the spectral region of a target signal can improve target detection and identification, a phenomenon referred to as comodulation masking release (CMR). This study examined whether, for cochlear implant (CI) listeners and for NH listeners presented with a "noise vocoded" CI simulation, speech identification in modulated noise is improved by a comodulated flanking band. In Experiment 1, NH listeners identified noise-vocoded speech in a background of on-target noise with or without a flanking narrow band of noise outside the spectral region of the target. The on-target noise and flanker were either 16-Hz square-wave modulated with the same phase or were unmodulated; the speech was taken from a closed-set corpus. Performance was better in modulated than in unmodulated noise, and this difference was slightly greater when the comodulated flanker was present, consistent with a small CMR of about 1.7 dB for noise-vocoded speech. Experiment 2, which tested CI listeners using the same speech materials, found no advantage for modulated versus unmodulated maskers and no CMR. Thus, although NH listeners can benefit from CMR even for speech signals with reduced spectro-temporal detail, no CMR was observed for CI users.


Subject(s)
Cochlear Implants , Hearing Loss, Sensorineural/physiopathology , Noise , Perceptual Masking/physiology , Speech Perception/physiology , Acoustic Stimulation , Adolescent , Analysis of Variance , Auditory Threshold/physiology , Humans , Loudness Perception/physiology , Young Adult
16.
J Acoust Soc Am ; 130(1): 324-33, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21786902

ABSTRACT

Two experiments explored how frequency content impacts sound localization for sounds containing reverberant energy. Virtual sound sources from thirteen lateral angles and four distances were simulated in the frontal horizontal plane using binaural room impulse responses measured in an everyday office. Experiment 1 compared localization judgments for one-octave-wide noise centered at either 750 Hz (low) or 6000 Hz (high). For both band-limited noises, perceived lateral angle varied monotonically with source angle. For frontal sources, perceived locations were similar for low- and high-frequency noise; however, for lateral sources, localization was less accurate for low-frequency noise than for high-frequency noise. With increasing source distance, judgments of both noises became more biased toward the median plane, an effect that was greater for low-frequency noise than for high-frequency noise. In Experiment 2, simultaneous presentation of low- and high-frequency noises yielded performance that was less accurate than that for high-frequency noise, but equal to or better than for low-frequency noise. Results suggest that listeners perceptually weight low-frequency information heavily, even in reverberant conditions where high-frequency stimuli are localized more accurately. These findings show that listeners do not always optimally adjust how localization cues are integrated over frequency in reverberant settings.
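The cue-weighting interpretation above can be stated as a one-line model, sketched here with invented numbers: the perceived angle of a broadband source as a weighted average of per-band location estimates, with a heavier weight on the low band even when the high band is the more accurate cue.

```python
def combined_angle(low_band_estimate, high_band_estimate, w_low=0.7):
    # Heavier low-frequency weighting pulls the combined percept toward the
    # (more medially biased) low-band estimate, even when the high band is
    # more accurate in reverberation. Weights and estimates are hypothetical.
    return w_low * low_band_estimate + (1.0 - w_low) * high_band_estimate

print(combined_angle(30.0, 45.0))   # -> 34.5 degrees, closer to the low band
```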


Subject(s)
Facility Design and Construction , Sound Localization , Sound , Acoustic Stimulation , Analysis of Variance , Audiometry , Auditory Threshold , Cues , Humans , Vibration
17.
J Acoust Soc Am ; 128(2): 870-80, 2010 Aug.
Article in English | MEDLINE | ID: mdl-20707456

ABSTRACT

Experiment 1 replicated the finding that normal-hearing listeners identify speech better in modulated than in unmodulated noise. This modulated-unmodulated difference ("MUD") has been previously shown to be reduced or absent for cochlear-implant listeners and for normal-hearing listeners presented with noise-vocoded speech. Experiments 2-3 presented normal-hearing listeners with noise-vocoded speech in unmodulated or 16-Hz-square-wave modulated noise, and investigated whether the introduction of simple binaural differences between target and masker could restore the masking release. Stimuli were presented over headphones. When the target and masker were presented to one ear, adding a copy of the masker to the other ear ("diotic configuration") aided performance but did so to a similar degree for modulated and unmodulated maskers, thereby failing to improve the modulation masking release. Presenting an uncorrelated noise to the opposite ear ("dichotic configuration") had no effect, either for modulated or unmodulated maskers, consistent with the improved performance in the diotic configuration being due to interaural decorrelation processing. For noise-vocoded speech, the provision of simple spatial differences did not allow listeners to take greater advantage of the dips present in a modulated masker.


Subject(s)
Cochlear Implants , Noise , Perceptual Masking , Speech Perception , Acoustic Stimulation , Adolescent , Adult , Audiometry, Pure-Tone , Dichotic Listening Tests , Humans , Male , Recognition, Psychology , Signal Detection, Psychological , Young Adult
18.
J Acoust Soc Am ; 126(6): EL196-201, 2009 Dec.
Article in English | MEDLINE | ID: mdl-20000894

ABSTRACT

A form of processed speech is described that is highly discriminable in a closed-set identification format. The processing renders speech into a set of sinusoidal pulses played synchronously across frequency. The processing and results from several experiments are described. The number and width of the frequency analysis channels and the tone-pulse duration were varied. In one condition, various proportions of the tones were randomly removed. The processed speech was remarkably resilient to these manipulations. This type of speech may be useful for examining multitalker listening situations in which a high degree of stimulus control is required.
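An illustrative sketch (not the published algorithm) of the synthesis idea: channel magnitudes rendered as ramped tone pulses that are synchronous across frequency, with a controllable proportion of pulses randomly removed. Channel centers, pulse duration, and the toy envelope matrix are assumptions.

```python
import numpy as np

def tone_pulse_speech(envelopes, centers_hz, fs, pulse_dur=0.02, p_drop=0.0):
    """envelopes: (n_channels, n_frames) magnitudes, one value per pulse slot."""
    rng = np.random.default_rng(4)
    n_ch, n_frames = envelopes.shape
    n_samp = int(pulse_dur * fs)
    t = np.arange(n_samp) / fs
    ramp = np.hanning(n_samp)                 # on/off ramps limit splatter
    out = np.zeros(n_frames * n_samp)
    for j in range(n_frames):                 # pulses synchronous across bands
        seg = slice(j * n_samp, (j + 1) * n_samp)
        for i in range(n_ch):
            if rng.random() < p_drop:         # random removal of tone pulses
                continue
            out[seg] += envelopes[i, j] * ramp * np.sin(2 * np.pi * centers_hz[i] * t)
    return out

fs = 16000
env = np.abs(np.random.default_rng(5).standard_normal((8, 25)))  # toy magnitudes
centers = np.geomspace(300, 5000, 8)                             # 8 channels
sig = tone_pulse_speech(env, centers, fs, pulse_dur=0.02, p_drop=0.3)
```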


Subject(s)
Speech Perception , Speech , Acoustic Stimulation , Adult , Female , Humans , Male , Psychoacoustics , Speech Acoustics , Young Adult
19.
Neuron ; 62(1): 123-34, 2009 Apr 16.
Article in English | MEDLINE | ID: mdl-19376072

ABSTRACT

In reverberant environments, acoustic reflections interfere with the direct sound arriving at a listener's ears, distorting the spatial cues for sound localization. Yet, human listeners have little difficulty localizing sounds in most settings. Because reverberant energy builds up over time, the source location is represented relatively faithfully during the early portion of a sound, but this representation becomes increasingly degraded later in the stimulus. We show that the directional sensitivity of single neurons in the auditory midbrain of anesthetized cats follows a similar time course, although onset dominance in temporal response patterns results in more robust directional sensitivity than expected, suggesting a simple mechanism for improving directional sensitivity in reverberation. In parallel behavioral experiments, we demonstrate that human lateralization judgments are consistent with predictions from a population rate model decoding the observed midbrain responses, suggesting a subcortical origin for robust sound localization in reverberant environments.


Subject(s)
Cues , Mesencephalon/physiology , Sound Localization/physiology , Space Perception/physiology , Acoustic Stimulation/methods , Action Potentials/physiology , Animals , Auditory Threshold/physiology , Cats , Humans , Mesencephalon/cytology , Models, Biological , Neurons/physiology , Psychoacoustics , Sound , User-Computer Interface
20.
J Acoust Soc Am ; 124(4): 2224-35, 2008 Oct.
Article in English | MEDLINE | ID: mdl-19062861

ABSTRACT

When competing sources come from different directions, a desired target is easier to hear than when the sources are co-located. How much of this improvement is the result of spatial attention rather than improved perceptual segregation of the competing sources is not well understood. Here, listeners' attention was directed to spatial or nonspatial cues when they listened for a target masked by a competing message. A preceding cue signaled the target timbre, location, or both timbre and location. Spatial separation improved performance when the cue indicated the target location, or both the location and timbre, but not when the cue only indicated the target timbre. However, response errors were influenced by spatial configuration in all conditions. Both attention and streaming contributed to spatial effects when listeners actively attended to location. In contrast, when attention was directed to a nonspatial cue, spatial separation primarily appeared to improve the streaming of auditory objects across time. Thus, when attention is focused on location, spatial separation appears to improve both object selection and object formation; when attention is directed to nonspatial cues, separation affects object formation. These results highlight the need to distinguish between these separate mechanisms when considering how observers cope with complex auditory scenes.


Subject(s)
Attention , Auditory Perception , Cues , Perceptual Masking , Signal Detection, Psychological , Space Perception , Acoustic Stimulation , Adult , Humans , Models, Biological , Time Factors , Young Adult