1.
Hum Factors ; : 187208221139744, 2022 Dec 01.
Article in English | MEDLINE | ID: mdl-36455164

ABSTRACT

OBJECTIVE: The present study was designed to evaluate human performance and workload associated with an auditory vigilance task that required spatial discrimination of auditory stimuli. BACKGROUND: Spatial auditory displays have been increasingly developed and implemented into settings that require vigilance toward auditory spatial discrimination and localization (e.g., collision avoidance warnings). Research has yet to determine whether a vigilance decrement could impede performance in such applications. METHOD: Participants completed a 40-minute auditory vigilance task in either a spatial discrimination condition or a temporal discrimination condition. In the spatial discrimination condition, participants differentiated sounds based on differences in spatial location. In the temporal discrimination condition, participants differentiated sounds based on differences in stimulus duration. RESULTS: Correct detections and false alarms declined during the vigilance task, and each did so at a similar rate in both conditions. The overall level of correct detections did not differ significantly between conditions, but false alarms occurred more frequently in the spatial discrimination condition than in the temporal discrimination condition. NASA-TLX ratings and pupil diameter measurements indicated no differences in workload. CONCLUSION: Results indicated that tasks requiring auditory spatial discrimination can induce a vigilance decrement, and that they may result in inferior vigilance performance compared to tasks requiring discrimination of auditory duration. APPLICATION: Vigilance decrements may impede performance and safety in settings that depend on sustained attention to spatial auditory displays. Display designers should also be aware that auditory displays that require users to discriminate differences in spatial location may result in poorer discrimination performance than non-spatial displays.
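Vigilance performance of this kind is commonly summarized with the signal-detection sensitivity index d′, computed from hit and false-alarm rates. A minimal sketch with purely illustrative numbers (not values from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Illustrative numbers only: comparing an early and a late watch period
# shows how a drop in correct detections lowers sensitivity.
early = d_prime(0.90, 0.10)
late = d_prime(0.70, 0.10)
print(round(early, 2), round(late, 2))  # 2.56 1.81
```

In practice, extreme rates of 0 or 1 are adjusted (e.g., the log-linear correction) before taking the inverse normal transform, since z is undefined at those endpoints.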

2.
Neuroimage ; 199: 512-520, 2019 10 01.
Article in English | MEDLINE | ID: mdl-31129305

ABSTRACT

Recent studies show that pre-stimulus band-specific power and phase in the electroencephalogram (EEG) can predict accuracy on tasks involving the detection of near-threshold stimuli. However, results in the auditory modality have been mixed, and few works have examined pre-stimulus features when more complex decisions are made (e.g. identifying supra-threshold sounds). Further, most auditory studies have used background sounds known to induce oscillatory EEG states, leaving it unclear whether phase predicts accuracy without such background sounds. To address this gap in knowledge, the present study examined pre-stimulus EEG as it relates to accuracy in a tone pattern identification task. On each trial, participants heard a triad of 40-ms sinusoidal tones (separated by 40-ms intervals), one of which was at a different frequency than the other two. Participants' task was to indicate the tone pattern (low-low-high, low-high-low, etc.). No background sounds were employed. Using a phase opposition measure based on inter-trial phase consistencies, pre-stimulus 7-10 Hz phase was found to differ between correct and incorrect trials ∼200 to 100 ms prior to tone-pattern onset. After sorting trials into bins based on phase, accuracy was found to be lowest at approximately π radians relative to individuals' most accurate phase bin. No significant effects were found for pre-stimulus power. In the context of the literature, findings suggest an important relationship between the complexity of task demands and pre-stimulus activity within the auditory domain. Results also raise interesting questions about the role of induced oscillatory states or rhythmic processing modes in obtaining pre-stimulus effects of phase in auditory tasks.
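The inter-trial phase consistency (ITPC) underlying such phase opposition measures is the length of the mean resultant vector of the per-trial phases: 1 when every trial has the same phase, near 0 when phases are scattered. A minimal sketch of that core computation (the time-frequency decomposition and trial sorting are omitted):

```python
import cmath

def itpc(phases):
    """Inter-trial phase consistency: magnitude of the mean unit phasor.
    1.0 = identical phase on every trial; ~0 = uniformly scattered phases."""
    return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))

# Tightly clustered phases (consistent across trials) vs. evenly spread ones
consistent = [0.1, -0.05, 0.12, 0.0]
scattered = [0.0, cmath.pi / 2, cmath.pi, 3 * cmath.pi / 2]
print(itpc(consistent))  # close to 1
print(itpc(scattered))   # close to 0
```

Phase opposition indices then compare the ITPC computed separately for correct and incorrect trials against the ITPC of all trials pooled.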


Subject(s)
Auditory Perception/physiology , Brain Waves/physiology , Cerebral Cortex/physiology , Electroencephalography Phase Synchronization/physiology , Neuroimaging/methods , Pattern Recognition, Physiological/physiology , Psychomotor Performance/physiology , Adult , Female , Humans , Male , Young Adult
3.
Brain Cogn ; 129: 49-58, 2019 02.
Article in English | MEDLINE | ID: mdl-30554734

ABSTRACT

Recent research has focused on measuring neural correlates of metacognitive judgments in decision and post-decision processes during memory retrieval and categorization. However, many tasks (e.g., stimulus detection) may require monitoring of earlier sensory processing. Here, participants indicated which of two intervals contained an 80-ms pure tone embedded in white noise. One frequency (e.g., 1000 Hz) was presented on ∼80% of all trials (i.e., 'primary' trials). Another frequency (e.g., 2500 Hz) was presented on ∼20% of trials (i.e., 'probe' trials). The event-related potential (ERP) was used to investigate the processing stages related to confidence. Tone-locked N1, P2, and P3 amplitudes were larger for trials rated with high than low confidence. Interestingly, a P3-like late positivity for the tone-absent interval showed high amplitude for low confidence. No 'primary' vs. 'probe' differences were found. However, confidence rating differences between primary and probe trials were correlated with N1 and tone-present P3 amplitude differences. We suggest that metacognitive judgments can track both sensory- and decision-related processes (indexed by the N1 and P3, respectively). The particular processes on which confidence judgments are based likely depend upon the task an individual is faced with and the information at hand (e.g., presence or absence of a signal).
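The ERP components discussed above (N1, P2, P3) are obtained by averaging fixed-length EEG epochs time-locked to stimulus onsets. A minimal single-channel sketch of that averaging step, using toy data rather than real EEG:

```python
def erp_average(eeg, event_samples, pre, post):
    """Average fixed-length epochs time-locked to each event -- the basic
    step behind any ERP component measurement."""
    epochs = [eeg[e - pre:e + post] for e in event_samples
              if e - pre >= 0 and e + post <= len(eeg)]
    return [sum(ep[i] for ep in epochs) / len(epochs)
            for i in range(pre + post)]

# Toy data: a +1 deflection appears 2 samples after every "tone onset"
eeg = [0.0] * 100
for onset in (10, 40, 70):
    eeg[onset + 2] = 1.0
avg = erp_average(eeg, [10, 40, 70], pre=2, post=6)
print(avg.index(max(avg)))  # 4: the peak sits 2 samples after epoch onset
```

Real pipelines additionally baseline-correct each epoch (subtracting the mean of the pre-stimulus samples) and reject artifact-contaminated trials before averaging.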


Subject(s)
Cognition , Decision Making , Event-Related Potentials, P300/physiology , Evoked Potentials, Auditory/physiology , Adult , Electroencephalography , Evoked Potentials/physiology , Female , Humans , Male , Metacognition , Reaction Time , Self Concept , Task Performance and Analysis , Young Adult
4.
Hear Res ; 358: 37-41, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29249546

ABSTRACT

Recent studies demonstrate that frontal midline theta power (4-8 Hz) enhancements in the electroencephalogram (EEG) relate to effortful listening. It has been proposed that these enhancements reflect working memory demands. Here, the need to retain auditory information in working memory was manipulated in a 2-interval 2-alternative forced-choice delayed pitch discrimination task ("Which interval contained the higher pitch?"). On each trial, two square wave stimuli differing in pitch at an individual's ∼70.7% correct threshold were separated by a 3-second ISI. In a 'Roving' condition, the lowest pitch stimulus was randomly selected on each trial (uniform distribution from 840 to 1160 Hz). In a 'Fixed' condition, the lowest pitch was always 979 Hz. Critically, the 'Fixed' condition allowed one to know the correct response immediately following the first stimulus (e.g., if the first stimulus is 979 Hz, the second must be higher). In contrast, the 'Roving' condition required retention of the first tone for comparison to the second. Frontal midline theta enhancements during the ISI were only observed for the 'Roving' condition. Alpha (8-13 Hz) enhancements were apparent during the ISI, but did not differ significantly between conditions. Since conditions were matched for accuracy at threshold, results suggest that frontal midline theta enhancements will not always accompany difficult listening. Mixed results in the literature regarding frontal midline theta enhancements may be related to differences between tasks in regards to working memory demands. Alpha enhancements may reflect task general effortful listening processes.
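Band power in a range such as theta (4-8 Hz) can be estimated by summing the squared magnitudes of the DFT bins that fall inside the band. A crude pure-Python sketch (real analyses would taper, average across epochs, and use an FFT):

```python
import cmath, math

def band_power(signal, fs, f_lo, f_hi):
    """Mean squared magnitude of DFT bins whose frequency lies in [f_lo, f_hi]."""
    n = len(signal)
    powers = []
    for k in range(n // 2 + 1):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            coef = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                       for t in range(n))
            powers.append(abs(coef) ** 2 / n ** 2)
    return sum(powers) / len(powers)

fs = 250  # Hz; an illustrative EEG sampling rate
t = [i / fs for i in range(fs)]  # 1 s of data -> 1 Hz bin spacing
theta = [math.sin(2 * math.pi * 6 * x) for x in t]   # 6 Hz, inside 4-8 Hz
alpha = [math.sin(2 * math.pi * 11 * x) for x in t]  # 11 Hz, outside the band
print(band_power(theta, fs, 4, 8) > band_power(alpha, fs, 4, 8))  # True
```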

6.
Top Cogn Sci ; 8(1): 291-304, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26748483

ABSTRACT

An important application of cognitive architectures is to provide human performance models that capture psychological mechanisms in a form that can be "programmed" to predict task performance of human-machine system designs. Although many aspects of human performance have been successfully modeled in this approach, accounting for multitalker speech task performance is a novel problem. This article presents a model for performance in a two-talker task that incorporates concepts from psychoacoustics, in particular, masking effects and stream formation.


Subject(s)
Auditory Perception/physiology , Cognition/physiology , Models, Psychological , Psychoacoustics , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation/methods , Attention/physiology , Female , Humans , Male , Perceptual Masking
7.
J Acoust Soc Am ; 140(6): EL539, 2016 Dec.
Article in English | MEDLINE | ID: mdl-28040012

ABSTRACT

This study examined event-related potential (ERP) correlates of auditory spatial benefits gained from rendering sounds with individualized head-related transfer functions (HRTFs). Noise bursts with identical virtual elevations (0°-90°) were presented back-to-back in 5-10 burst "runs" in a roving oddball paradigm. Detection of a run's start (i.e., elevation change detection) was enhanced when bursts were rendered with an individualized compared to a non-individualized HRTF. ERPs showed increased P3 amplitudes to first bursts of a run in the individualized HRTF condition. Condition differences in P3 amplitudes and behavior were positively correlated. The data suggest that part of the individualization benefit reflects post-sensory processes.


Subject(s)
Evoked Potentials , Head , Noise , Sound , Sound Localization
8.
J Acoust Soc Am ; 138(3): 1297-304, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26428768

ABSTRACT

Speech recognition was measured as a function of the target-to-masker ratio (TMR) with syntactically similar speech maskers. In the first experiment, listeners were instructed to report keywords from the target sentence. Data averaged across listeners showed a plateau in performance below 0 dB TMR when masker and target sentences were from the same talker. In this experiment, some listeners tended to report the target words at all TMRs in accordance with the instructions, while others reported keywords from the louder of the sentences, contrary to the instructions. In the second experiment, stimuli were the same as in the first experiment, but listeners were also instructed to avoid reporting the masker keywords, and a payoff matrix penalizing masker keywords and rewarding target keywords was used. In this experiment, listeners reduced the number of reported masker keywords, and increased the number of reported target keywords overall, and the average data showed a local minimum at 0 dB TMR with same-talker maskers. The best overall performance with a same-talker masker was obtained with a level difference of 9 dB, where listeners achieved near perfect performance when the target was louder, and at least 80% correct performance when the target was the quieter of the two sentences.
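The TMR manipulated above is the level of the target relative to the masker in dB; realizing a requested TMR amounts to scaling the masker against a fixed target before summing. A sketch with plain sample lists standing in for speech recordings:

```python
import math

def rms(x):
    return math.sqrt(sum(s * s for s in x) / len(x))

def mix_at_tmr(target, masker, tmr_db):
    """Scale the masker so that 20*log10(rms(target)/rms(masker)) == tmr_db,
    then sum the two signals sample by sample."""
    gain = rms(target) / rms(masker) / (10 ** (tmr_db / 20))
    scaled = [gain * s for s in masker]
    return [t + m for t, m in zip(target, scaled)], scaled

target = [math.sin(0.1 * i) for i in range(1000)]
masker = [math.sin(0.037 * i + 1.0) for i in range(1000)]
_, scaled = mix_at_tmr(target, masker, 9.0)  # target 9 dB above the masker
print(round(20 * math.log10(rms(target) / rms(scaled)), 6))  # 9.0
```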


Subject(s)
Noise/adverse effects , Perceptual Masking , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adult , Audiometry, Speech , Comprehension , Female , Humans , Male , Recognition, Psychology , Sex Factors , Young Adult
9.
Front Neurosci ; 8: 370, 2014.
Article in English | MEDLINE | ID: mdl-25520607

ABSTRACT

It is widely acknowledged that individualized head-related transfer function (HRTF) measurements are needed to adequately capture all of the 3D spatial hearing cues. However, many perceptual studies have shown that localization accuracy in the lateral dimension is only minimally decreased by the use of non-individualized head-related transfer functions. This evidence supports the idea that the individualized components of an HRTF could be isolated from those that are more general in nature. In the present study we decomposed the HRTF at each location into average, lateral and intraconic spectral components, along with an interaural time difference (ITD), in an effort to isolate the sound localization cues that are responsible for the inter-individual differences in localization performance. HRTFs for a given listener were then reconstructed systematically with components that were both individualized and non-individualized in nature, and the effect of each modification was analyzed via a virtual localization test where brief 250 ms noise bursts were rendered with the modified HRTFs. Results indicate that the cues important for individualization of HRTFs are contained almost exclusively in the intraconic portion of the HRTF spectra and localization is only minimally affected by introducing non-individualized cues into the other HRTF components. These results provide new insights into what specific inter-individual differences in head-related acoustical features are most relevant to sound localization, and provide a framework for how future human-machine interfaces might be more effectively generalized and/or individualized.

10.
J Acoust Soc Am ; 128(5): 2998-10, 2010 Nov.
Article in English | MEDLINE | ID: mdl-21110595

ABSTRACT

In many multitalker listening tasks, the degradation in performance that occurs when the number of interfering talkers increases from one to two is much larger than would be predicted from the corresponding decrease in the signal-to-noise ratio (SNR). In this experiment, a variety of contextually-relevant speech maskers, contextually-irrelevant speech maskers and non-speech maskers were used to examine the impact that the characteristics of the interfering sound sources have on the magnitude of this "multimasker penalty." The results show that a significant multimasker penalty only occurred in cases where two specific conditions were met: 1) the stimulus contained at least one contextually-relevant masker that could be confused with the target; and 2) the signal-to-noise ratio of the target relative to the combined masker stimulus was less than 0 dB. Remarkably, in cases where one masker was contextually relevant, the specific characteristics of the second masker had virtually no impact on the size of the multimasker penalty. Indeed, when the results were corrected for random guessing, there was essentially no difference in performance between conditions with three contextually-relevant talkers and those with two contextually-relevant talkers and one irrelevant talker. The results of a second experiment suggest that the listeners are generally able to hear keywords spoken by all three talkers even in situations where the multimasker penalty occurs, implying that the primary cause of the penalty is a degradation in the listener's ability to use prosodic cues and voice characteristics to link together words spoken at different points in the target phrase.
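The abstract does not spell out its guessing correction, but the standard formula for an n-alternative task rescales observed proportion correct so that chance maps to zero. A sketch with hypothetical numbers:

```python
def correct_for_guessing(p_observed, n_alternatives):
    """Classic correction for random guessing in an n-alternative task:
    p_true = (p_observed - chance) / (1 - chance)."""
    chance = 1.0 / n_alternatives
    return (p_observed - chance) / (1.0 - chance)

# Hypothetical numbers: 62.5% observed correct in a 4-alternative task
print(correct_for_guessing(0.625, 4))  # 0.5
```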


Subject(s)
Dichotic Listening Tests , Models, Neurological , Perceptual Masking/physiology , Speech Perception/physiology , Acoustic Stimulation/methods , Adult , Cues , Discrimination, Psychological/physiology , Female , Humans , Male , Middle Aged , Young Adult
11.
J Acoust Soc Am ; 126(6): 3199-208, 2009 Dec.
Article in English | MEDLINE | ID: mdl-20000933

ABSTRACT

Although high-frequency content is known to be critically important for the accurate localization of isolated sounds, relatively little is known about the importance of high-frequency spectral content for the localization of sounds in the presence of a masker. In this experiment, listeners were asked to identify the location of a pulsed-noise target in the presence of a randomly located continuous noise masker. Both the target and masker were low-pass filtered at one of eight cutoff frequencies ranging from 1 to 16 kHz, and the signal-to-noise ratio was varied from -12 to +12 dB. The results confirm the importance of high frequencies for the localization of isolated sounds, and show that high-frequency content remains critical in cases where the target sound is masked by a spatially separated masker. In fact, when two sources of the same level are randomly located in space, these results show that a decrease in stimulus bandwidth from 16 to 12 kHz might result in a 30% increase in overall localization error.


Subject(s)
Noise , Perceptual Masking , Sound Localization , Acoustic Stimulation , Analysis of Variance , Ear , Female , Humans , Male , Psychoacoustics , Signal Detection, Psychological , Sound Spectrography , Task Performance and Analysis , Young Adult
12.
J Acoust Soc Am ; 125(6): 4006-22, 2009 Jun.
Article in English | MEDLINE | ID: mdl-19507982

ABSTRACT

When a target voice is masked by an increasingly similar masker voice, increases in energetic masking are likely to occur due to increased spectro-temporal overlap in the competing speech waveforms. However, the impact of this increase may be obscured by informational masking effects related to the increased confusability of the target and masking utterances. In this study, the effects of target-masker similarity and the number of competing talkers on the energetic component of speech-on-speech masking were measured with an ideal time-frequency segregation (ITFS) technique that retained all the target-dominated time-frequency regions of a multitalker mixture but eliminated all the time-frequency regions dominated by the maskers. The results show that target-masker similarity has a small but systematic impact on energetic masking, with roughly a 1 dB release from masking for same-sex maskers versus same-talker maskers and roughly an additional 1 dB release from masking for different-sex masking voices. The results of a second experiment measuring ITFS performance with up to 18 interfering talkers indicate that energetic masking increased systematically with the number of competing talkers. These results suggest that energetic masking differences related to target-masker similarity have a much smaller impact on multitalker listening performance than energetic masking effects related to the number of competing talkers in the stimulus and non-energetic masking effects related to the confusability of the target and masking voices.
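The ITFS technique can be sketched as an ideal binary mask over magnitude spectrograms: keep each time-frequency cell where the target dominates, zero the rest. Toy 2x3 magnitude grids stand in for real STFTs here:

```python
def ideal_binary_mask(target_mag, masker_mag, criterion_db=0.0):
    """1 where the target's local magnitude exceeds the masker's by the
    criterion (a local SNR test), else 0."""
    ratio = 10 ** (criterion_db / 20)
    return [[1 if t > ratio * m else 0 for t, m in zip(trow, mrow)]
            for trow, mrow in zip(target_mag, masker_mag)]

def apply_mask(mixture_mag, mask):
    return [[x * b for x, b in zip(row, brow)]
            for row, brow in zip(mixture_mag, mask)]

# Toy magnitude spectrograms: rows = frequency bins, columns = time frames
target = [[5, 1, 4], [0, 3, 2]]
masker = [[1, 2, 1], [4, 1, 2]]
mixture = [[t + m for t, m in zip(tr, mr)] for tr, mr in zip(target, masker)]
mask = ideal_binary_mask(target, masker)
print(mask)                       # [[1, 0, 1], [0, 1, 0]]
print(apply_mask(mixture, mask))  # [[6, 0, 5], [0, 4, 0]]
```

The mask is "ideal" because it is computed from the separately known target and masker signals; resynthesis would then invert the masked STFT back to a waveform.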


Subject(s)
Perceptual Masking , Speech Perception , Acoustic Stimulation , Adolescent , Adult , Analysis of Variance , Discrimination, Psychological , Female , Humans , Male , Middle Aged , Neuropsychological Tests , Psychoacoustics , Sex Characteristics , Speech , Task Performance and Analysis , Time Factors , Young Adult
13.
J Acoust Soc Am ; 122(3): 1693, 2007 Sep.
Article in English | MEDLINE | ID: mdl-17927429

ABSTRACT

When listeners hear a target signal in the presence of competing sounds, they are quite good at extracting information at instances when the local signal-to-noise ratio of the target is most favorable. Previous research suggests that listeners can easily understand a periodically interrupted target when it is interleaved with noise. It is not clear if this ability extends to the case where an interrupted target is alternated with a speech masker rather than noise. This study examined speech intelligibility in the presence of noise or speech maskers, which were either continuous or interrupted at one of six rates between 4 and 128 Hz. Results indicated that with noise maskers, listeners performed significantly better with interrupted, rather than continuous maskers. With speech maskers, however, performance was better in continuous, rather than interrupted masker conditions. Presumably the listeners used continuity as a cue to distinguish the continuous masker from the interrupted target. Intelligibility in the interrupted masker condition was improved by introducing a pitch difference between the target and speech masker. These results highlight the role that target-masker differences in continuity and pitch play in the segregation of competing speech signals.
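Interrupting a masker at a given rate amounts to square-wave gating at a 50% duty cycle. A sketch (real stimuli would also ramp the gate edges to avoid audible clicks):

```python
def interrupt(signal, fs, rate_hz):
    """Gate a signal on/off with a 50% duty-cycle square wave at rate_hz."""
    out = []
    for i, s in enumerate(signal):
        phase = (i * rate_hz / fs) % 1.0
        out.append(s if phase < 0.5 else 0.0)
    return out

fs = 8000
carrier = [1.0] * fs                 # constant stand-in; real maskers are speech or noise
gated = interrupt(carrier, fs, 16)   # 16 Hz, within the 4-128 Hz range studied
print(sum(gated) / len(gated))       # 0.5: half the samples survive the gate
```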


Subject(s)
Auditory Threshold/physiology , Perceptual Masking , Speech Intelligibility , Speech Perception/physiology , Acoustic Stimulation , Adult , Audiometry , Humans , Middle Aged , Noise , Phonetics , Psychoacoustics
14.
J Acoust Soc Am ; 122(3): 1724, 2007 Sep.
Article in English | MEDLINE | ID: mdl-17927432

ABSTRACT

Similarity between the target and masking voices is known to have a strong influence on performance in monaural and binaural selective attention tasks, but little is known about the role it might play in dichotic listening tasks with a target signal and one masking voice in the one ear and a second independent masking voice in the opposite ear. This experiment examined performance in a dichotic listening task with a target talker in one ear and same-talker, same-sex, or different-sex maskers in both the target and the unattended ears. The results indicate that listeners were most susceptible to across-ear interference with a different-sex within-ear masker and least susceptible with a same-talker within-ear masker, suggesting that the amount of across-ear interference cannot be predicted from the difficulty of selectively attending to the within-ear masking voice. The results also show that the amount of across-ear interference consistently increases when the across-ear masking voice is more similar to the target speech than the within-ear masking voice is, but that no corresponding decline in across-ear interference occurs when the across-ear voice is less similar to the target than the within-ear voice. These results are consistent with an "integrated strategy" model of speech perception where the listener chooses a segregation strategy based on the characteristics of the masker present in the target ear and the amount of across-ear interference is determined by the extent to which this strategy can also effectively be used to suppress the masker in the unattended ear.


Subject(s)
Auditory Perception/physiology , Ear/physiology , Hearing/physiology , Perceptual Masking , Attention , Dichotic Listening Tests , Functional Laterality , Humans , Loudness Perception , Psychoacoustics , Sound Spectrography , Speech Acoustics
15.
Percept Psychophys ; 69(1): 79-91, 2007 Jan.
Article in English | MEDLINE | ID: mdl-17515218

ABSTRACT

A priori information about the location of the target talker plays a critical role in cocktail-party listening tasks, but little is known about the influence of imperfect spatial information in situations in which the listener has some knowledge about the location of the target speech but does not know its exact location prior to hearing the stimulus. In this study, spatial uncertainty was varied by adjusting the probability that the target talker in a multitalker stimulus would change locations at the end of each trial. The results show that listeners can adapt their strategies according to the statistical properties of a dynamic acoustic environment but that this adaptation is a relatively slow process that may require dozens of trials to complete.


Subject(s)
Attention , Perceptual Masking , Social Environment , Sound Localization , Speech Perception , Adaptation, Psychological , Cues , Humans , Orientation , Psychoacoustics , Reference Values , Speech Acoustics , Voice Quality
16.
J Acoust Soc Am ; 119(4): 2327-33, 2006 Apr.
Article in English | MEDLINE | ID: mdl-16642846

ABSTRACT

When listening to natural speech, listeners are fairly adept at using cues such as pitch, vocal tract length, prosody, and level differences to extract a target speech signal from an interfering speech masker. However, little is known about the cues that listeners might use to segregate synthetic speech signals that retain the intelligibility characteristics of speech but lack many of the features that listeners normally use to segregate competing talkers. In this experiment, intelligibility was measured in a diotic listening task that required the segregation of two simultaneously presented synthetic sentences. Three types of synthetic signals were created: (1) sine-wave speech (SWS); (2) modulated noise-band speech (MNB); and (3) modulated sine-band speech (MSB). The listeners performed worse for all three types of synthetic signals than they did with natural speech signals, particularly at low signal-to-noise ratio (SNR) values. Of the three synthetic signals, the results indicate that SWS signals preserve more of the voice characteristics used for speech segregation than MNB and MSB signals. These findings have implications for cochlear implant users, who rely on signals very similar to MNB speech and thus are likely to have difficulty understanding speech in cocktail-party listening environments.
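Sine-wave speech replaces the speech signal with a handful of sinusoids tracking its formants. A rough sketch of that synthesis, assuming the formant tracks have already been measured (the trajectories below are hypothetical, not extracted from real speech):

```python
import math

def sine_wave_speech(formant_tracks, amps, fs):
    """Sum one phase-accumulated sinusoid per formant track (Hz per sample),
    so each component's frequency can vary smoothly over time."""
    n = len(formant_tracks[0])
    out = [0.0] * n
    for track, amp in zip(formant_tracks, amps):
        phase = 0.0
        for i in range(n):
            phase += 2 * math.pi * track[i] / fs
            out[i] += amp * math.sin(phase)
    return out

fs = 8000
n = fs // 2  # half a second of signal
# Hypothetical trajectories: F1 glides 300 -> 700 Hz while F2 sits at 1200 Hz;
# real SWS tracks formants measured from recorded sentences.
f1 = [300 + 400 * i / n for i in range(n)]
f2 = [1200.0] * n
sws = sine_wave_speech([f1, f2], [1.0, 0.5], fs)
print(len(sws))  # 4000
```

MNB speech is built analogously, except each band is a noise carrier amplitude-modulated by the speech envelope in that band, which is why it resembles the signal a cochlear implant delivers.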


Subject(s)
Attention , Communication Aids for Disabled , Functional Laterality , Perceptual Masking , Speech Acoustics , Speech Perception , Adult , Female , Humans , Male , Middle Aged , Sound Spectrography , Speech Intelligibility , Voice Quality
17.
J Acoust Soc Am ; 120(6): 4007-18, 2006 Dec.
Article in English | MEDLINE | ID: mdl-17225427

ABSTRACT

When a target speech signal is obscured by an interfering speech waveform, comprehension of the target message depends both on the successful detection of the energy from the target speech waveform and on the successful extraction and recognition of the spectro-temporal energy pattern of the target out of a background of acoustically similar masker sounds. This study attempted to isolate the effects that energetic masking, defined as the loss of detectable target information due to the spectral overlap of the target and masking signals, has on multitalker speech perception. This was achieved through the use of ideal time-frequency binary masks that retained those spectro-temporal regions of the acoustic mixture that were dominated by the target speech but eliminated those regions that were dominated by the interfering speech. The results suggest that energetic masking plays a relatively small role in the overall masking that occurs when speech is masked by interfering speech but a much more significant role when speech is masked by interfering noise.


Subject(s)
Perceptual Masking , Speech , Adolescent , Adult , Female , Humans , Male , Middle Aged , Speech Perception , Time Factors
18.
J Acoust Soc Am ; 118(5): 3241-51, 2005 Nov.
Article in English | MEDLINE | ID: mdl-16334903

ABSTRACT

When a masking sound is spatially separated from a target speech signal, substantial releases from masking typically occur both for speech and noise maskers. However, when a delayed copy of the masker is also presented at the location of the target speech (a condition that has been referred to as the front target, right-front masker or F-RF configuration), the advantages of spatial separation vanish for noise maskers but remain substantial for speech maskers. This effect has been attributed to precedence, which introduces an apparent spatial separation between the target and masker in the F-RF configuration that helps the listener to segregate the target from a masking voice but not from a masking noise. In this study, virtual synthesis techniques were used to examine variations of the F-RF configuration in an attempt to more fully understand the stimulus parameters that influence the release from masking obtained in that condition. The results show that the release from speech-on-speech masking caused by the addition of the delayed copy of the masker is robust across a wide variety of source locations, masker locations, and masker delay values. This suggests that the speech unmasking that occurs in the F-RF configuration is not dependent on any single perceptual cue and may indicate that F-RF speech segregation is only partially based on the apparent left-right location of the RF masker.
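The core of the F-RF manipulation is a delay-and-add operation: the masker is presented from one location and a delayed copy from another. This sketch shows only the delay-and-add step on a unit impulse; actual F-RF stimuli render each copy at its spatial location (e.g., via HRTFs), which is omitted here:

```python
def delay_and_add(signal, fs, delay_ms, gain=1.0):
    """Mix a gain-scaled copy of the signal, delayed by delay_ms, into it."""
    d = int(round(fs * delay_ms / 1000))
    padded = signal + [0.0] * d
    delayed = [0.0] * d + signal
    return [a + gain * b for a, b in zip(padded, delayed)]

fs = 8000
click = [1.0] + [0.0] * 99            # unit impulse as a stand-in masker
frf = delay_and_add(click, fs, 4.0)   # lead-lag pair 4 ms apart
print(frf.index(1.0), frf.index(1.0, 1))  # 0 32
```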


Subject(s)
Acoustics , Environment , Speech Perception/physiology , User-Computer Interface , Acoustic Stimulation , Adult , Female , Humans , Male , Middle Aged , Perceptual Masking/physiology
19.
Hum Factors ; 47(1): 188-98, 2005.
Article in English | MEDLINE | ID: mdl-15960096

ABSTRACT

The effect of hearing protection devices (HPDs) on sound localization was examined in the context of an auditory-cued visual search task. Participants were required to locate and identify a visual target in a field of 5, 20, or 50 visual distractors randomly distributed on the interior surface of a sphere. Four HPD conditions were examined: earplugs, earmuffs, both earplugs and earmuffs simultaneously (double hearing protection), and no hearing protection. In addition, there was a control condition in which no auditory cue was provided. A repeated measures analysis of variance revealed significant main effects of HPD for both search time and head motion data (p < .05), indicating that the degree to which localization is disrupted by HPDs varies with the type of device worn. When both earplugs and earmuffs are worn simultaneously, search times and head motion are more similar to those found when no auditory cue is provided than when either earplugs or earmuffs alone are worn, suggesting that sound localization cues are so severely disrupted by double hearing protection that the listener can recover little or no information regarding the direction of sound source origin. Potential applications of this research include high-noise military, aerospace, and industrial settings in which HPDs are necessary but wearing double protection may compromise safety and/or performance.


Subject(s)
Auditory Perception/physiology , Auditory Threshold/physiology , Ear Protective Devices/standards , Noise, Occupational/adverse effects , Sound Localization/physiology , Adolescent , Adult , Analysis of Variance , Equipment Design , Equipment Safety , Female , Hearing Loss, Noise-Induced/prevention & control , Humans , Male , Occupational Diseases/prevention & control , Probability , Reference Values
20.
J Acoust Soc Am ; 117(1): 292-304, 2005 Jan.
Article in English | MEDLINE | ID: mdl-15704422

ABSTRACT

Recent results have shown that listeners attending to the quieter of two speech signals in one ear (the target ear) are highly susceptible to interference from normal or time-reversed speech signals presented in the unattended ear. However, speech-shaped noise signals have little impact on the segregation of speech in the opposite ear. This suggests that there is a fundamental difference between the across-ear interference effects of speech and nonspeech signals. In this experiment, the intelligibility and contralateral-ear masking characteristics of three synthetic speech signals with parametrically adjustable speech-like properties were examined: (1) a modulated noise-band (MNB) speech signal composed of fixed-frequency bands of envelope-modulated noise; (2) a modulated sine-band (MSB) speech signal composed of fixed-frequency amplitude-modulated sinewaves; and (3) a "sinewave speech" signal composed of sine waves tracking the first four formants of speech. In all three cases, a systematic decrease in performance in the two-talker target-ear listening task was found as the number of bands in the contralateral speech-like masker increased. These results suggest that speech-like fluctuations in the spectral envelope of a signal play an important role in determining the amount of across-ear interference that a signal will produce in a dichotic cocktail-party listening task.


Subject(s)
Dichotic Listening Tests , Ear, Middle/physiology , Social Environment , Speech Perception , Speech, Alaryngeal , Adult , Female , Humans , Male , Middle Aged , Perceptual Masking/physiology