Results 1 - 20 of 44
1.
Am J Audiol ; : 1-13, 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38896881

ABSTRACT

PURPOSE: The purpose of this study was to determine whether providing realistic auditory or somatosensory cues to spatial location would affect measures of vestibulo-ocular reflex gain in a rotary chair testing (RCT) context.
METHOD: This was a fully within-subject design. Thirty young adults aged 18-30 years (16 men, 14 women by self-identification) completed sinusoidal harmonic acceleration testing in a rotary chair under five different conditions, each at three rotational frequencies (0.01, 0.08, and 0.32 Hz). We recorded gain as the ratio of the amplitude of eye movement to chair movement using standard clinical procedures. The five conditions consisted of two without spatial information (silence, tasking via headphones) and three with either auditory (refrigerator sound, tasking via speaker) or somatosensory (fan) information. Two of the conditions also included mental tasking (tasking via headphones, tasking via speaker) and differed only in the spatial localizability of the verbal instructions. We used linear mixed-effects modeling to compare pairs of conditions, specifically examining the effects of the availability of spatial cues in the environment. This study was preregistered on Open Science Framework (https://osf.io/2gqcf/).
RESULTS: Results showed significant effects of frequency in all conditions (p < .05), but the only pairs of conditions that differed significantly were those including tasking in one condition but not the other (e.g., tasking via headphones vs. silence). Post hoc equivalence testing confirmed that the remaining comparisons were not meaningfully different.
CONCLUSIONS: These findings suggest that the presence of externally localizable sensory information, whether auditory or somatosensory, does not affect measures of gain in RCT to any relevant degree. However, these findings also contribute to the growing body of evidence that mental engagement ("tasking") increases gain whether or not it is delivered via localizable instructions.
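As an informal illustration of the analysis described above, the sketch below computes gain as the ratio of eye-to-chair movement amplitude at the stimulus frequency and notes how the condition comparisons could be fit as a linear mixed-effects model. This is not the authors' code; all function, column, and variable names are hypothetical.

```python
import numpy as np

def sinusoid_amplitude(signal, fs, freq_hz):
    """Least-squares amplitude of the component at freq_hz."""
    t = np.arange(len(signal)) / fs
    X = np.column_stack([np.sin(2 * np.pi * freq_hz * t),
                         np.cos(2 * np.pi * freq_hz * t),
                         np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(X, signal, rcond=None)
    return np.hypot(coef[0], coef[1])

def vor_gain(eye_velocity, chair_velocity, fs, freq_hz):
    """Gain = amplitude of the eye response / amplitude of the chair stimulus."""
    return (sinusoid_amplitude(eye_velocity, fs, freq_hz) /
            sinusoid_amplitude(chair_velocity, fs, freq_hz))

# Condition comparisons could then be run as a linear mixed-effects model,
# e.g. with statsmodels (one row per subject x condition x frequency):
#   import statsmodels.formula.api as smf
#   model = smf.mixedlm("gain ~ condition * freq_hz", df, groups=df["subject"]).fit()
```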

2.
Brain Res Bull ; 210: 110923, 2024 May.
Article in English | MEDLINE | ID: mdl-38462137

ABSTRACT

Currently, we face an exponentially increasing interest in immersion, especially sensory-driven immersion, mainly due to the rapid development of ideas and business models centered around a digital virtual universe as well as the increasing availability of affordable immersive technologies for education, communication, and entertainment. However, a clear definition of 'immersion', in terms of established neurocognitive concepts and measurable properties, remains elusive, slowing research on the human side of immersive interfaces. To address this problem, we propose a conceptual, taxonomic model of attention in immersion. We argue that (a) modeling immersion theoretically, as well as studying it experimentally, requires a detailed characterization of the role of attention in immersion, even though (b) attention, while necessary, cannot be a sufficient condition for defining immersion. Our broader goal is to characterize immersion in terms that will be compatible with established psychophysiological measures that could then, in principle, be used for the assessment and eventually the optimization of an immersive experience. We start from the perspective that immersion requires the projection of attention to an induced reality, and build on accepted taxonomies of different modes of attention to develop our two-competitor model. The two-competitor model allows for a quantitative implementation and has an easy graphical interpretation. It helps to highlight the important link between different modes of attention and affect in studying immersion.


Subject(s)
Virtual Reality , Humans
3.
JASA Express Lett ; 4(1)2024 Jan 01.
Article in English | MEDLINE | ID: mdl-38189672

ABSTRACT

This study was designed to investigate the relationship between sound level and autonomic arousal using acoustic signals similar in level and acoustic properties to common sounds in the built environment. Thirty-three young adults were exposed to background sound modeled on ventilation equipment noise, presented at levels ranging from 35 to 75 dBA sound pressure level (SPL) in 2-min blocks, while they sat and read quietly. Autonomic arousal was measured in terms of skin conductance level. Results suggest that there is a direct relationship between sound level and arousal, even at these realistic levels. However, the effect of habituation appears to be more important overall.


Subject(s)
Galvanic Skin Response , Sound , Young Adult , Humans , Acoustics , Arousal , Autonomic Nervous System
4.
Am J Speech Lang Pathol ; 32(2): 506-522, 2023 03 09.
Article in English | MEDLINE | ID: mdl-36638359

ABSTRACT

PURPOSE: Hypokinetic dysarthria associated with Parkinson's disease (PD) is characterized by dysprosody, yet the literature is mixed with respect to how dysprosody affects the ability to mark lexical stress, possibly due to differences in speech tasks used to assess lexical stress. The purpose of this study was to compare how people with and without PD modulate the acoustic dimensions of lexical stress (fundamental frequency, intensity, and duration) to mark lexical stress across three different speech tasks.
METHOD: Twelve individuals with mild-to-moderate idiopathic PD and 12 age- and sex-matched older adult controls completed three speech tasks: picture description, word production in isolation, and word production in lists. Outcome measures were the fundamental frequency, intensity, and duration of the vocalic segments of two trochees (initial stress) and two iambs (final stress) spoken in all three tasks.
RESULTS: There were very few group differences. Both groups marked trochees by modulating intensity and fundamental frequency and iambs by modulating duration. Task had a significant impact on the stress patterns used by both groups. Stress patterns were most differentiated in words produced in isolation and least differentiated in lists of words.
CONCLUSIONS: People with PD did not demonstrate impairments in the production of lexical stress, suggesting that dysprosody associated with PD does not impact all types of prosody in the same way. However, there were reduced distinctions in stress marking that were more apparent in trochees than iambs. In addition, the task used to assess prosody has a significant effect on all acoustic measures. Future research should focus on the use of connected speech tasks to obtain more generalizable measures of prosody in PD.


Subject(s)
Parkinson Disease , Speech Perception , Humans , Aged , Speech , Parkinson Disease/complications , Parkinson Disease/diagnosis , Dysarthria/etiology , Dysarthria/complications , Acoustics
5.
J Acoust Soc Am ; 152(5): 3107, 2022 11.
Article in English | MEDLINE | ID: mdl-36456295

ABSTRACT

Previous research suggests that learning to use a phonetic property [e.g., voice onset time (VOT)] for talker identity supports a left-ear processing advantage. Specifically, listeners trained to identify two "talkers" who differed only in characteristic VOTs showed faster talker identification for stimuli presented to the left ear compared to those presented to the right ear, which is interpreted as evidence of hemispheric lateralization consistent with task demands. Experiment 1 (n = 97) aimed to replicate this finding and identify predictors of performance; experiment 2 (n = 79) aimed to replicate this finding under conditions that better facilitate observation of laterality effects. Listeners completed a talker identification task during pretest, training, and posttest phases. Inhibition, category identification, and auditory acuity were also assessed in experiment 1. Listeners learned to use VOT for talker identity, and this learning was positively associated with auditory acuity. Talker identification was not influenced by ear of presentation, and Bayes factors indicated strong support for the null. These results suggest that talker-specific phonetic variation is not sufficient to induce a left-ear advantage for talker identification; together with the extant literature, this instead suggests that hemispheric lateralization for talker-specific phonetic variation requires phonetic variation to be conditioned on talker differences in source characteristics.
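One simple way to quantify support for a null result, in the spirit of the Bayes factors reported here, is the BIC approximation of Wagenmakers (2007). This is offered only as an illustrative sketch and is not necessarily the Bayes factor method used in the study; the BIC values below are made up.

```python
import numpy as np

def bf01_from_bic(bic_null, bic_alt):
    """BIC-approximation Bayes factor; BF01 > 1 means the data favor the null."""
    return np.exp((bic_alt - bic_null) / 2.0)

print(bf01_from_bic(bic_null=1002.3, bic_alt=1009.8))  # ~42.5 in favor of the null
```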


Subject(s)
Cues , Phonetics , Bayes Theorem , Auditory Perception , Discrimination, Psychological
6.
J Acoust Soc Am ; 152(3): 1375, 2022 09.
Article in English | MEDLINE | ID: mdl-36182286

ABSTRACT

A wide variety of research and clinical assessments involve presenting speech stimuli in the presence of some kind of noise. Here, I selectively review two theoretical perspectives and discuss ways in which these perspectives may help researchers understand the consequences for listeners of adding noise to a speech signal. I argue that adding noise changes more about the listening task than merely making the signal more difficult to perceive. To fully understand the effects of an added noise on speech perception, we must consider not just how much the noise affects task difficulty, but also how it affects all of the systems involved in understanding speech: increasing message uncertainty, modifying attentional demand, altering affective response, and changing motivation to perform the task.


Subject(s)
Noise , Speech Perception , Auditory Perception , Noise/adverse effects , Speech , Speech Perception/physiology
7.
J Acoust Soc Am ; 150(4): 3149, 2021 10.
Article in English | MEDLINE | ID: mdl-34717455

ABSTRACT

Exposure to noise (unwanted sound) is considered a major public health issue in the United States and internationally. Previous work has shown that even acute noise exposure can influence physiological response in humans and that individuals differ markedly in their susceptibility to noise. Recent research also suggests that specific acoustic properties of noise may have distinct effects on human physiological response. Much of the existing research on physiological response to noise consists of laboratory studies using very simple acoustic stimuli, such as white noise or tone bursts, or field studies of longer-term workplace noise exposure that may neglect acoustic properties of the noise entirely. By using laboratory exposure to realistic heating, ventilation, and air conditioning (HVAC) noise, the current study explores the interaction between acoustic properties of annoying noise and individual response to working in occupational noise. This study assessed autonomic response to two acoustically distinct noises while participants performed cognitively demanding work. Results showed that the two HVAC noises affected physiological arousal in different ways. Individual differences in physiological response to noise as a function of noise sensitivity were also observed. Further research is necessary to link specific acoustic characteristics with differential physiological responses in humans.


Subject(s)
Air Conditioning , Noise, Occupational , Acoustic Stimulation , Heating , Humans , Ventilation
8.
J Gerontol A Biol Sci Med Sci ; 76(9): e213-e220, 2021 08 13.
Article in English | MEDLINE | ID: mdl-33929532

ABSTRACT

BACKGROUND: Hearing loss is associated with a greater risk of death in older adults. This relationship has been attributed to an increased risk of injury, particularly from falling, in individuals with hearing loss. However, the link between hearing loss and mortality across the life span is less clear.
METHODS: We used structural equation modeling and mediation analysis to investigate the relationship between hearing loss, falling, injury, and mortality across the adult life span in public-use data from the National Health Interview Survey and the National Death Index. We examined (a) the association between self-reported hearing problems and later mortality, (b) the associations between self-reported hearing problems and the risk, degree, and type of injury, (c) the mediating role of falling and injury in the association between self-reported hearing problems and mortality, and (d) whether these relationships differ in young (18-39), middle-aged (40-59), and older (60+) age groups.
RESULTS: In all three age groups, those reporting hearing problems were more likely to fall, to sustain an injury, and to sustain a serious injury than those not reporting hearing problems. While there was no significant association between hearing loss and mortality in the youngest group, there was for middle-aged and older participants, and for both of these groups fall-related injury was a significant mediator of the relationship.
CONCLUSIONS: Fall-related injury mediates the relationship between hearing loss and mortality for middle-aged as well as older adults, suggesting a need for further research into mechanisms and remediation.
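For readers unfamiliar with mediation analysis, the hedged sketch below shows the general shape of such a model using statsmodels' Mediation class. It is not the authors' structural equation model, and the variable names (hearing_problem, fall_injury, died, age) are hypothetical placeholders.

```python
import statsmodels.api as sm
from statsmodels.stats.mediation import Mediation

def fall_injury_mediation(df):
    # Outcome model: mortality as a function of hearing problems, injury, and age.
    outcome = sm.GLM.from_formula("died ~ hearing_problem + fall_injury + age",
                                  df, family=sm.families.Binomial())
    # Mediator model: fall-related injury as a function of hearing problems and age.
    mediator = sm.GLM.from_formula("fall_injury ~ hearing_problem + age",
                                   df, family=sm.families.Binomial())
    med = Mediation(outcome, mediator, exposure="hearing_problem",
                    mediator="fall_injury")
    # Reports average direct, indirect (mediated), and total effects.
    return med.fit(n_rep=500)
```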


Subject(s)
Accidental Falls , Hearing Loss , Aged , Hearing Loss/epidemiology , Humans , Latent Class Analysis , Middle Aged , Self Report
9.
Atten Percept Psychophys ; 83(4): 1818-1841, 2021 May.
Article in English | MEDLINE | ID: mdl-33438149

ABSTRACT

Listeners vary in their ability to understand speech in adverse conditions. Differences in both cognitive and linguistic capacities play a role, but increasing evidence suggests that such factors may contribute differentially depending on the listening challenge. Here, we used multilevel modeling to evaluate contributions of individual differences in age, hearing thresholds, vocabulary, selective attention, working memory capacity, personality traits, and noise sensitivity to variability in measures of comprehension and listening effort in two listening conditions. A total of 35 participants completed a battery of cognitive and linguistic tests as well as a spoken story comprehension task using (1) native-accented English speech masked by speech-shaped noise and (2) nonnative-accented English speech without masking. Masker levels were adjusted individually to ensure each participant would show (close to) equivalent word recognition performance across the two conditions. Dependent measures included comprehension test results, self-rated effort, and electrodermal, cardiovascular, and facial electromyographic measures associated with listening effort. Results showed varied patterns of responsivity across the different dependent measures as well as across listening conditions. In particular, results suggested that working memory capacity may play a greater role in the comprehension of nonnative-accented speech than of noise-masked speech, while hearing acuity and personality may have a stronger influence on physiological responses affected by the demands of understanding speech in noise. Furthermore, electrodermal measures may be more strongly affected by the affective response to noise-related interference, while cardiovascular responses may be more strongly affected by demands on working memory and lexical access.


Subject(s)
Speech Perception , Auditory Perception , Humans , Noise , Self Report , Speech
10.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1799-1802, 2020 07.
Article in English | MEDLINE | ID: mdl-33018348

ABSTRACT

This paper presents a proof of concept for contactless and nonintrusive estimation of electrodermal activity (EDA) correlates using a camera. RGB video of the palm under three different lighting conditions showed that, for a suitably chosen illumination strategy, the data from the camera are sufficient to estimate EDA correlates that agree with measurements made using laboratory-grade physiological sensors. The effects we see in the recorded video can be attributed to sweat gland activity, which in turn is known to be correlated with EDA. These effects are so pronounced that simple pixel statistics can be used to quantify them. Such a method benefits from advances in computer vision and graphics research and has the potential to be used in affective computing and psychophysiology research where contact-based sensors may not be suitable.
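A minimal sketch of the "simple pixel statistics" idea, assuming a prerecorded video file and a fixed palm region of interest (both hypothetical):

```python
import cv2
import numpy as np

def palm_pixel_statistics(video_path, roi=(100, 100, 200, 200)):
    """Per-frame mean and standard deviation of pixel intensity inside the ROI."""
    x, y, w, h = roi
    cap = cv2.VideoCapture(video_path)
    means, stds = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        patch = gray[y:y + h, x:x + w].astype(np.float64)
        means.append(patch.mean())
        stds.append(patch.std())
    cap.release()
    return np.array(means), np.array(stds)
```

The resulting per-frame time series could then be compared against a reference EDA recording, which is how agreement with contact sensors would be checked in practice.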


Subject(s)
Galvanic Skin Response , Psychophysiology , Hand , Sweat Glands
11.
J Autism Dev Disord ; 50(11): 4183-4190, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32219633

ABSTRACT

Dysregulation of the autonomic nervous system (ANS), which can be indexed by heart rate variability (HRV), has been posited to contribute to core features of autism spectrum disorder (ASD). However, the relationship between ASD and HRV remains uncertain. We assessed tonic and phasic HRV of 21 children with ASD and 21 age- and IQ-matched typically developing (TD) children and examined (1) group differences in HRV and (2) associations between HRV and ASD symptomatology. Children with ASD showed significantly lower tonic HRV, but similar phasic HRV compared to TD children. Additionally, reduced tonic HRV was associated with atypical attentional responsivity in ASD. Our findings suggest ANS dysregulation is present in ASD and may contribute to atypical attentional responses to sensory stimulation.
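HRV can be indexed in several ways; the sketch below computes common time-domain summaries (SDNN and RMSSD) from inter-beat (RR) intervals as one concrete example, not necessarily the metrics used in this study. The interval values are invented and assumed to be in milliseconds.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Time-domain HRV summaries from a sequence of RR intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return {
        "mean_rr": rr.mean(),                   # mean inter-beat interval
        "sdnn": rr.std(ddof=1),                 # overall variability
        "rmssd": np.sqrt(np.mean(diffs ** 2)),  # beat-to-beat (vagally mediated) variability
    }

print(hrv_time_domain([812, 790, 805, 840, 798, 760, 815]))
```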


Subject(s)
Autism Spectrum Disorder/physiopathology , Heart Rate , Autonomic Nervous System/physiopathology , Child , Female , Humans , Male
12.
Wiley Interdiscip Rev Cogn Sci ; 11(1): e1514, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31381275

ABSTRACT

Listening effort is increasingly recognized as a factor in communication, particularly for and with nonnative speakers, for the elderly, for individuals with hearing impairment, and/or for those working in noise. However, as highlighted by McGarrigle et al., International Journal of Audiology, 2014, 53, 433-445, the term "listening effort" encompasses a wide variety of concepts, including the engagement and control of multiple possibly distinct neural systems for information processing, and the affective response to the expenditure of those resources in a given context. Thus, experimental or clinical methods intended to objectively quantify listening effort may ultimately reflect a complex interaction between the operations of one or more of those information processing systems, and/or the affective and motivational response to the demand on those systems. Here we examine theoretical, behavioral, and psychophysiological factors related to resolving the question of what we are measuring, and why, when we measure "listening effort." This article is categorized under: Linguistics > Language in Mind and Brain; Psychology > Theory and Methods; Psychology > Attention; Psychology > Emotion and Motivation.


Subject(s)
Attention , Cognition/physiology , Emotions/physiology , Humans
13.
J Speech Lang Hear Res ; 62(8): 2872-2881, 2019 08 15.
Article in English | MEDLINE | ID: mdl-31339788

ABSTRACT

Objective: The purpose of this study was to explore the hypothesis that the relationship between hearing problems and cardiovascular disease (CVD) includes a connection to psychological distress.
Design: We used generalized structural equation modeling to assess relationships between self-reported measures of hearing problems, psychological distress, and CVD in a pooled sample of 623,416 adult respondents to the 1997-2017 National Health Interview Survey. Hearing status without hearing aids was self-reported on an ordinal scale and grouped for this study into three categories (excellent or good hearing; a little or moderate trouble hearing; a lot of trouble hearing or deaf). Six CVDs (stroke, angina pectoris, hypertension, heart attack, coronary heart disease, or other heart condition/disease) were incorporated as a latent variable. Psychological distress was evaluated with the Kessler 6 Scale (Kessler et al., 2010). All estimates were population weighted, and standard errors were adjusted for the complex survey design.
Results: Nearly 83% of respondents reported excellent or good hearing, 14% reported a little or moderate trouble hearing, and 3% reported a lot of trouble hearing or said they were deaf. Hearing problems were positively associated with CVD: relative to those reporting excellent or good hearing, adults reporting trouble hearing had a higher probability of CVD. Hearing problems were also significantly associated with psychological distress. When psychological distress was added to the model, the positive associations between hearing problems and CVD were attenuated but remained significant. Results are consistent with the hypothesis that the connection between self-reported hearing problems and CVD is mediated through psychological distress.
Conclusions: The relationship between self-reported hearing problems and CVD is mediated by psychological distress. Further research is needed to identify causal pathways and psychophysiological mechanisms involved in this relationship and to identify effective methods for addressing cardiovascular health-related psychosocial factors in the treatment of hearing impairment.


Subject(s)
Cardiovascular Diseases/epidemiology , Hearing Loss/epidemiology , Psychological Distress , Stress, Psychological/epidemiology , Adult , Aged , Cardiovascular Diseases/complications , Cardiovascular Diseases/psychology , Female , Health Surveys , Hearing Loss/complications , Hearing Loss/psychology , Humans , Latent Class Analysis , Male , Middle Aged , Self Report , Stress, Psychological/complications , United States/epidemiology
14.
J Autism Dev Disord ; 49(10): 3999-4008, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31201579

ABSTRACT

Despite early differences in orienting to sounds, no study to date has investigated whether children with ASD demonstrate impairments in attentional disengagement in the auditory modality. Twenty-one 9- to 15-year-old children with ASD and 20 age- and IQ-matched TD children were presented with an auditory gap-overlap paradigm. Evidence of impaired disengagement in ASD was mixed. The difference in saccadic reaction time between the overlap and gap conditions did not differ between groups. However, children with ASD did show more no-shift trials in the overlap condition, as well as reduced disengagement efficiency, compared to their TD peers. These results provide further support for disengagement impairments in ASD and suggest that these deficits include disengaging from and shifting to unimodal auditory information.


Subject(s)
Attention , Autism Spectrum Disorder/physiopathology , Sound Localization , Child , Child, Preschool , Female , Humans , Male , Reaction Time , Saccades
15.
Hear Res ; 369: 103-119, 2018 11.
Article in English | MEDLINE | ID: mdl-30135023

ABSTRACT

When people make decisions about listening, such as whether to continue attending to a particular conversation or whether to wear their hearing aids to a particular restaurant, they do so on the basis of more than just their estimated performance. Recent research has highlighted the vital role of more subjective qualities such as effort, motivation, and fatigue. Here, we argue that the importance of these factors is largely mediated by a listener's emotional response to the listening challenge, and suggest that emotional responses to communication challenges may provide a crucial link between day-to-day communication stress and long-term health. We start by introducing some basic concepts from the study of emotion and affect. We then develop a conceptual framework to guide future research on this topic through examination of a variety of autonomic and peripheral physiological responses that have been employed to investigate both cognitive and affective phenomena related to challenging communication. We conclude by suggesting the need for further investigation of the links between communication difficulties, emotional response, and long-term health, and make some recommendations intended to guide future research on affective psychophysiology in speech communication.


Subject(s)
Attention , Emotions , Motivation , Psychophysiology/methods , Speech Perception , Auditory Pathways/physiology , Autonomic Nervous System/physiology , Choice Behavior , Cognition , Electrodiagnosis , Heart Function Tests , Humans , Memory , Neurologic Examination
16.
J Speech Lang Hear Res ; 61(7): 1815-1830, 2018 07 13.
Article in English | MEDLINE | ID: mdl-29971338

ABSTRACT

Purpose: The purpose of this study was to investigate the effects of second-language proficiency and linguistic uncertainty on performance and listening effort in mixed language contexts.
Method: Thirteen native speakers of Dutch with varying degrees of fluency in English listened to and repeated sentences produced in both Dutch and English and presented in the presence of single-talker competing speech in both Dutch and English. Target and masker language combinations were presented in both blocked and mixed (unpredictable) conditions. In the blocked condition, the target-masker language combination remained constant within each block of trials, and listeners were informed of both languages before beginning the block. In the mixed condition, target and masker language varied randomly from trial to trial. All listeners participated in all conditions. Performance was assessed in terms of speech reception thresholds, whereas listening effort was quantified in terms of pupil dilation.
Results: Performance (speech reception thresholds) and listening effort (pupil dilation) were both affected by second-language proficiency (English test score) and by target and masker language: performance was better in blocked than in mixed conditions, with Dutch as compared to English targets, and with English as compared to Dutch maskers. English proficiency was correlated with listening performance. Listeners also exhibited greater peak pupil dilation in mixed as compared to blocked conditions for trials with Dutch maskers, whereas pupil dilation during preparation for speaking was higher for English targets than for Dutch ones in almost all conditions.
Conclusions: Both a listener's proficiency in a second language and uncertainty about the target language on a given trial play a significant role in how bilingual listeners attend to speech in the presence of competing speech in different languages, but the precise effects also depend on which language serves as target and which as masker.
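Speech reception thresholds are typically estimated with an adaptive SNR track; the sketch below shows a generic one-up/one-down procedure with an illustrative step size and scoring rule, since the abstract does not specify the exact tracking parameters used.

```python
import numpy as np

def run_srt_track(score_trial, start_snr=0.0, step=2.0, n_trials=20):
    """score_trial(snr) -> True if the sentence was repeated correctly."""
    snr = start_snr
    snrs = []
    for _ in range(n_trials):
        snrs.append(snr)
        snr = snr - step if score_trial(snr) else snr + step  # harder after a correct trial
    return float(np.mean(snrs[-10:]))  # SRT estimate from the last 10 trials

# Example with a simulated listener whose true threshold is about -6 dB SNR:
rng = np.random.default_rng(0)
simulated = lambda snr: rng.random() < 1 / (1 + np.exp(-(snr + 6)))
print(run_srt_track(simulated))
```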


Subject(s)
Linguistics , Mental Competency , Multilingualism , Speech Perception , Uncertainty , Acoustic Stimulation/methods , Adult , Female , Humans , Language , Male , Netherlands , Perceptual Masking , Speech Reception Threshold Test , Young Adult
17.
Cogn Affect Behav Neurosci ; 17(4): 809-825, 2017 08.
Article in English | MEDLINE | ID: mdl-28567568

ABSTRACT

In recent years, there has been increasing interest in studying listening effort. Research on listening effort intersects with the development of active theories of speech perception and contributes to the broader endeavor of understanding speech perception within the context of neuroscientific theories of perception, attention, and effort. Due to the multidisciplinary nature of the problem, researchers vary widely in their precise conceptualization of the catch-all term listening effort. Very recent consensus work stresses the relationship between listening effort and the allocation of cognitive resources, providing a conceptual link to current cognitive neuropsychological theories associating effort with the allocation of selective attention. By linking listening effort to attentional effort, we enable the application of a taxonomy of external and internal attention to the characterization of effortful listening. More specifically, we use a vectorial model to decompose the demand causing listening effort into its mutually orthogonal external and internal components and map the relationship between demanded and exerted effort by means of a resource-limiting term that can represent the influence of motivation as well as vigilance and arousal. Due to its quantitative nature and easy graphical interpretation, this model can be applied to a broad range of problems dealing with listening effort. As such, we conclude that the model provides a good starting point for further research on effortful listening within a more differentiated neuropsychological framework.
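A toy numerical reading of the vectorial idea: demand is a vector with mutually orthogonal external and internal components, and exerted effort is that vector scaled by a resource limit. The functional form of the limit below is an assumption made for illustration, not the model presented in the paper.

```python
import numpy as np

def exerted_effort(external_demand, internal_demand, resources=1.0):
    demand = np.array([external_demand, internal_demand], dtype=float)
    magnitude = np.linalg.norm(demand)           # overall demanded effort
    scale = min(1.0, resources / magnitude) if magnitude > 0 else 0.0
    return demand * scale                        # exerted effort, capped by available resources

print(exerted_effort(0.8, 0.9, resources=1.0))
```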


Subject(s)
Attention , Auditory Perception , Models, Psychological , Algorithms , Humans
18.
Lang Speech ; 60(1): 3-26, 2017 03.
Article in English | MEDLINE | ID: mdl-28326991

ABSTRACT

Native speakers of Spanish with different amounts of experience with English classified stop-consonant voicing (/b/ versus /p/) across different speech accents: English-accented Spanish, native Spanish, and native English. While listeners with little experience with English classified target voicing with an English- or Spanish-like voice onset time (VOT) boundary, predicted by contextual VOT, listeners familiar with English relied on an English-like VOT boundary in an English-accented Spanish context even in the absence of clear contextual cues to English VOT. This indicates that Spanish listeners accommodated English-accented Spanish voicing differently depending on their degree of familiarization with the English norm.
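Category boundaries of the kind discussed here are often estimated by fitting a logistic psychometric function to classification responses and taking the 50% crossover. The sketch below uses invented data purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(vot, boundary, slope):
    return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

vot_ms = np.array([0, 10, 20, 30, 40, 50, 60], dtype=float)
prop_p = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.97, 0.99])  # proportion of /p/ responses

(boundary, slope), _ = curve_fit(logistic, vot_ms, prop_p, p0=[30.0, 0.2])
print(f"Estimated VOT boundary: {boundary:.1f} ms")
```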


Subject(s)
Adaptation, Psychological , Cues , Multilingualism , Phonetics , Speech Acoustics , Speech Perception , Voice Quality , Acoustic Stimulation , Adult , Humans , Reaction Time , Time Factors , Visual Perception
19.
Front Psychol ; 7: 263, 2016.
Article in English | MEDLINE | ID: mdl-26973564

ABSTRACT

Typically, understanding speech seems effortless and automatic. However, a variety of factors may, independently or interactively, make listening more effortful. Physiological measures may help to distinguish between the application of different cognitive mechanisms whose operation is perceived as effortful. In the present study, physiological and behavioral measures associated with task demand were collected along with behavioral measures of performance while participants listened to and repeated sentences. The goal was to measure psychophysiological reactivity associated with three degraded listening conditions, each of which differed in terms of the source of the difficulty (distortion, energetic masking, and informational masking) and was therefore expected to engage different cognitive mechanisms. These conditions were chosen to be matched for overall performance (keywords correct) and were compared to listening to unmasked speech produced by a natural voice. The three degraded conditions were (1) unmasked speech produced by a computer speech synthesizer, (2) speech produced by a natural voice and masked by speech-shaped noise, and (3) speech produced by a natural voice and masked by two-talker babble. Both masked conditions were presented at a -8 dB signal-to-noise ratio (SNR), a level shown in previous research to result in comparable levels of performance for these stimuli and maskers. Performance was measured in terms of the proportion of keywords identified correctly, and task demand or effort was quantified subjectively by self-report. Measures of psychophysiological reactivity included electrodermal (skin conductance) response frequency and amplitude, blood pulse amplitude, and pulse rate. Results suggest that the two masked conditions evoked stronger psychophysiological reactivity than did the two unmasked conditions, even when behavioral measures of listening performance and listeners' subjective perception of task demand were comparable across the three degraded conditions.
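Presenting maskers at a fixed SNR (here -8 dB) amounts to scaling the masker relative to the target speech; the sketch below assumes an RMS-based level definition, which the abstract does not specify, and uses placeholder signals.

```python
import numpy as np

def mix_at_snr(speech, masker, snr_db):
    """Scale the masker so that 20*log10(rms(speech)/rms(masker)) equals snr_db."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    gain = rms(speech) / (rms(masker) * 10 ** (snr_db / 20.0))
    return speech + gain * masker[: len(speech)]

# Placeholder one-second signals at 16 kHz:
fs = 16000
t = np.arange(fs) / fs
speech = 0.1 * np.sin(2 * np.pi * 220 * t)                     # stand-in for speech
masker = 0.05 * np.random.default_rng(1).standard_normal(fs)   # stand-in for noise
mixed = mix_at_snr(speech, masker, snr_db=-8.0)
```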

20.
J Acoust Soc Am ; 136(5): 2827-38, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25373982

ABSTRACT

Both long-term native language experience and immediate linguistic expectations can affect listeners' use of acoustic information when making a phonetic decision. In this study, a Garner selective attention task was used to investigate differences in attention to consonants and tones by American English-speaking listeners (N = 20) and Mandarin Chinese-speaking listeners hearing speech in either American English (N = 17) or Mandarin Chinese (N = 20). To minimize the effects of lexical differences and differences in the linguistic status of pitch across the two languages, stimuli and response conditions were selected such that all tokens constitute legitimate words in both languages and all responses required listeners to make decisions that were linguistically meaningful in their native language. Results showed that regardless of ambient language, Chinese listeners processed consonant and tone in a combined manner, consistent with previous research. In contrast, English listeners treated tones and consonants as perceptually separable. Results are discussed in terms of the role of sub-phonemic differences in acoustic cues across language, and the linguistic status of consonants and pitch contours in the two languages.


Subject(s)
Attention , Language , Phonetics , Speech Perception/physiology , Adolescent , Adult , China/ethnology , Discrimination, Psychological , Female , Humans , Learning , Male , Multilingualism , Pitch Discrimination/physiology , Pitch Perception/physiology , Psychomotor Performance , Time Factors , United States/ethnology , Young Adult