Results 1 - 8 of 8
1.
Cortex ; 178: 213-222, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-39024939

ABSTRACT

Experiences with sound that make strong demands on the precision of perception, such as musical training and experience speaking a tone language, can enhance auditory neural encoding. Are high demands on the precision of perception necessary for training to drive auditory neural plasticity? Voice actors are an ideal subject population for answering this question. Voice acting requires exaggerating prosodic cues to convey emotion, character, and linguistic structure, drawing upon attention to sound, memory for sound features, and accurate sound production, but not fine perceptual precision. Here we assessed neural encoding of pitch using the frequency-following response (FFR), as well as prosody, music, and sound perception, in voice actors and a matched group of non-actors. We find that the consistency of neural sound encoding, prosody perception, and musical phrase perception are all enhanced in voice actors, suggesting that a range of neural and behavioural auditory processing enhancements can result from training which lacks fine perceptual precision. However, fine discrimination was not enhanced in voice actors but was linked to degree of musical experience, suggesting that low-level auditory processing can only be enhanced by demanding perceptual training. These findings suggest that training which taxes attention, memory, and production but is not perceptually taxing may be a way to boost neural encoding of sound and auditory pattern detection in individuals with poor auditory skills.
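The abstract does not say how "consistency of neural sound encoding" was computed; in the FFR literature it is commonly estimated as the correlation between the averages of two random halves of trials. A minimal sketch on synthetic data (the function name and trial array are illustrative, not from the paper):

```python
import numpy as np

def ffr_consistency(trials, n_splits=100, seed=None):
    """Estimate response consistency as the mean correlation between
    averages of two random halves of single-trial FFRs.

    trials: (n_trials, n_samples) array of single-trial responses.
    """
    rng = np.random.default_rng(seed)
    n_trials = trials.shape[0]
    rs = []
    for _ in range(n_splits):
        idx = rng.permutation(n_trials)
        half1 = trials[idx[: n_trials // 2]].mean(axis=0)
        half2 = trials[idx[n_trials // 2 :]].mean(axis=0)
        rs.append(np.corrcoef(half1, half2)[0, 1])
    return float(np.mean(rs))

# Synthetic example: a 100 Hz pitch-following component buried in noise
fs, dur, f0 = 16000, 0.2, 100.0
t = np.arange(int(fs * dur)) / fs
signal = np.sin(2 * np.pi * f0 * t)
trials = signal + 2.0 * np.random.default_rng(0).standard_normal((200, t.size))
print(ffr_consistency(trials, seed=0))  # closer to 1 = more consistent encoding
```

Averaging each half suppresses trial-by-trial noise, so the split-half correlation isolates the stimulus-locked component that the two halves share.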

2.
Cognition ; 246: 105757, 2024 05.
Article in English | MEDLINE | ID: mdl-38442588

ABSTRACT

One of the most important auditory categorization tasks a listener faces is determining a sound's domain, a process which is a prerequisite for successful within-domain categorization tasks such as recognizing different speech sounds or musical tones. Speech and song are universal in human cultures: how do listeners categorize a sequence of words as belonging to one or the other of these domains? There is growing interest in the acoustic cues that distinguish speech and song, but it remains unclear whether there are cross-cultural differences in the evidence upon which listeners rely when making this fundamental perceptual categorization. Here we use the speech-to-song illusion, in which some spoken phrases perceptually transform into song when repeated, to investigate cues to this domain-level categorization in native speakers of tone languages (Mandarin and Cantonese speakers residing in the United Kingdom and China) and in native speakers of a non-tone language (English). We find that native tone-language and non-tone-language listeners largely agree on which spoken phrases sound like song after repetition, and we also find that the strength of this transformation is not significantly different across language backgrounds or countries of residence. Furthermore, we find a striking similarity in the cues upon which listeners rely when perceiving word sequences as singing versus speech, including small pitch intervals, flat within-syllable pitch contours, and steady beats. These findings support the view that there are certain widespread cross-cultural similarities in the mechanisms by which listeners judge if a word sequence is spoken or sung.


Subject(s)
Speech Perception , Speech , Humans , Cues , Language , Phonetics , Pitch Perception
3.
J Exp Psychol Hum Percept Perform ; 50(1): 119-138, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38236259

ABSTRACT

A growing amount of attention has been given to examining the domain-general auditory processing of individual acoustic dimensions as a key driving force for adult L2 acquisition. Whereas auditory processing has traditionally been conceptualized as a bottom-up and encapsulated phenomenon, the interaction model (Kraus & Banai, 2007) proposes auditory processing as a set of perceptual, cognitive, and motoric abilities: the perception of acoustic details (acuity), the selection of relevant over irrelevant dimensions (attention), and the conversion of audio input into motor action (integration). To test this hypothesis, we examined the relationship between each component and the L2 outcomes of 102 adult Chinese speakers of English who varied in age, experience, and working memory. According to the results of the statistical analyses, (a) the test scores tapped into essentially distinct components of auditory processing (acuity, attention, and integration), and (b) these components played an equal role in explaining various aspects of L2 learning (phonology, morphosyntax) with large effects, even after biographical background and working memory were controlled for. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Language Development , Language , Adult , Humans , Learning , Auditory Perception , Cognition
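The reported analysis (auditory components explaining L2 outcomes after biographical background and working memory were controlled for) follows the logic of hierarchical regression: compare R² with and without the predictors of interest. A rough sketch on simulated data, with all variable names and effect sizes hypothetical:

```python
import numpy as np

def r_squared(X, y):
    """R² of an ordinary least-squares fit (an intercept is added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return 1.0 - (y - X1 @ beta).var() / y.var()

rng = np.random.default_rng(1)
n = 102  # sample size reported in the abstract
age = rng.normal(30, 5, n)
working_memory = rng.normal(0, 1, n)
acuity, attention, integration = rng.normal(0, 1, (3, n))

# Hypothetical L2 outcome driven mainly by the three auditory components
l2_score = (0.4 * acuity + 0.4 * attention + 0.4 * integration
            + 0.1 * working_memory + rng.normal(0, 1, n))

controls = np.column_stack([age, working_memory])
full = np.column_stack([age, working_memory, acuity, attention, integration])

delta_r2 = r_squared(full, l2_score) - r_squared(controls, l2_score)
print(f"R2 added by auditory components: {delta_r2:.3f}")
```

The increment in R² from the control-only model to the full model is the variance attributable to the auditory components over and above the covariates.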
4.
J Exp Psychol Hum Percept Perform ; 48(12): 1410-1426, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36442040

ABSTRACT

Recent evidence suggests that domain-general auditory processing (sensitivity to the spectro-temporal characteristics of sounds) helps determine individual differences in L2 speech acquisition outcomes. The current study examined the extent to which focused training could enhance auditory processing ability, and whether this had a concomitant impact on L2 vowel proficiency. A total of 98 Japanese learners of English were divided into four groups: (1) Auditory-Only (F2 discrimination training); (2) Phonetic-Only (English [æ] and [ʌ] identification training); (3) Auditory-Phonetic (a combination of auditory and phonetic training); and (4) Control training. The results showed that the Phonetic-Only group improved only their English [æ] and [ʌ] identification, while the Auditory-Only and Auditory-Phonetic groups enhanced both auditory and phonetic skills. The results suggest that a learner's auditory acuity to key, domain-general acoustic cues (F2 = 1200-1600 Hz) promotes the acquisition of knowledge about speech categories (English [æ] vs. [ʌ]). (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Language , Speech , Humans , Auditory Perception , Phonetics , Discrimination, Psychological
5.
Cognition ; 229: 105236, 2022 12.
Article in English | MEDLINE | ID: mdl-36027789

ABSTRACT

Growing evidence suggests a broad relationship between individual differences in auditory processing ability and the rate and ultimate attainment of language acquisition throughout the lifespan, including post-pubertal second language (L2) speech learning. However, little is known about how the precision of processing of specific auditory dimensions relates to the acquisition of specific L2 segmental contrasts. In the context of 100 late Japanese-English bilinguals with diverse profiles of classroom and immersion experience, the current study set out to investigate the link between the perception of several auditory dimensions (F3 frequency, F2 frequency, and duration) in non-verbal sounds and English [r]-[l] perception and production proficiency. Whereas participants' biographical factors (the presence/absence of immersion) accounted for a large amount of variance in the success of learning this contrast, the outcomes were also tied to their acuity to the most reliable, new auditory cues (F3 variation) and the less reliable but already-familiar cues (F2 variation). This finding suggests that individuals can vary in terms of how they perceive, utilize, and make the most of information conveyed by specific acoustic dimensions. When perceiving more naturalistic spoken input, where speech contrasts can be distinguished via a combination of numerous cues, some can attain a high level of L2 speech proficiency by using nativelike and/or non-nativelike strategies in a complementary fashion.


Subject(s)
Multilingualism , Speech Perception , Auditory Perception , Humans , Language , Phonetics
6.
Neuroimage ; 252: 119024, 2022 05 15.
Article in English | MEDLINE | ID: mdl-35231629

ABSTRACT

To make sense of complex soundscapes, listeners must select and attend to task-relevant streams while ignoring uninformative sounds. One possible neural mechanism underlying this process is alignment of endogenous oscillations with the temporal structure of the target sound stream. Such a mechanism has been suggested to mediate attentional modulation of neural phase-locking to the rhythms of attended sounds. However, such modulations are compatible with an alternate framework, where attention acts as a filter that enhances exogenously-driven neural auditory responses. Here we attempted to test several predictions arising from the oscillatory account by playing two tone streams varying across conditions in tone duration and presentation rate; participants attended to one stream or listened passively. Attentional modulation of the evoked waveform was roughly sinusoidal and scaled with rate, while the passive response did not. However, there was only limited evidence for continuation of modulations through the silence between sequences. These results suggest that attentionally-driven changes in phase alignment reflect synchronization of slow endogenous activity with the temporal structure of attended stimuli.


Subject(s)
Auditory Cortex , Electroencephalography , Acoustic Stimulation/methods , Auditory Cortex/physiology , Auditory Perception/physiology , Caffeine , Electroencephalography/methods , Humans , Sound
7.
J Acoust Soc Am ; 150(6): 4474, 2021 12.
Article in English | MEDLINE | ID: mdl-34972283

ABSTRACT

The unprecedented lockdowns resulting from COVID-19 in spring 2020 triggered changes in human activities in public spaces. A predictive modeling approach was developed to characterize the changes in the perception of the sound environment when people could not be surveyed. Building on a database of soundscape questionnaires (N = 1,136) and binaural recordings (N = 687) collected in 13 locations across London and Venice during 2019, new recordings (N = 571) were made in the same locations during the 2020 lockdowns. Using these 30-s-long recordings, linear multilevel models were developed to predict the soundscape pleasantness (R² = 0.85) and eventfulness (R² = 0.715) during the lockdown and compare the changes for each location. The performance was above average for comparable models. An online listening study also investigated the change in the sound sources within the spaces. Results indicate (1) human sounds were less dominant and natural sounds more dominant across all locations; (2) contextual information is important for predicting pleasantness but not for eventfulness; (3) perception shifted toward less eventful soundscapes and to more pleasant soundscapes for previously traffic-dominated locations but not for human- and natural-dominated locations. This study demonstrates the usefulness of predictive modeling and the importance of considering contextual information when discussing the impact of sound level reductions on the soundscape.


Subject(s)
Acoustics , COVID-19 , Communicable Disease Control , Humans , SARS-CoV-2 , Sound
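The abstract's linear multilevel models treat location as a grouping factor. A simplified numpy-only sketch below swaps the random location intercepts for fixed per-location dummy intercepts and uses a single made-up acoustic predictor, so it illustrates the modelling logic rather than the paper's actual feature set:

```python
import numpy as np

rng = np.random.default_rng(0)
n_locations, n_per = 13, 44  # 13 locations, ~572 recordings (cf. 571 in the study)
loc = np.repeat(np.arange(n_locations), n_per)
loudness = rng.normal(0, 1, loc.size)         # hypothetical acoustic feature
loc_offset = rng.normal(0, 0.5, n_locations)  # location-level variation
pleasantness = loc_offset[loc] - 0.6 * loudness + rng.normal(0, 0.4, loc.size)

# Design matrix: one intercept per location plus the acoustic predictor
X = np.column_stack([np.eye(n_locations)[loc], loudness])
beta, *_ = np.linalg.lstsq(X, pleasantness, rcond=None)
pred = X @ beta
r2 = 1 - ((pleasantness - pred) ** 2).sum() / \
         ((pleasantness - pleasantness.mean()) ** 2).sum()
print(f"R2 = {r2:.2f}, loudness slope = {beta[-1]:.2f}")
```

A true multilevel model would instead shrink the per-location intercepts toward a common mean (e.g. `statsmodels` `MixedLM`), which matters most when some locations have few recordings.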
8.
Brain Lang ; 192: 15-24, 2019 05.
Article in English | MEDLINE | ID: mdl-30831377

ABSTRACT

There is a great deal of individual variability in outcome in second language learning, the sources of which are still poorly understood. We hypothesized that individual differences in auditory processing may account for some variability in second language learning. We tested this hypothesis by examining psychoacoustic thresholds, auditory-motor temporal integration, and auditory neural encoding in adult native Polish speakers living in the UK. We found that precise English vowel perception and accurate English grammatical judgment were linked to lower psychoacoustic thresholds, better auditory-motor integration, and more consistent frequency-following responses to sound. Psychoacoustic thresholds and neural sound encoding explained independent variance in vowel perception, suggesting that they are dissociable indexes of sound processing. These results suggest that individual differences in second language acquisition success stem at least in part from domain-general difficulties with auditory perception, and that auditory training could help facilitate language learning in some individuals with specific auditory impairments.


Subject(s)
Brain/physiology , Learning , Multilingualism , Speech Perception , Adult , Comprehension , Female , Humans , Male