1.
Cogn Sci ; 47(7): e13314, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37462237

ABSTRACT

In the first year of life, infants' speech perception becomes attuned to the sounds of their native language. This process of early phonetic learning has traditionally been framed as phonetic category acquisition. However, recent studies have hypothesized that the attunement may instead reflect a perceptual space learning process that does not involve categories. In this article, we explore the idea of perceptual space learning by implementing five different perceptual space learning models and testing them on three phonetic contrasts that have been tested in the infant speech perception literature. We reproduce and extend previous results showing that a perceptual space learning model that uses only distributional information about the acoustics of short time slices of speech can account for at least some crosslinguistic differences in infant perception. Moreover, we find that a second perceptual space learning model, which benefits from word-level guidance, performs equally well in capturing crosslinguistic differences in infant speech perception. These results provide support for the general idea of perceptual space learning as a theory of early phonetic learning but suggest that more fine-grained data are needed to distinguish between different formal accounts. Finally, we provide testable empirical predictions of the two most promising models and show that these are not identical, making it possible to independently evaluate each model in experiments with infants in future research.
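
The abstract does not specify implementation details, so the following is only a minimal sketch of the general idea of a category-free perceptual space: fit a representation to unlabeled acoustic frames, then measure how far apart two phones of a contrast lie in that space. The use of MFCC-like frames, a PCA space, and mean-frame distances are illustrative assumptions, not the models evaluated in the article.

    # Hypothetical sketch: learn a "perceptual space" without phonetic categories
    # by fitting PCA to unlabeled acoustic frames, then compare two phones of a
    # contrast by their distance in that space. Features and model are assumptions.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)

    # Placeholder "speech": random 13-dim MFCC-like frames standing in for a corpus.
    frames = rng.normal(size=(10000, 13))

    # Learn the perceptual space from unlabeled frames (no category labels involved).
    space = PCA(n_components=5).fit(frames)

    def contrast_distance(frames_a, frames_b):
        """Euclidean distance between two phones' mean frames in the learned space."""
        a = space.transform(frames_a).mean(axis=0)
        b = space.transform(frames_b).mean(axis=0)
        return float(np.linalg.norm(a - b))

    # Frames drawn from two hypothetical phones of a contrast (e.g., tokens of /r/ and /l/).
    phone_r = rng.normal(loc=0.5, size=(200, 13))
    phone_l = rng.normal(loc=-0.5, size=(200, 13))
    print(contrast_distance(phone_r, phone_l))  # larger distance = easier discrimination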


Subject(s)
Language Development , Speech Perception , Humans , Infant , Phonetics , Language , Spatial Learning , Computer Simulation
2.
Proc Natl Acad Sci U S A ; 118(7), 2021 Feb 9.
Article in English | MEDLINE | ID: mdl-33510040

ABSTRACT

Before they even speak, infants become attuned to the sounds of the language(s) they hear, processing native phonetic contrasts more easily than nonnative ones. For example, between 6 to 8 mo and 10 to 12 mo, infants learning American English get better at distinguishing English [ɹ] and [l], as in "rock" vs. "lock," relative to infants learning Japanese. Influential accounts of this early phonetic learning phenomenon initially proposed that infants group sounds into native vowel- and consonant-like phonetic categories, such as [ɹ] and [l] in English, through a statistical clustering mechanism dubbed "distributional learning." The feasibility of this mechanism for learning phonetic categories has been challenged, however. Here, we demonstrate that a distributional learning algorithm operating on naturalistic speech can predict early phonetic learning, as observed in Japanese and American English infants, suggesting that infants might learn through distributional learning after all. We further show, however, that, contrary to the original distributional learning proposal, our model learns units too brief and too fine-grained acoustically to correspond to phonetic categories. This challenges the influential idea that what infants learn are phonetic categories. More broadly, our work introduces a mechanism-driven approach to the study of early phonetic learning, together with a quantitative modeling framework that can handle realistic input. This allows accounts of early phonetic learning to be linked to concrete, systematic predictions regarding infants' attunement.
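
The article's own algorithm is not given here; as a loose illustration of the "distributional learning" idea (statistical clustering of unlabeled acoustic input), the sketch below fits a Gaussian mixture to MFCC-like frames. The feature choice, number of components, and frame duration are assumptions made for the example only.

    # Hypothetical illustration of distributional learning: cluster unlabeled
    # acoustic frames with a Gaussian mixture; no phonetic labels are used.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    frames = rng.normal(size=(5000, 13))       # stand-in for short-time MFCC frames

    gmm = GaussianMixture(n_components=50, covariance_type="diag", random_state=0)
    gmm.fit(frames)

    units = gmm.predict(frames)                # each frame assigned to a learned unit
    print(np.bincount(units).max())            # usage count of the most frequent unit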


Subject(s)
Language Development , Models, Neurological , Natural Language Processing , Phonetics , Humans , Speech Perception , Speech Recognition Software
3.
Open Mind (Camb) ; 5: 113-131, 2021.
Article in English | MEDLINE | ID: mdl-35024527

ABSTRACT

Early changes in infants' ability to perceive native and nonnative speech sound contrasts are typically attributed to their developing knowledge of phonetic categories. We critically examine this hypothesis and argue that there is little direct evidence of category knowledge in infancy. We then propose an alternative account in which infants' perception changes because they are learning a perceptual space that is appropriate to represent speech, without yet carving up that space into phonetic categories. If correct, this new account has substantial implications for understanding early language development.

4.
J Acoust Soc Am ; 143(5): EL372, 2018 May.
Article in English | MEDLINE | ID: mdl-29857692

ABSTRACT

Theories of cross-linguistic phonetic category perception posit that listeners perceive foreign sounds by mapping them onto their native phonetic categories, but, until now, no way to effectively implement this mapping has been proposed. In this paper, Automatic Speech Recognition systems trained on continuous speech corpora are used to provide a fully specified mapping between foreign sounds and native categories. The authors show how the machine ABX evaluation method can be used to compare predictions from the resulting quantitative models with empirically attested effects in human cross-linguistic phonetic category perception.
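
For readers unfamiliar with machine ABX evaluation, the sketch below shows the core scoring rule: given a probe token X and tokens A and B, where A and X belong to the same category, X counts as correct when it lies closer to A than to B in the model's representation space. Real evaluations typically use DTW over frame sequences and specific distance measures; fixed-size vectors with Euclidean distance are a simplification assumed only for this example.

    # Hypothetical sketch of a machine ABX discrimination score.
    import numpy as np

    def abx_score(cat1_tokens, cat2_tokens):
        """Fraction of (A, B, X) triplets with d(X, A) < d(X, B); A and X from cat1."""
        correct, total = 0, 0
        for i, x in enumerate(cat1_tokens):
            for j, a in enumerate(cat1_tokens):
                if i == j:
                    continue
                for b in cat2_tokens:
                    correct += np.linalg.norm(x - a) < np.linalg.norm(x - b)
                    total += 1
        return correct / total

    rng = np.random.default_rng(0)
    r_tokens = rng.normal(loc=0.5, size=(10, 5))   # model representations of /r/ tokens
    l_tokens = rng.normal(loc=-0.5, size=(10, 5))  # model representations of /l/ tokens
    print(abx_score(r_tokens, l_tokens))           # 0.5 = chance, 1.0 = perfect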


Subject(s)
Language , Neural Networks, Computer , Phonetics , Speech Perception , Speech Recognition Software/classification , Humans , Speech Perception/physiology
5.
Psychol Sci ; 26(3): 341-7, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25630443

ABSTRACT

Infants learn language at an incredible speed, and one of the first steps in this voyage is learning the basic sound units of their native languages. It is widely thought that caregivers facilitate this task by hyperarticulating when speaking to their infants. Using state-of-the-art speech technology, we addressed this key theoretical question: Are sound categories clearer in infant-directed speech than in adult-directed speech? A comprehensive examination of sound contrasts in a large corpus of recorded, spontaneous Japanese speech demonstrates that there is a small but significant tendency for contrasts in infant-directed speech to be less clear than those in adult-directed speech. This finding runs contrary to the idea that caregivers actively enhance phonetic categories in infant-directed speech. These results suggest that to be plausible, theories of infants' language acquisition must posit an ability to learn from noisy data.
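
The measure used in the article is not reproduced here; purely as an illustration of how one might compare contrast "clarity" across registers, the sketch below computes a simple separability statistic (between-category distance over pooled within-category spread) separately for infant-directed and adult-directed tokens. The statistic, features, and data are all assumptions for the example.

    # Hypothetical sketch: compare how separable a two-vowel contrast is in
    # infant-directed speech (IDS) vs adult-directed speech (ADS).
    import numpy as np

    def separability(tokens_a, tokens_b):
        """Between-category distance divided by pooled within-category spread."""
        between = np.linalg.norm(tokens_a.mean(axis=0) - tokens_b.mean(axis=0))
        within = 0.5 * (tokens_a.std(axis=0).mean() + tokens_b.std(axis=0).mean())
        return between / within

    rng = np.random.default_rng(0)
    # Stand-in acoustic measurements (e.g., formant pairs) for two contrasting vowels.
    ids_a, ids_b = rng.normal(0.4, 1.1, (100, 2)), rng.normal(-0.4, 1.1, (100, 2))
    ads_a, ads_b = rng.normal(0.5, 1.0, (100, 2)), rng.normal(-0.5, 1.0, (100, 2))

    print("IDS:", separability(ids_a, ids_b))
    print("ADS:", separability(ads_a, ads_b))   # larger value = clearer contrast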


Subject(s)
Mother-Child Relations , Speech Perception , Female , Humans , Infant , Japan , Language Development , Mothers , Phonetics , Speech Acoustics