Results 1 - 20 of 20
1.
Brain Lang ; 242: 105279, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37236016

ABSTRACT

Though perceptual narrowing has been widely recognized as a process guiding cognitive development and category learning in infancy and early childhood, its neural mechanisms and cortical-level characteristics remain unclear. Using an electroencephalography (EEG) abstract mismatch negativity (MMN) paradigm, Australian infants' neural sensitivity to (native) English and (non-native) Nuu-Chah-Nulth speech contrasts was examined in a cross-sectional design at the onset (5-6 months) and offset (11-12 months) of perceptual narrowing. Immature mismatch responses (MMR) were observed among younger infants for both contrasts, while older infants showed an MMR to the non-native contrast and both an MMR and an MMN to the native contrast. Sensitivity to the Nuu-Chah-Nulth contrast at the offset of perceptual narrowing was thus retained but remained immature. The findings conform to perceptual assimilation theories, reflecting plasticity in early speech perception and development. Compared to behavioural paradigms, neural examination effectively reveals experience-induced differences in the processing of subtle contrasts at the offset of perceptual narrowing.


Subject(s)
Speech Perception , Speech , Humans , Infant , Child, Preschool , Language Development , Cross-Sectional Studies , Australia , Electroencephalography , Speech Perception/physiology
2.
Front Psychol ; 13: 906848, 2022.
Article in English | MEDLINE | ID: mdl-35719494

ABSTRACT

Fundamental frequency (f0), perceived as pitch, is the first and arguably most salient auditory component humans are exposed to from the beginning of life. It carries multiple linguistic (e.g., word meaning) and paralinguistic (e.g., speaker emotion) functions in speech and communication. The mappings between these functions and f0 features vary within a language and differ cross-linguistically. For instance, a rising pitch can be perceived as a question in English but as a lexical tone in Mandarin. Such variations mean that infants must learn the specific mappings based on their respective linguistic and social environments. To date, canonical theoretical frameworks and most empirical studies have not considered the multi-functionality of f0, typically focusing on individual functions instead. More importantly, despite infants' eventual mastery of f0 in communication, it is unclear how they learn to decompose and recognize these overlapping functions carried by f0. In this paper, we review the symbioses and synergies of the lexical, intonational, and emotional functions that can be carried by f0 and are acquired throughout infancy. On the basis of our review, we put forward the Learnability Hypothesis that infants decompose and acquire multiple f0 functions through native/environmental experiences. Under this hypothesis, we propose representative cases such as the synergy scenario, where infants use visual cues to disambiguate and decompose the different f0 functions. Further, viable ways to test the scenarios derived from this hypothesis are suggested across auditory and visual modalities. Discovering how infants learn to master the diverse functions carried by f0 can increase our understanding of linguistic systems, auditory processing, and communication functions.

3.
J Exp Psychol Learn Mem Cogn ; 48(10): 1542-1558, 2022 Oct.
Article in English | MEDLINE | ID: mdl-34370504

ABSTRACT

Auditory speech appears to be linked to visual articulatory gestures and orthography through different mechanisms. Yet, both types of visual information have a strong influence on speech processing. The present study directly compared their contributions to speech processing using a novel word learning paradigm. Native speakers of French, who were familiar with English, learned minimal pairs of novel English words containing the English /θ/-/f/ phonemic contrast under one of three exposure conditions: (a) the auditory forms of novel words alone, (b) the auditory forms associated with articulatory gestures, or (c) the auditory forms associated with orthography. The benefits of the three methods were compared during training and at two posttraining time points where the visual cues were no longer available. We also assessed participants' auditory-only discrimination of the /θ/-/f/ contrast pretraining and posttraining. During training, the visual cues facilitated novel word learning beyond the benefit of the auditory input alone. However, these additional benefits did not persist when participants' discrimination and novel word learning performance were assessed immediately after training. Most interestingly, after a night's sleep, participants who were exposed to orthography during training showed significant improvement in both discrimination and novel word learning compared to the previous day. The findings are discussed in terms of online versus residual impacts of articulatory gestures and orthography on speech processing. While both visual cues are beneficial when they are simultaneously presented with speech, only orthography shows residual impacts leading to a sleep-dependent enhancement of lexical knowledge through memory consolidation and retuning of the second language /θ/-/f/ contrast.


Subject(s)
Speech Perception , Speech , Humans , Gestures , Verbal Learning , Language , Cues
4.
J Acoust Soc Am ; 147(4): 2511, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32359304

ABSTRACT

Vowel contrasts tend to be perceived independently of pitch modulation, but it is not known whether pitch can be perceived independently of vowel quality. This issue was investigated in the context of a lexical tone language, Mandarin Chinese, using a printed word version of the visual world paradigm. Eye movements to four printed words were tracked while listeners heard target words that differed from competitors only in tone (test condition) or also in onset consonant and vowel (control condition). Results showed that the timecourse of tone recognition is influenced by vowel quality for high, low, and rising tones. For these tones, the time for the eyes to converge on the target word in the test condition (relative to control) depended on the vowel with which the tone was coarticulated: /a/ and /i/ supported faster recognition of high, low, and rising tones than /u/. These patterns are consistent with the hypothesis that tone-conditioned variation in the articulation of /a/ and /i/ facilitates rapid recognition of tones. The one exception to this general pattern, the absence of a vowel-quality effect on falling tone perception, may be due to fortuitous amplification of the harmonics relevant for pitch perception in this context.


Subject(s)
Phonetics , Speech Perception , Acoustic Stimulation , Language , Pitch Perception , Recognition, Psychology
5.
J Acoust Soc Am ; 139(1): EL1-5, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26827050

ABSTRACT

This study examined three ways that perception of non-native phones may be uncategorized relative to native (L1) categories: focalized (predominantly similar to a single L1 category), clustered (similar to two or more L1 categories), and dispersed (not similar to any L1 categories). In an online study, Egyptian Arabic speakers residing in Egypt categorized and rated all Australian English vowels. Evidence was found to support focalized, clustered, and dispersed uncategorized assimilations. Second-language (L2) category formation for uncategorized assimilations is predicted to depend upon the degree of perceptual overlap between the sets of L1 categories listeners use in assimilating each phone within an L2 contrast.

6.
Ecol Psychol ; 28(4): 216-261, 2016 Oct 01.
Article in English | MEDLINE | ID: mdl-28367052

ABSTRACT

To become language users, infants must embrace the integrality of speech perception and production. That they do so, and quite rapidly, is implied by the native-language attunement they achieve in each domain by 6-12 months. Yet research has most often addressed one or the other domain, rarely how they interrelate. Moreover, mainstream assumptions that perception relies on acoustic patterns whereas production involves motor patterns entail that the infant would have to translate incommensurable information to grasp the perception-production relationship. We posit the more parsimonious view that both domains depend on commensurate articulatory information. Our proposed framework combines principles of the Perceptual Assimilation Model (PAM) and Articulatory Phonology (AP). According to PAM, infants attune to articulatory information in native speech and detect similarities of nonnative phones to native articulatory patterns. The AP premise that gestures of the speech organs are the basic elements of phonology offers articulatory similarity metrics while satisfying the requirement that phonological information be discrete and contrastive: (a) distinct articulatory organs produce vocal tract constrictions and (b) phonological contrasts recruit different articulators and/or constrictions of a given articulator that differ in degree or location. Various lines of research suggest young children perceive articulatory information, which guides their productions: discrimination of between- versus within-organ contrasts, simulations of attunement to language-specific articulatory distributions, multimodal speech perception, oral/vocal imitation, and perceptual effects of articulator activation or suppression. We conclude that articulatory gesture information serves as the foundation for developmental integrality of speech perception and production.

7.
Phonetica ; 71(1): 4-21, 2014.
Article in English | MEDLINE | ID: mdl-24923313

ABSTRACT

Research on language-specific tuning in speech perception has focused mainly on consonants, while that on non-native vowel perception has failed to address whether the same principles apply. Therefore, non-native vowel perception was investigated here in light of relevant theoretical models: the Perceptual Assimilation Model (PAM) and the Natural Referent Vowel (NRV) framework. American-English speakers completed discrimination and native language assimilation (categorization and goodness rating) tests on six nonnative vowel contrasts. Discrimination was consistent with PAM assimilation types, but asymmetries predicted by NRV were only observed for single-category assimilations, suggesting that perceptual assimilation might modulate the effects of vowel peripherality on non-native vowel perception.


Subject(s)
Language , Phonetics , Speech Perception/physiology , Verbal Learning , Female , Humans , Male , Multilingualism , Sensitivity and Specificity , Young Adult
8.
PLoS One ; 9(1): e83546, 2014.
Article in English | MEDLINE | ID: mdl-24421892

ABSTRACT

Past research has shown that English learners begin segmenting words from speech by 7.5 months of age. However, more recent research has begun to show that, in some situations, infants may exhibit rudimentary segmentation capabilities at an earlier age. Here, we report on four perceptual experiments and a corpus analysis further investigating the initial emergence of segmentation capabilities. In Experiments 1 and 2, 6-month-olds were familiarized with passages containing target words located either utterance medially or at utterance edges. Only those infants familiarized with passages containing target words aligned with utterance edges exhibited evidence of segmentation. In Experiments 3 and 4, 6-month-olds recognized familiarized words when they were presented in a new acoustically distinct voice (male rather than female), but not when they were presented in a phonologically altered manner (missing the initial segment). Finally, we report corpus analyses examining how often different word types occur at utterance boundaries in different registers. Our findings suggest that edge-aligned words likely play a key role in infants' early segmentation attempts, and also converge with recent reports suggesting that 6-month-olds have already started building a rudimentary lexicon.


Subject(s)
Language Development , Semantics , Speech , Female , Humans , Infant , Male , Speech Acoustics
9.
Dev Psychobiol ; 56(2): 210-27, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24390820

ABSTRACT

The perceptual assimilation model (PAM; Best, C. T. [1995]. A direct realist view of cross-language speech perception. In W. Strange (Ed.), Speech perception and linguistic experience: Issues in cross-language research (pp. 171-204). Baltimore, MD: York Press.) accounts for developmental patterns of speech contrast discrimination by proposing that infants shift from untuned phonetic perception at 6 months to natively tuned perceptual assimilation at 11-12 months, but the model does not predict initial discrimination differences among contrasts. To address that issue, we evaluated the Articulatory Organ Hypothesis, which posits that consonants produced using different articulatory organs are initially easier to discriminate than those produced with the same articulatory organ. We tested English-learning 6- and 11-month-olds' discrimination of voiceless fricative place contrasts from Nuu-Chah-Nulth (non-native) and English (native), with one within-organ and one between-organ contrast from each language. Both native and non-native contrasts were discriminated across age, suggesting that articulatory-organ differences do not influence perception of speech contrasts by young infants. The results highlight the fact that a decline in discrimination for non-native contrasts does not always occur over age.


Subject(s)
Child Development/physiology , Discrimination, Psychological/physiology , Language Development , Speech Perception/physiology , Discrimination Learning/physiology , Female , Humans , Infant , Male
10.
J Acoust Soc Am ; 133(4): 2397-411, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23556605

ABSTRACT

Monolingual listeners are constrained by native language experience when categorizing and discriminating unfamiliar non-native contrasts. Are early bilinguals constrained in the same way by their two languages, or do they possess an advantage? Greek-English bilinguals in either Greek or English language mode were compared to monolinguals on categorization and discrimination of Ma'di stop-voicing distinctions that are non-native to both languages. As predicted, English monolinguals categorized Ma'di prevoiced plosive and implosive stops and the coronal voiceless stop as English voiced stops. The Greek monolinguals categorized the Ma'di short-lag voiceless stops as Greek voiceless stops, and the prevoiced implosive stops and the coronal prevoiced stop as Greek voiced stops. Ma'di prenasalized stops were uncategorized. Greek monolinguals discriminated the non-native voiced-voiceless contrasts very well, whereas the English monolinguals did poorly. Bilinguals were given all oral and written instructions either in English or in Greek (language mode manipulation). Each language mode subgroup categorized Ma'di stop-voicing comparably to the corresponding monolingual group. However, the bilinguals' discrimination was unaffected by language mode: both subgroups performed intermediate to the monolinguals for the prevoiced-voiceless contrast. Thus, bilinguals do not possess an advantage for unfamiliar non-native contrasts, but are nonetheless uniquely configured language users, differing from either monolingual group.


Subject(s)
Multilingualism , Phonetics , Speech Acoustics , Speech Perception , Acoustic Stimulation , Adult , Audiometry, Speech , Discrimination, Psychological , Female , Humans , Male , Sound Spectrography , Time Factors , Young Adult
11.
Child Dev ; 84(6): 2064-78, 2013.
Article in English | MEDLINE | ID: mdl-23521607

ABSTRACT

By 12 months, children grasp that a phonetic change to a word can change its identity (phonological distinctiveness). However, they must also grasp that some phonetic changes do not (phonological constancy). To test development of phonological constancy, sixteen 15-month-olds and sixteen 19-month-olds completed an eye-tracking task that tracked their gaze to named versus unnamed images for familiar words spoken in their native (Australian) and an unfamiliar non-native (Jamaican) regional accent of English. Both groups looked longer at named than unnamed images for Australian pronunciations, but only 19-month-olds did so for Jamaican pronunciations, indicating that phonological constancy emerges by 19 months. Vocabulary size predicted 15-month-olds' identifications for the Jamaican pronunciations, suggesting vocabulary growth is a viable predictor for phonological constancy development.


Subject(s)
Language Development , Phonetics , Speech Perception/physiology , Vocabulary , Acoustic Stimulation , Female , Fixation, Ocular/physiology , Humans , Infant , Male , Perceptual Masking , Psychomotor Performance/physiology , Recognition, Psychology/physiology
12.
J Phon ; 40(4): 582-594, 2012 Jul 01.
Article in English | MEDLINE | ID: mdl-22844163

ABSTRACT

How listeners categorize two phones predicts the success with which they will discriminate the given phonetic distinction. In the case of bilinguals, such perceptual patterns could reveal whether the listener's two phonological systems are integrated or separate. This is of particular interest when a given contrast is realized differently in each language, as is the case with Greek and English stop-voicing distinctions. We had Greek-English early sequential bilinguals and Greek and English monolinguals (baselines) categorize, rate, and discriminate stop-voicing contrasts in each language. All communication with each group of bilinguals occurred solely in one language mode, Greek or English. The monolingual groups showed the expected native-language constraints, each perceiving their native contrast more accurately than the opposing nonnative contrast. Bilinguals' category-goodness ratings for the same physical stimuli differed, consistent with their language mode, yet their discrimination performance was unaffected by language mode and biased toward their dominant language (English). We conclude that bilinguals integrate both languages in a common phonetic space that is swayed by their long-term dominant language environment for discrimination, but that they selectively attend to language-specific phonetic information for phonologically motivated judgments (category-goodness ratings).

13.
J Phon ; 39(4): 558-570, 2011 Oct.
Article in English | MEDLINE | ID: mdl-22787285

ABSTRACT

Speech production research has demonstrated that the first language (L1) often interferes with production in bilinguals' second language (L2), but it has been suggested that bilinguals who are L2-dominant are the most likely to suppress this L1-interference. While prolonged contextual changes in bilinguals' language use (e.g., stays overseas) are known to result in L1 and L2 phonetic shifts, code-switching provides the unique opportunity of observing the immediate phonetic effects of L1-L2 interaction. We measured the voice onset times (VOTs) of Greek-English bilinguals' productions of /b, d, p, t/ in initial and medial contexts, first in either a Greek or English unilingual mode, and in a later session when they produced the same target pseudowords as a code-switch from the opposing language. Compared to a unilingual mode, all English stops produced as code-switches from Greek, regardless of context, had more Greek-like VOTs. In contrast, Greek stops showed no shift toward English VOTs, with the exception of medial voiced stops. Under the specifically interlanguage condition of code-switching we have demonstrated a pervasive influence of the L1 even in L2-dominant individuals.

14.
Dev Sci ; 13(2): 339-45, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20136930

ABSTRACT

Past research has demonstrated that infants can rapidly extract syllable distribution information from an artificial language and use this knowledge to infer likely word boundaries in speech. However, artificial languages are extremely simplified with respect to natural language. In this study, we ask whether infants' ability to track transitional probabilities between syllables in an artificial language can scale up to the challenge of natural language. We do so by testing both 5.5- and 8-month-olds' ability to segment an artificial language containing four words of uniform length (all CVCV) or four words of varying length (two CVCV, two CVCVCV). The transitional probability cues to word boundaries were held equal across the two languages. Both age groups segmented the language containing words of uniform length, demonstrating that even 5.5-month-olds are extremely sensitive to the conditional probabilities in their environment. However, neither age group succeeded in segmenting the language containing words of varying length, despite the fact that the transitional probability cues defining word boundaries were equally strong in the two languages. We conclude that infants' statistical learning abilities may not be as robust as earlier studies have suggested.
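To make the statistic concrete, the following is a minimal sketch of the forward transitional probabilities referred to in this abstract, with word boundaries posited at TP dips. The toy stream, the threshold value, and the function names are invented for illustration; this is not the study's stimulus set or analysis code.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Forward transitional probability P(next | current) for each
    adjacent syllable pair observed in the stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment(syllables, tps, boundary_threshold=0.75):
    """Posit a word boundary wherever the TP between adjacent
    syllables dips below the threshold."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < boundary_threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Toy stream built from two uniform-length "words": within-word TPs
# are 1.0, between-word TPs are lower, so dips mark the boundaries.
stream = "pa bi ku ti bu do ti bu do pa bi ku pa bi ku ti bu do".split()
tps = transitional_probabilities(stream)
print(segment(stream, tps))
# ['pabiku', 'tibudo', 'tibudo', 'pabiku', 'pabiku', 'tibudo']
```

Note that in the study the TP cues were equated across the uniform- and varying-length languages, so the infants' failure with varying-length words reflects limits on their use of the statistic rather than on the statistic itself.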


Subject(s)
Language Development , Language , Learning , Phonetics , Speech Perception , Female , Humans , Infant , Male , Netherlands , Recognition, Psychology
15.
J Phon ; 38(4): 640-653, 2010 Oct.
Article in English | MEDLINE | ID: mdl-21743759

ABSTRACT

The way that bilinguals produce phones in each of their languages provides a window into the nature of the bilingual phonological space. For stop consonants, if early sequential bilinguals, whose languages differ in voice onset time (VOT) distinctions, produce native-like VOTs in each of their languages, it would imply that they have developed separate first and second language phones, that is, language-specific phonetic realisations for stop-voicing distinctions. Given the ambiguous phonological status of Greek voiced stops, which has been debated but not investigated experimentally, Greek-English bilinguals can offer a unique perspective on this issue. We first recorded the speech of Greek and Australian-English monolinguals to observe native VOTs in each language for /p, t, b, d/ in word-initial and word-medial (post-vocalic and post-nasal) positions. We then recorded fluent, early Greek-Australian-English bilinguals in either a Greek or English language context; all communication occurred in only one language. The bilinguals in the Greek context were indistinguishable from the Greek monolinguals, whereas the bilinguals in the English context matched the VOTs of the Australian-English monolinguals in initial position, but showed some modest differences from them in the phonetically more complex medial positions. We interpret these results as evidence that bilingual speakers possess phonetic categories for voiced versus voiceless stops that are specific to each language, but are influenced by positional context differently in their second than in their first language.

16.
J Acoust Soc Am ; 126(1): 367-76, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19603893

ABSTRACT

Two artificial-language learning experiments directly compared English, French, and Dutch listeners' use of suprasegmental cues for continuous-speech segmentation. In both experiments, listeners heard unbroken sequences of consonant-vowel syllables, composed of recurring three- and four-syllable "words." These words were demarcated by (a) no cue other than transitional probabilities induced by their recurrence, (b) a consistent left-edge cue, or (c) a consistent right-edge cue. Experiment 1 examined a vowel lengthening cue. All three listener groups benefited from this cue in right-edge position; none benefited from it in left-edge position. Experiment 2 examined a pitch-movement cue. English listeners used this cue in left-edge position, French listeners used it in right-edge position, and Dutch listeners used it in both positions. These findings are interpreted as evidence of both language-universal and language-specific effects. Final lengthening is a language-universal effect expressing a more general (non-linguistic) mechanism. Pitch movement expresses prominence which has characteristically different placements across languages: typically at right edges in French, but at left edges in English and Dutch. Finally, stress realization in English versus Dutch encourages greater attention to suprasegmental variation by Dutch than by English listeners, allowing Dutch listeners to benefit from an informative pitch-movement cue even in an uncharacteristic position.


Subject(s)
Cues , Language , Speech Perception , Speech , Female , Humans , Male , Phonetics , Probability , Speech Acoustics , Young Adult
17.
Psychol Sci ; 20(5): 539-42, 2009 May.
Article in English | MEDLINE | ID: mdl-19368700

ABSTRACT

Efficient word recognition depends on detecting critical phonetic differences among similar-sounding words, or sensitivity to phonological distinctiveness, an ability evident at 19 months of age but unreliable at 14 to 15 months of age. However, little is known about phonological constancy, the equally crucial ability to recognize a word's identity across natural phonetic variations, such as those in cross-dialect pronunciation differences. We show that 15- and 19-month-old children recognize familiar words spoken in their native dialect, but that only the older children recognize familiar words in a dissimilar nonnative dialect, providing evidence for emergence of phonological constancy by 19 months. These results are compatible with a perceptual-attunement account of developmental change in early word recognition, but not with statistical-learning or phonological accounts. Thus, the complementary skills of phonological constancy and distinctiveness both appear at around 19 months of age, together providing the child with a fundamental insight that permits rapid vocabulary growth and later reading acquisition.


Subject(s)
Language Development , Language , Phonetics , Speech Perception , Attention , Female , Fixation, Ocular , Humans , Infant , Male , Pattern Recognition, Visual , Reaction Time , Vocabulary
18.
Q J Exp Psychol (Hove) ; 59(11): 2010-31, 2006 Nov.
Article in English | MEDLINE | ID: mdl-16987786

ABSTRACT

The aim of this paper is to provide further evidence that orthography plays a central role in phonemic awareness, by demonstrating an orthographic congruency effect in phoneme deletion. In four initial phoneme deletion experiments, adult participants produced the correct response more slowly with orthographically mismatched stimulus-response pairs (e.g., worth-earth) than with matched pairs (e.g., wage-age). This orthographic effect occurred with or without specific instructions to ignore spelling and when stimuli were presented with or without the to-be-deleted sound. In a final experiment, participants made more errors on complex than simple onset items, but there was no interaction with orthographic mismatch. The repeated observation of this robust orthographic effect suggests that participants are at least aware of orthography during phonemic awareness tasks, and it supports the view that phonemic awareness is directly subserved by orthography.


Subject(s)
Phonetics , Reaction Time/physiology , Adolescent , Adult , Analysis of Variance , Awareness/physiology , Cognition/physiology , Female , Humans , Language , Male , Middle Aged , Reading , Students/psychology
19.
Behav Res Methods ; 37(1): 139-47, 2005 Feb.
Article in English | MEDLINE | ID: mdl-16097354

ABSTRACT

Many researchers rely on analogue voice keys for psycholinguistic research. However, the triggering of traditional simple threshold voice keys (STVKs) is delayed after response onset, and the delay duration may vary depending on initial phoneme type. The delayed trigger voice key (DTVK), a stand-alone electronic device that incorporates an additional minimum signal duration parameter, is described and validated in two experiments. In Experiment 1, recorded responses from a nonword naming task were presented to the DTVK and an STVK. As compared with hand-coded reaction times from visual inspection of the waveform, the DTVK was more accurate than the STVK, overall and across initial phoneme type. Rastle and Davis (2002) showed that an STVK more accurately detected an initial [s] when it was followed by a vowel than when followed by a consonant. Participants' responses from that study were presented to the DTVK in Experiment 2, and accuracy was equivalent for initial [s] in vowel and consonant contexts. Details for the construction of the DTVK are provided.
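The DTVK itself is a stand-alone electronic device, but its triggering rule (an amplitude threshold that must be exceeded for a minimum duration) is easy to state in software. The sketch below is a rough analogue under assumed parameter values; the threshold, smoothing window, and minimum duration are invented for illustration, not taken from the published hardware.

```python
import numpy as np

def voice_key_onset(signal, sample_rate, threshold=0.05,
                    min_duration_ms=10.0, smooth_ms=5.0):
    """Rectify and smooth the signal into an amplitude envelope, then
    return the time (s) at which the envelope first stays above the
    threshold for at least min_duration_ms; None if it never does."""
    win = max(1, int(sample_rate * smooth_ms / 1000))
    envelope = np.convolve(np.abs(signal), np.ones(win) / win, mode="same")
    min_samples = max(1, int(sample_rate * min_duration_ms / 1000))
    run = 0
    for i, above in enumerate(envelope >= threshold):
        run = run + 1 if above else 0
        if run >= min_samples:
            return (i - min_samples + 1) / sample_rate  # start of the run
    return None

# A 2 ms click followed by sustained voicing from 100 ms: the
# minimum-duration rule ignores the brief transient that would
# trigger a simple threshold device.
sr = 44100
t = np.arange(int(0.2 * sr)) / sr
sig = np.zeros(t.shape)
sig[int(0.020 * sr):int(0.022 * sr)] = 0.2
sig[int(0.100 * sr):] = 0.2 * np.sin(2 * np.pi * 150 * t[int(0.100 * sr):])
print(voice_key_onset(sig, sr))  # ~0.10 s
```

Setting min_duration_ms to 0 recovers simple-threshold behaviour, which is exactly what lets brief non-speech transients cause spurious triggers in an STVK.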


Subject(s)
Psycholinguistics/instrumentation , Psychomotor Performance , Reaction Time , Signal Processing, Computer-Assisted/instrumentation , Speech Recognition Software/statistics & numerical data , Equipment Design , Humans , Phonetics , Psycholinguistics/statistics & numerical data , Sound Spectrography
20.
J Exp Psychol Gen ; 133(4): 573-83, 2004 Dec.
Article in English | MEDLINE | ID: mdl-15584807

ABSTRACT

Is it possible to learn the relation between 2 nonadjacent events? M. Peña, L. L. Bonatti, M. Nespor, and J. Mehler (2002) claimed this to be possible, but only in conditions suggesting the involvement of algebraic-like computations. The present article reports simulation studies and experimental data showing that the observations on which Peña et al. grounded their reasoning were flawed by deep methodological inadequacies. When the invalid data are set aside, the available evidence fits exactly with the predictions of a theory relying on ubiquitous associative mechanisms. Because nonadjacent dependencies are frequent in natural language, this reappraisal has far-reaching implications for the current debate on the need for rule-based computations in human adaptation to complex structures.
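For readers unfamiliar with the structure at issue, a nonadjacent dependency of the kind Peña et al. studied pairs the first and third syllables of a word across a variable middle syllable. A toy illustration follows; the syllables are invented, not the original materials.

```python
from collections import Counter
from itertools import product

# Each "word" is A_i X C_i: the first syllable predicts the third
# regardless of the variable middle syllable (the nonadjacent dependency).
a_to_c = {"pu": "ki", "be": "ga", "ta": "du"}
middles = ["ra", "li", "fo"]
words = [a + x + c for (a, c), x in product(a_to_c.items(), middles)]

# The nonadjacent statistic P(third | first) is perfectly predictive.
pairs = Counter((w[0:2], w[4:6]) for w in words)
firsts = Counter(w[0:2] for w in words)
for (a, c), n in sorted(pairs.items()):
    print(f"P({c} | {a} _ ) = {n / firsts[a]:.2f}")  # all 1.00
```

Adjacent transitional probabilities in such a language are flat (each first syllable is followed by every middle equally often, 1/3 in this toy), so a learner tracking only adjacent statistics gains nothing; the debate is over whether the nonadjacent statistic is learned associatively or via algebraic-like computations.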


Subject(s)
Learning , Mathematics , Association , Cognition , Humans , Phonetics , Problem Solving , Psychological Theory