Results 1 - 17 of 17
1.
Dev Psychol ; 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38976437

ABSTRACT

Infants' preference for vowel harmony (VH, a phonotactic constraint that requires vowels in a word to be featurally similar) is thought to be language-specific: Monolingual infants learning VH languages show a listening preference for VH patterns by 6 months of age, while those learning non-VH languages do not (Gonzalez-Gomez et al., 2019; Van Kampen et al., 2008). We investigated sensitivity to advanced tongue root (ATR) harmony in Akan (Kwa, Niger-Congo) in 40 six-month-old multilingual infants (21 girls) in Ghana, West Africa (an understudied population), all learning Akan and Ghanaian English, and most also learning several other understudied African languages (e.g., Ga, Ewe). We hypothesized that infants learning both ATR-harmony and non-harmony languages would demonstrate sensitivity to ATR harmony. Using the central fixation procedure, infants were presented with disyllabic nonwords that were either harmonic (e.g., puti) or nonharmonic (e.g., petɔ) based on their ATR features. Infants demonstrated sensitivity to ATR harmony with a familiarity preference, listening longer to harmonic syllable sequences than to nonharmonic ones. The relative amount of exposure to (an) ATR-harmony language(s) did not modulate the preference. These results inform our understanding of early multilingualism: they suggest that early sensitivity to VH in multilinguals may be similar to that of monolingual infants learning other types of VH, irrespective of simultaneous experience with non-VH languages. We conclude with reflections on studying infant language acquisition in multilingual Africa.
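The preference measure described above comes down to a within-infant comparison of looking times across the two trial types. Below is a minimal sketch of such a comparison in Python, using hypothetical per-infant looking times; the variable names and values are illustrative and not the authors' data or exact analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-infant mean looking times (s) for the two trial types.
n_infants = 40
harmonic = rng.normal(loc=8.5, scale=2.0, size=n_infants)
nonharmonic = rng.normal(loc=7.5, scale=2.0, size=n_infants)

# A familiarity preference shows up as longer looking to harmonic items.
diff = harmonic - nonharmonic
t, p = stats.ttest_rel(harmonic, nonharmonic)   # paired comparison
d = diff.mean() / diff.std(ddof=1)              # within-subject effect size

print(f"mean difference = {diff.mean():.2f} s, t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```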

2.
Infant Behav Dev ; 72: 101860, 2023 08.
Article in English | MEDLINE | ID: mdl-37478500

ABSTRACT

PURPOSE: Mother-infant interactions during the first year of life are crucial to healthy infant development. The infant-directed speech (IDS) used by mothers during these interactions, and specifically its pitch contours, is associated with infant language and social development. However, little research has examined pitch contours directed at infants with socio-communication and language differences, such as those displaying early signs of autism spectrum disorder (autism). This study explored the association between infant autism signs and the pitch contours used by mothers with their 12-month-old infants. METHOD: Mother-infant dyads (n = 109) were recruited from the University of Newcastle BabyLab. Each dyad completed a 15-min interaction, from which a total of 36,128 pitch contours were measured and correlated with infant autism signs. Infant autism signs were assessed via parent report (First Year Inventory; Reznick et al., 2007). A subset of high-risk infants (admitted to a neonatal intensive care unit, n = 29) also received an observation-based assessment (Autism Detection in Early Childhood; Young & Nah, 2016). RESULTS: Mothers used fewer sinusoidal contours when they rated their infant as displaying more autism signs (rs = -.30, p = .004) and more autism-related sensory regulation issues (rs = -.31, p = .001). Mothers used fewer flat contours if their infant displayed more researcher-rated autism signs (rs = -.39, p = .04). CONCLUSIONS: This study provides early evidence that maternal pitch contours in IDS are related to early autism signs in infancy. If these findings are replicated in follow-up studies in which infants are followed to diagnosis, maternal IDS may become an important element of early intervention protocols that focus on communication for infants at risk for autism.
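The reported associations are rank correlations between per-dyad pitch-contour counts and autism-sign scores. A minimal sketch of that computation, with hypothetical data standing in for the study's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_dyads = 109

# Hypothetical per-dyad counts of sinusoidal contours and parent-reported autism-sign scores.
autism_signs = rng.integers(0, 20, size=n_dyads)
sinusoidal_counts = np.maximum(
    0, 60 - 1.5 * autism_signs + rng.normal(0, 10, size=n_dyads)
).round()

# Spearman's rank correlation (robust to non-normal count data).
rho, p = stats.spearmanr(autism_signs, sinusoidal_counts)
print(f"rs = {rho:.2f}, p = {p:.3f}")
```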


Subject(s)
Autism Spectrum Disorder , Autistic Disorder , Infant, Newborn , Female , Child , Infant , Humans , Child, Preschool , Speech , Autistic Disorder/diagnosis , Autism Spectrum Disorder/diagnosis , Mothers , Mother-Child Relations
3.
J Child Lang ; : 1-5, 2023 Mar 15.
Article in English | MEDLINE | ID: mdl-36919648

ABSTRACT

There is broad consensus (e.g., Cristia, Foushee, Aravena-Bravo, Cychosz, Scaff & Casillas, 2022; Kidd & Garcia, 2022) that diversification in language acquisition research is needed. Cristia et al. (2022) convincingly argue for studying language acquisition in rural populations and recommend combining observational and experimental approaches in doing so. In this commentary, we argue that diversification efforts must also include children growing up in non-Western urban societies, and that combining experiments with more easily obtainable data on language exposure is a solid starting point.

4.
J Acoust Soc Am ; 152(4): 2106, 2022 10.
Article in English | MEDLINE | ID: mdl-36319239

ABSTRACT

Lateral vocalisation is assumed to arise from changes in coronal articulation but is typically characterised perceptually, without linking the vocalised percept to a coronal articulation. We therefore examined how listeners' perception of coda /l/ as vocalised relates to coronal closure. Perceptual stimuli were acquired by recording laterals produced by six speakers of Australian English using electromagnetic articulography (EMA). Tongue tip closure was monitored for each lateral in the EMA data. Incomplete coronal closure was more frequent in coda /l/ than in onset /l/. Having verified that the dataset included /l/ tokens produced with incomplete coronal closure, a primary articulatory cue of vocalised /l/, we conducted a perception study in which four highly experienced auditors rated each coda /l/ token from vocalised (3) to non-vocalised (0). An ordinal mixed model showed that increased tongue tip (TT) aperture and delay correlated with a vocalised percept, but the auditors' ratings showed poor inter-rater reliability. While the correlation between increased TT aperture, delay, and vocalised percept shows that there is some reliability in auditory classification, variation between auditors suggests that listeners may be sensitive to different sets of cues associated with lateral vocalisation that are not yet fully understood.
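The rating analysis is an ordinal regression of a 0-3 vocalisation score on articulatory predictors. The sketch below fits a plain ordinal logit with statsmodels on hypothetical token data; it omits the random effects (e.g., by auditor and speaker) that the full mixed model reported in the abstract would include.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n_tokens = 200

# Hypothetical articulatory measures per coda /l/ token.
tt_aperture = rng.normal(0, 1, n_tokens)   # z-scored tongue-tip aperture
tt_delay = rng.normal(0, 1, n_tokens)      # z-scored tongue-tip gesture delay

# Hypothetical 0-3 vocalisation ratings that increase with aperture and delay.
latent = 0.8 * tt_aperture + 0.5 * tt_delay + rng.logistic(size=n_tokens)
rating = pd.Series(
    pd.cut(latent, bins=[-np.inf, -1, 0.5, 2, np.inf], labels=[0, 1, 2, 3], ordered=True)
)

exog = pd.DataFrame({"tt_aperture": tt_aperture, "tt_delay": tt_delay})
model = OrderedModel(rating, exog, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```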


Subject(s)
Language , Speech Perception , Reproducibility of Results , Australia , Tongue , Perception , Phonetics , Speech Acoustics
5.
Front Psychol ; 12: 680882, 2021.
Article in English | MEDLINE | ID: mdl-34552527

ABSTRACT

Rhyme perception is an important predictor of later literacy. Assessing rhyme abilities, however, commonly requires children to make explicit rhyme judgements on single words. Here we explored whether infants already process rhymes implicitly in natural rhyming contexts (child songs) and whether this response correlates with later vocabulary size. In a passive-listening ERP study, 10.5-month-old Dutch infants were exposed to rhyming and non-rhyming child songs. Two types of rhyme effects were analysed: (1) ERPs elicited by the first rhyme occurring in each song (rhyme sensitivity) and (2) ERPs elicited by rhymes repeating after the first rhyme in each song (rhyme repetition). Only for the latter was a tentative negativity found for rhymes from 0 to 200 ms after the onset of the rhyme word. This rhyme repetition effect correlated with productive vocabulary at 18 months, but not with any other vocabulary measure (perception at 10.5 or 18 months). While awaiting replication, the study points to precursors of phonological awareness already during infancy, observed with ecologically valid linguistic stimuli.

6.
J Acoust Soc Am ; 149(2): 1183, 2021 02.
Article in English | MEDLINE | ID: mdl-33639793

ABSTRACT

Vowel contrasts may be reduced or neutralized before coda laterals in English [Bernard (1985). The Cultivated Australian: Festschrift in Honour of Arthur Delbridge, pp. 319-332; Labov, Ash, and Boberg (2008). The Atlas of North American English, Phonetics and Sound Change (Gruyter Mouton, Berlin); Palethorpe and Cox (2003). International Seminar on Speech Production (Macquarie University, Sydney, Australia)], but the acoustic characteristics of vowel-lateral interaction in Australian English (AusE) rimes have not been systematically examined. Spectral and temporal properties of 16 pre-lateral and 16 pre-obstruent vowels produced by 29 speakers of AusE were compared. Acoustic vowel similarity in both environments was captured using random forest classification and hierarchical cluster analysis of the first three DCT coefficients of F1, F2, and F3, and duration values. Vowels preceding /l/ codas showed overall increased confusability compared to vowels preceding /d/ codas. In particular, reduced spectral contrast was found for the rime pairs /iːl-ɪl/ (feel-fill), /ʉːl-ʊl/ (fool-full), /əʉl-ɔl/ (dole-doll), and /æɔl-æl/ (howl-Hal). Potential articulatory explanations and implications for sound change are discussed.
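The acoustic feature set described here, the first three discrete cosine transform (DCT) coefficients of each formant track plus duration, can be assembled and fed to the two analyses as follows. This is a minimal sketch on simulated formant tracks, not the study's data or exact pipeline.

```python
import numpy as np
from scipy.fft import dct
from scipy.cluster.hierarchy import linkage
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

def vowel_features(f1, f2, f3, duration):
    """First three DCT coefficients of each formant track, plus duration."""
    coefs = [dct(track, norm="ortho")[:3] for track in (f1, f2, f3)]
    return np.concatenate(coefs + [[duration]])

# Hypothetical tokens: 2 vowel categories x 50 tokens, formant tracks sampled at 20 points.
X, y = [], []
for label, f1_base, f2_base in [("fill", 400, 1900), ("feel", 300, 2300)]:
    for _ in range(50):
        t = np.linspace(0, 1, 20)
        f1 = f1_base + 30 * np.sin(np.pi * t) + rng.normal(0, 20, 20)
        f2 = f2_base - 200 * t + rng.normal(0, 50, 20)
        f3 = 2800 + rng.normal(0, 60, 20)
        X.append(vowel_features(f1, f2, f3, duration=rng.normal(0.12, 0.02)))
        y.append(label)
X, y = np.array(X), np.array(y)

# Random forest classification: confusability shows up as low out-of-bag accuracy.
clf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0).fit(X, y)
print(f"out-of-bag accuracy: {clf.oob_score_:.2f}")

# Hierarchical clustering of the category means groups acoustically similar vowels.
means = np.vstack([X[y == lab].mean(axis=0) for lab in np.unique(y)])
print(linkage(means, method="ward"))
```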


Subject(s)
Phonetics , Speech Acoustics , Australia , Humans , Language , Speech Production Measurement
7.
Infancy ; 25(5): 699-718, 2020 09.
Article in English | MEDLINE | ID: mdl-32794372

ABSTRACT

Infants exploit acoustic boundaries to perceptually organize phrases in speech. This prosodic parsing ability is well attested and is a cornerstone of the development of speech perception and grammar. However, infants also receive linguistic input in child songs. This study provides evidence that infants parse songs into meaningful phrasal units and replicates previous research for speech. Six-month-old Dutch infants (n = 80) were tested in the song or speech modality with the head-turn preference procedure. First, infants were familiarized with two versions of the same word sequence: one version represented a well-formed unit, and the other contained a phrase boundary halfway through. At test, infants were presented with two passages, each containing one version of the familiarized sequence. The results for speech replicated the previously observed preference for the passage containing the well-formed sequence, but only in a more fine-grained analysis. The preference for well-formed phrases was also observed in the song modality, indicating that infants recognize phrase structure in song. There were acoustic differences between the stimuli of the current and previous studies, suggesting that infants are flexible in their processing of boundary cues while also providing a possible explanation for differences in effect sizes.


Subject(s)
Child Development/physiology , Choice Behavior/physiology , Infant Behavior/physiology , Recognition, Psychology/physiology , Singing , Speech Perception/physiology , Female , Humans , Infant , Male
8.
Infant Behav Dev ; 60: 101448, 2020 08.
Article in English | MEDLINE | ID: mdl-32593957

ABSTRACT

This paper compared three procedures common in infant speech perception research: the headturn preference procedure (HPP) and a central-fixation (CF) procedure with either automated eye-tracking (CF-ET) or manual coding (CF-M). In theory, these procedures all tap the same underlying speech perception and learning mechanisms, and the choice between them should ideally be irrelevant for unveiling infant preferences. However, the ManyBabies study (ManyBabies Consortium, 2019), a cross-laboratory collaboration on infants' preference for child-directed speech, revealed that the choice of procedure can modulate effect sizes. Here we examined whether procedure also modulates preference in paradigms that add a learning phase prior to test, using a speech segmentation paradigm. Such paradigms are particularly important for studying the learning mechanisms infants can employ for language acquisition. We carried out the same familiarization-then-test experiment with the three procedures (32 unique infants per procedure). Procedures were compared on several measures, including overall effect size, average looking time, and drop-out rate. The key observations are that the HPP yielded a larger familiarity preference, but also a larger drop-out rate. This raises questions about the generalizability of results. We argue that more collaborative research into different procedures in infant preference experiments is required in order to interpret the variation in infant preferences more accurately.
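Comparing procedures on effect size and drop-out rate amounts to a standardized familiarity preference per procedure plus a test on exclusion proportions. A minimal sketch with hypothetical looking times and drop-out counts (the numbers are illustrative, not the study's results):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def familiarity_effect(familiar, novel):
    """Paired Cohen's d for looking time to familiar vs. novel test items."""
    diff = familiar - novel
    return diff.mean() / diff.std(ddof=1)

# Hypothetical per-infant mean looking times (s) for 32 infants per procedure.
procedures = {
    "HPP":   (rng.normal(9.0, 2.5, 32), rng.normal(7.8, 2.5, 32)),
    "CF-ET": (rng.normal(8.0, 2.5, 32), rng.normal(7.6, 2.5, 32)),
    "CF-M":  (rng.normal(8.1, 2.5, 32), rng.normal(7.7, 2.5, 32)),
}
for name, (familiar, novel) in procedures.items():
    print(f"{name}: d = {familiarity_effect(familiar, novel):.2f}")

# Hypothetical drop-out counts per procedure: (infants tested, infants excluded).
dropouts = {"HPP": (50, 18), "CF-ET": (38, 6), "CF-M": (39, 7)}
table = np.array([[excluded, tested - excluded] for tested, excluded in dropouts.values()])
chi2, p, _, _ = stats.chi2_contingency(table)
print(f"drop-out rates differ across procedures: chi2 = {chi2:.2f}, p = {p:.3f}")
```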


Subject(s)
Infant Behavior/physiology , Language Development , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation/methods , Female , Humans , Infant , Infant Behavior/psychology , Learning/physiology , Male , Photic Stimulation/methods , Random Allocation
9.
Brain Sci ; 10(1)2020 Jan 09.
Article in English | MEDLINE | ID: mdl-31936586

ABSTRACT

Children's songs are omnipresent and highly attractive stimuli in infants' input. Previous work suggests that infants process linguistic-phonetic information from simplified sung melodies. The present study investigated whether infants learn words from ecologically valid children's songs. Testing 40 Dutch-learning 10-month-olds in a familiarization-then-test electroencephalography (EEG) paradigm, this study asked whether infants can segment repeated target words embedded in songs during familiarization and subsequently recognize those words in continuous speech in the test phase. To replicate previous speech work and compare segmentation across modalities, infants participated in both song and speech sessions. Results showed a positive event-related potential (ERP) familiarity effect to the final compared to the first target occurrences during both song and speech familiarization. No evidence was found for word recognition in the test phase following either song or speech. Comparisons across the stimuli of the present and a comparable previous study suggested that acoustic prominence and speech rate may have contributed to the polarity of the ERP familiarity effect and its absence in the test phase. Overall, the present study provides evidence that 10-month-old infants can segment words embedded in songs, and it raises questions about the acoustic and other factors that enable or hinder infant word segmentation from songs and speech.

10.
Infant Behav Dev ; 52: 130-139, 2018 08.
Article in English | MEDLINE | ID: mdl-30086413

ABSTRACT

Children's songs often contain rhyming words at phrase endings. In this study, we investigated whether infants can already recognize this phonological pattern in songs. Earlier studies using lists of spoken words were equivocal about infants' spontaneous processing of rhymes (Hayes et al., 2000; Jusczyk et al., 1999). Songs, however, constitute an ecologically valid rhyming stimulus, which could allow for spontaneous processing of this phonological pattern by infants. Novel children's songs with rhyming and non-rhyming lyrics composed of pseudo-words were presented to 35 nine-month-old Dutch infants using the Headturn Preference Procedure. On average, infants listened longer to the non-rhyming songs, although around half of the infants exhibited a preference for the rhyming songs. These results highlight that infants have the processing abilities needed to benefit from their natural rhyming input for the development of their phonological abilities.


Subject(s)
Auditory Perception/physiology , Music , Phonetics , Female , Germany , Humans , Infant , Male
11.
Second Lang Res ; 33(4): 483-518, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29081568

ABSTRACT

The speech of late bilinguals has frequently been described in terms of cross-linguistic influence (CLI) from the native language (L1) to the second language (L2), but CLI from the L2 to the L1 has received relatively little attention. This article addresses L2 attainment and L1 attrition in voicing systems through measures of voice onset time (VOT) in two groups of Dutch-German late bilinguals in the Netherlands. One group comprises native speakers of Dutch and the other native speakers of German, and the two groups further differ in their degree of L2 immersion: the L1-German-L2-Dutch bilinguals (N = 23) are exposed to their L2 both at home and outside the home, whereas the L1-Dutch-L2-German bilinguals (N = 18) are exposed to their L2 only at home. We tested L2 attainment by comparing each bilingual group's L2 to the other group's L1, and L1 attrition by comparing the bilinguals' L1 to Dutch monolinguals (N = 29) and German monolinguals (N = 27). Our findings indicate that complete L2 immersion may be advantageous in L2 acquisition, but at the same time may cause L1 phonetic attrition. We discuss how the results match the predictions of Flege's Speech Learning Model and explore how far bilinguals' success in acquiring L2 VOT and maintaining L1 VOT depends on the immersion context, articulatory constraints, and the risk of sounding foreign-accented.
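The attainment and attrition questions reduce to group comparisons of mean VOT. A minimal sketch with hypothetical per-speaker VOT values in milliseconds for a voiceless stop; the numbers are illustrative, and the study's actual analysis may differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical mean VOT (ms) per speaker for /t/: Dutch is short-lag, German long-lag.
vot = {
    "Dutch monolinguals":               rng.normal(25, 8, 29),
    "German monolinguals":              rng.normal(70, 12, 27),
    "L1-Dutch bilinguals (L1 Dutch)":   rng.normal(32, 9, 18),
    "L1-German bilinguals (L2 Dutch)":  rng.normal(35, 10, 23),
}
for group, values in vot.items():
    print(f"{group}: mean VOT = {values.mean():.1f} ms")

# L1 attrition check: do L1-Dutch bilinguals' Dutch VOTs differ from Dutch monolinguals'?
t, p = stats.ttest_ind(vot["L1-Dutch bilinguals (L1 Dutch)"], vot["Dutch monolinguals"])
print(f"L1 attrition comparison: t = {t:.2f}, p = {p:.3f}")

# L2 attainment check: compare L1-German bilinguals' L2 Dutch to the other group's L1 Dutch.
t, p = stats.ttest_ind(vot["L1-German bilinguals (L2 Dutch)"], vot["L1-Dutch bilinguals (L1 Dutch)"])
print(f"L2 attainment comparison: t = {t:.2f}, p = {p:.3f}")
```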

12.
Front Psychol ; 6: 1341, 2015.
Article in English | MEDLINE | ID: mdl-26441719

ABSTRACT

Distributional learning of speech sounds is learning that arises simply from exposure to the frequency distributions of speech sounds in one's surroundings. In laboratory settings, the mechanism has been reported to be discernible after only a few minutes of exposure, in both infants and adults. These "effects of distributional training" have traditionally been attributed to the difference in the number of peaks between the experimental distribution (two peaks) and the control distribution (one or zero peaks). However, none of the earlier studies fully excluded a possibly confounding effect of the dispersion of the distributions. Additionally, some studies with a non-speech control condition did not control for a possible difference between processing speech and non-speech. The current study presents an experiment that corrects both shortcomings. Spanish listeners were exposed to either a bimodal distribution encompassing the Dutch contrast /ɑ/∼/a/ or a unimodal distribution with the same dispersion. Before and after training, their accuracy in categorizing [ɑ]- and [a]-tokens was measured. A traditionally calculated p-value showed no significant difference in categorization improvement between bimodally and unimodally trained participants. Given this null result, a Bayesian method was used to assess the odds in favor of the null hypothesis. Four different Bayes factors, each calculated under a different prior belief about previously found effect sizes, indicated the absence of a difference between bimodally and unimodally trained participants. The implication is that "effects of distributional training" observed in the lab are not induced by the number of peaks in the distributions.
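Evidence for the null can be quantified with a default Bayesian t-test whose Cauchy prior scale is varied, one way of encoding different beliefs about plausible effect sizes. The sketch below assumes the pingouin package's bayesfactor_ttest helper and hypothetical improvement scores; the original analysis may have used different software and priors.

```python
import numpy as np
from scipy import stats
import pingouin as pg  # assumed dependency providing a default Bayesian t-test

rng = np.random.default_rng(6)

# Hypothetical pre-to-post categorization improvement (proportion correct) per group.
bimodal = rng.normal(0.05, 0.10, 30)
unimodal = rng.normal(0.05, 0.10, 30)

t, p = stats.ttest_ind(bimodal, unimodal)
print(f"frequentist test: t = {t:.2f}, p = {p:.3f}")

# BF01 (evidence for the null) under different Cauchy prior scales on effect size.
for scale in (0.2, 0.5, 0.707, 1.0):
    bf10 = float(pg.bayesfactor_ttest(t, nx=len(bimodal), ny=len(unimodal), r=scale))
    print(f"prior scale r = {scale}: BF01 = {1 / bf10:.2f}")
```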

13.
Front Psychol ; 6: 495, 2015.
Article in English | MEDLINE | ID: mdl-25964772

ABSTRACT

Adults achieve successful coordination during conversation by using prosodic and lexicosyntactic cues to predict upcoming changes in speakership. We examined the relative weight of these linguistic cues in the prediction of upcoming turn structure by toddlers learning Dutch (Experiment 1; N = 21) and British English (Experiment 2; N = 20) and by adult control participants (Dutch: N = 16; English: N = 20). We tracked participants' anticipatory eye movements as they watched videos of dyadic puppet conversations. We controlled the prosodic and lexicosyntactic cues to turn completion for a subset of the utterances in each conversation to create four types of target utterances (fully incomplete, incomplete syntax, incomplete prosody, and fully complete). All participants (Dutch and English toddlers and adults) used both prosodic and lexicosyntactic cues to anticipate upcoming speaker changes, but weighted lexicosyntactic cues more heavily than prosodic ones when the two were pitted against each other. The results suggest that Dutch and English toddlers are already nearly adult-like in their use of prosodic and lexicosyntactic cues for anticipating upcoming turn transitions.

14.
Infant Behav Dev ; 36(4): 847-62, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24239878

ABSTRACT

Exaggeration of the vowel space in infant-directed speech (IDS) is well documented for English, but has not been consistently replicated in other languages or for other speech-sound contrasts. A second attested, but less discussed, pattern of change in IDS is an overall rise in formant frequencies, which may reflect an affective speaking style. The present study investigates longitudinally how Dutch mothers change their corner vowels, voiceless fricatives, and pitch when speaking to their infants at 11 and 15 months of age. In comparison to adult-directed speech (ADS), Dutch IDS has a smaller vowel space, higher second and third formant frequencies in the vowels, and a higher spectral frequency in the fricatives. The formants of the vowels and the spectral frequency of the fricatives are raised more strongly for infants at 11 than at 15 months, while pitch is more extreme in IDS to 15-month-olds. These results show that enhanced positive affect is the main factor shaping Dutch mothers' realisation of speech sounds in IDS, especially to younger infants. This study provides evidence that mothers' expression of emotion in IDS can influence the realisation of speech sounds, and that the loss or gain of speech clarity may be a secondary effect of affect.
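The vowel-space size mentioned here is commonly operationalized as the area of the triangle spanned by the corner vowels /i/, /a/, /u/ in F1-F2 space. A minimal sketch of that computation with hypothetical formant means (a standard operationalization, not necessarily the study's exact measure):

```python
import numpy as np

def vowel_space_area(corners):
    """Area of the polygon spanned by corner-vowel (F1, F2) means, via the shoelace formula."""
    pts = np.asarray(corners, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Hypothetical mean (F1, F2) values in Hz for /i/, /a/, /u/ in ADS and IDS.
ads = [(280, 2250), (750, 1300), (320, 800)]
ids_ = [(300, 2350), (700, 1400), (350, 900)]

print(f"ADS vowel space area: {vowel_space_area(ads):,.0f} Hz^2")
print(f"IDS vowel space area: {vowel_space_area(ids_):,.0f} Hz^2")  # smaller here, as reported for Dutch IDS
```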


Subject(s)
Child Language , Emotions/physiology , Mother-Child Relations , Mothers/psychology , Speech/physiology , Adult , Female , Happiness , Humans , Infant , Intention , Language , Language Development , Longitudinal Studies , Male , Phonetics , Speech Perception
15.
J Acoust Soc Am ; 131(4): 3079-87, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22501081

ABSTRACT

In an investigation of contextual influences on sound categorization, 64 Peruvian Spanish listeners categorized vowels on an /i/ to /e/ continuum. First, to measure the influence of the stimulus range (broad acoustic context) and the preceding stimuli (local acoustic context), listeners were presented with different subsets of the Spanish /i/-/e/ continuum in separate blocks. Second, the influence of the number of response categories was measured by presenting half of the participants with /i/ and /e/ as responses, and the other half with /i/, /e/, /a/, /o/, and /u/. The results showed that the perceptual category boundary between /i/ and /e/ shifted depending on the stimulus range and that the formant values of locally preceding items had a contrastive influence. Categorization was less susceptible to broad and local acoustic context effects, however, when listeners were presented with five rather than two response options. Vowel categorization depends not only on the acoustic properties of the target stimulus, but also on its broad and local acoustic context. The influence of such context is in turn affected by the number of internal referents that are available to the listener in a task.
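The category boundary referred to here can be estimated by fitting a logistic psychometric function to the proportion of /e/ responses along the continuum and solving for the 50% point. A minimal sketch with simulated responses (stimulus values and the fitting choice are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Hypothetical continuum: 7 steps indexed by F1 (Hz), from /i/-like (low F1) to /e/-like (high F1).
f1_steps = np.linspace(300, 480, 7)
n_trials = 40  # responses per step

# Simulated binary responses (1 = "/e/"), with a true boundary near 390 Hz.
X, y = [], []
for f1 in f1_steps:
    p_e = 1 / (1 + np.exp(-(f1 - 390) / 20))
    X.extend([f1] * n_trials)
    y.extend(rng.binomial(1, p_e, n_trials))
X = np.array(X).reshape(-1, 1)

# Fit a logistic psychometric function; the boundary is the 50% crossover point.
model = LogisticRegression(C=1e6).fit(X, y)   # large C ~ effectively unregularized fit
boundary = -model.intercept_[0] / model.coef_[0, 0]
print(f"estimated /i/-/e/ boundary: {boundary:.0f} Hz")
```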


Subject(s)
Phonetics , Speech Acoustics , Speech Perception/physiology , Acoustic Stimulation/methods , Adolescent , Female , Humans , Language , Male , Peru , Young Adult
16.
Psychophysiology ; 49(5): 638-50, 2012 May.
Article in English | MEDLINE | ID: mdl-22335401

ABSTRACT

In behavioral tasks, previous research has found that advanced Spanish learners of Dutch rely on duration cues to distinguish Dutch vowels, while Dutch listeners rely on spectral cues. This study tested whether language-specific cue weighting is reflected in preattentive processing. The mismatch negativity (MMN) of Dutch and Spanish participants was examined in response to spectral and duration cues in Dutch vowels. The MMN at frontal and mid sites was weaker and peaked later at Fz for Spanish than for Dutch listeners for the spectrally cued contrasts, whereas both groups responded similarly to the duration cue. In line with overt categorization behavior, these MMN data indicate that preattentive cue weighting depends on the listeners' language experience.


Subject(s)
Cues , Evoked Potentials/physiology , Language , Speech Perception/physiology , Acoustic Stimulation , Adult , Attention/physiology , Data Interpretation, Statistical , Electroencephalography , Female , Humans , Male , Middle Aged , Netherlands , Psychomotor Performance/physiology , Spain , Young Adult
17.
J Acoust Soc Am ; 130(4): EL206-12, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21974493

ABSTRACT

This study addresses the questions of whether listening to a bimodal distribution of vowels improves adult learners' categorization of a difficult L2 vowel contrast and whether enhancing the acoustic differences between the vowels in the distribution yields better categorization performance. Spanish learners of Dutch were trained on a natural bimodal or an enhanced bimodal distribution of the Dutch vowels /ɑ/ and /aː/, with the average productions of the vowels or more extreme values as the endpoints, respectively. Categorization improved for learners who listened to the enhanced distribution, which suggests that adults profit from input with properties similar to infant-directed speech.
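The two training conditions differ only in where the peaks of the bimodal distribution are placed. A minimal sketch of how natural and enhanced bimodal training distributions could be generated along a single acoustic dimension; the endpoint values are hypothetical, not the study's stimuli.

```python
import numpy as np

rng = np.random.default_rng(8)

def bimodal_tokens(low_peak, high_peak, spread, n):
    """Sample training tokens from a two-peaked distribution along one acoustic dimension."""
    half = n // 2
    return np.concatenate([
        rng.normal(low_peak, spread, half),       # tokens clustered around the first vowel
        rng.normal(high_peak, spread, n - half),  # tokens clustered around the second vowel
    ])

# Hypothetical F2 (Hz) peaks: natural endpoints vs. acoustically enhanced (more extreme) endpoints.
natural = bimodal_tokens(low_peak=1100, high_peak=1400, spread=60, n=64)
enhanced = bimodal_tokens(low_peak=1000, high_peak=1500, spread=60, n=64)

for name, tokens in [("natural", natural), ("enhanced", enhanced)]:
    print(f"{name}: mean = {tokens.mean():.0f} Hz, range = {tokens.min():.0f}-{tokens.max():.0f} Hz")
```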


Subject(s)
Learning , Multilingualism , Phonetics , Speech Acoustics , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adult , Analysis of Variance , Audiometry, Speech , Discrimination, Psychological , Female , Humans , Male , Middle Aged , Psychoacoustics , Young Adult