Results 1 - 20 of 23
1.
J Exp Child Psychol ; 239: 105826, 2024 03.
Article in English | MEDLINE | ID: mdl-38118379

ABSTRACT

Imitation that entails faithful reproduction of demonstrated behavior by reenacting a sequence of actions accurately is a fast and efficient way to acquire new skills as well as to conform to social norms. Previous studies have reported that both culture and gender may affect young children's fidelity of imitation. We analyzed the imitative behavior of 87 children aged 3 to 6 years. An instrumental task was administered that offered partial (opaque apparatus) or total (transparent apparatus) information about the causal connection between the demonstrated actions and their effect in achieving a desired reward. Imitative fidelity (imitating actions that were demonstrated by an adult model yet were unnecessary for achieving the instrumental goal) increased as a function of age in boys, whereas no age-related differences were found in girls. This lack of increase in girls can be ascribed to their displaying higher degrees of imitative fidelity at an earlier age.


Subject(s)
Imitative Behavior , Motivation , Male , Child , Female , Humans , Child, Preschool , Social Norms
2.
J Exp Psychol Gen ; 151(11): 2706-2719, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35666891

ABSTRACT

Previous studies showed a bilingual advantage in metacognitive processing (tracking one's own cognitive performance) in linguistic tasks. However, bilinguals do not constitute a homogeneous population, and it was unclear which aspects of bilingualism affect metacognition. In this project, we tested the hypothesis that simultaneous acquisition and use of typologically different languages leads to the development of diverse processing strategies and enhances metacognition. The hypothesis was tested in the visual and auditory modalities, in language and nonlanguage domains, in an artificial language learning task. In the auditory modality, the hypothesis was confirmed for linguistic stimuli, although no between-domain transfer of metacognitive abilities was observed at the individual level. In the visual modality, no differences in metacognitive efficiency were observed. Moreover, we found that bilingualism per se and the use of typologically different languages modulated separate metacognitive processes engaged in monitoring cognitive performance in a statistical learning task. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Metacognition , Multilingualism , Humans , Language , Learning , Linguistics
3.
Ann N Y Acad Sci ; 1511(1): 191-209, 2022 05.
Article in English | MEDLINE | ID: mdl-35124815

ABSTRACT

In Basque-Spanish bilinguals, statistical learning (SL) in the visual modality was more efficient on nonlinguistic than linguistic input; in the auditory modality, we found the reverse pattern of results. We hypothesize that SL was shaped for processing nonlinguistic environmental stimuli and only later, as the language faculty emerged, recycled for speech processing. This led to further adaptive changes in the neurocognitive mechanisms underlying speech processing, including SL. By contrast, as a recent cultural innovation, written language has not yet led to adaptations. The current study investigated whether such phylogenetic influences on SL can be modulated by ontogenetic influences on a shorter timescale, over the course of individual development. We explored how SL is modulated by the ambient linguistic environment. We found that SL in the auditory modality can be further modulated by exposure to a bilingual environment, in which speakers need to process a wider range of diverse speech cues. This effect was observed only on linguistic, not nonlinguistic, material. We conclude that ontogenetic factors modulate the efficiency of already existing SL ability, honing it for specific types of input, by providing new targets for selection via exposure to different cues in the sensory input.


Subject(s)
Learning , Speech Perception , Humans , Language , Language Development , Phylogeny , Speech
4.
Eur J Neurosci ; 55(11-12): 3365-3372, 2022 06.
Article in English | MEDLINE | ID: mdl-33125787

ABSTRACT

Regular distribution of auditory stimuli over time can facilitate perception and attention. However, such effects have to date only been observed in separate studies using either linguistic or non-linguistic materials. This has made it difficult to compare the effects of rhythmic regularity on attention across domains. The current study was designed to provide an explicit within-subject comparison of reaction times and accuracy in an auditory target-detection task using sequences of regularly and irregularly distributed syllables (linguistic material) and environmental sounds (non-linguistic material). We explored how reaction times and accuracy were modulated by regular and irregular rhythms in a sound- (non-linguistic) and syllable-monitoring (linguistic) task performed by native Spanish speakers (N = 25). Surprisingly, we did not observe a facilitatory effect of regular rhythm on reaction times or accuracy. Further exploratory analysis showed that targets appearing later in sequences of syllables and sounds were identified more quickly. For late targets, reaction times were shorter for stimuli with a regular rhythm than for stimuli with an irregular rhythm in linguistic material, but not in non-linguistic material. The difference in reaction times between regular and irregular rhythm for late targets was also larger for linguistic than for non-linguistic material. This suggests a modulatory effect of rhythm on linguistic stimuli only once the percept of temporal isochrony has been established. We suggest that temporal isochrony modulates attention to linguistic more than to non-linguistic stimuli because the human auditory system is tuned to process speech. These results, however, need to be further tested in confirmatory studies.


Subject(s)
Auditory Perception , Language , Acoustic Stimulation/methods , Humans , Reaction Time , Speech
5.
Psychon Bull Rev ; 28(1): 333-340, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32869190

ABSTRACT

Despite theoretical debate on the extent to which statistical learning is incidental or modulated by explicit instructions and conscious awareness of its content, no study has investigated the metacognition of statistical learning. We used an artificial language learning paradigm and a segmentation task that required splitting a continuous stream of syllables into discrete recurrent constituents. During this task, statistical learning potentially produces knowledge of discrete constituents as well as knowledge of the statistical regularities embodied in the familiarization input. We measured metacognitive sensitivity and efficiency (estimated with hierarchical Bayesian modelling) to probe the role of conscious awareness in recognition of constituents extracted from the familiarization input and recognition of novel constituents embodying the same statistical regularities as the extracted constituents. Recognition of novel constituents is taken to reflect knowledge of the statistical structure rather than retrieval of whole constituents from memory. We found that participants are equally sensitive to both types of learning products, yet subject them to varying degrees of conscious processing during the postfamiliarization recognition test. The data point to the contribution of conscious awareness to at least some types of statistical learning content.


Subject(s)
Metacognition/physiology , Probability Learning , Recognition, Psychology/physiology , Adult , Bayes Theorem , Female , Humans , Male , Young Adult
6.
Ann N Y Acad Sci ; 1486(1): 76-89, 2021 02.
Article in English | MEDLINE | ID: mdl-33020959

ABSTRACT

The cognitive mechanisms underlying statistical learning are engaged for the purposes of speech processing and language acquisition. However, these mechanisms are shared by a wide variety of species that do not possess the language faculty. Moreover, statistical learning operates across domains, including nonlinguistic material. Ancient mechanisms for segmenting continuous sensory input into discrete constituents have evolved for general-purpose segmentation of the environment and been readopted for processing linguistic input. Linguistic input provides a rich set of cues for the boundaries between sequential constituents. Such input engages a wider variety of more specialized mechanisms operating on these language-specific cues, thus potentially reducing the role of conditional statistics in tokenizing a continuous linguistic stream. We provide an explicit within-subject comparison of the utility of statistical learning in language versus nonlanguage domains across the visual and auditory modalities. The results showed that in the auditory modality statistical learning is more efficient with speech-like input, while in the visual modality efficiency is higher with nonlanguage input. We suggest that the speech faculty has been important for individual fitness for an extended period, leading to the adaptation of statistical learning mechanisms for speech processing. This is not the case in the visual modality, in which linguistic material presents a less ecological type of sensory input.


Subject(s)
Biological Evolution , Language Development , Language , Learning , Speech Perception/physiology , Visual Perception/physiology , Adolescent , Adult , Female , Humans , Male , Speech/physiology , Young Adult
7.
Ann N Y Acad Sci ; 1467(1): 60-76, 2020 05.
Article in English | MEDLINE | ID: mdl-31919870

ABSTRACT

Statistical learning is a set of cognitive mechanisms that extract regularities from the environment and segment continuous sensory input into discrete units. The current study used functional magnetic resonance imaging (fMRI) (N = 25) in conjunction with an artificial language learning paradigm to provide new insight into the neural mechanisms of statistical learning, considering both the online process of extracting statistical regularities and the subsequent offline recognition of learned patterns. Notably, prior fMRI studies on statistical learning have not contrasted neural activation during the learning and recognition experimental phases. Here, we found that learning is supported by the superior temporal gyrus and the anterior cingulate gyrus, while subsequent recognition relies on the left inferior frontal gyrus. In addition, prior studies assessed the brain response only during the recognition of trained words relative to novel nonwords. Hence, a further key goal of this study was to understand how the brain supports recognition of discrete constituents from the continuous input versus recognition of mere statistical structure that is used to build new constituents that are statistically congruent with the ones from the input. Behaviorally, recognition performance indicated that statistically congruent novel tokens were less likely to be endorsed as parts of the familiar environment than discrete constituents. fMRI data showed that the left intraparietal sulcus and angular gyrus support the recognition of old discrete constituents relative to novel statistically congruent items, likely reflecting an additional contribution from memory representations for trained items.


Subject(s)
Brain/physiology , Learning/physiology , Recognition, Psychology/physiology , Adult , Brain/diagnostic imaging , Brain Mapping , Female , Functional Neuroimaging , Humans , Magnetic Resonance Imaging , Male , Young Adult
8.
J Exp Psychol Learn Mem Cogn ; 46(3): 529-538, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31282726

ABSTRACT

We assessed the effect of bilingualism on metacognitive processing in an artificial language learning task, in 2 experiments that varied in the difficulty of segmenting the language. Following a study phase in which participants were exposed to the artificial language, segmentation performance was assessed by means of a dual forced-choice recognition test followed by confidence judgments. We used a signal detection approach to estimate type 1 performance (i.e., the participants' ability to discriminate statistical words from foils constructed from the same syllables) and type 2 metacognitive performance (i.e., the ability to discriminate the correctness of the type 1 decisions by confidence ratings). The results showed that bilinguals and monolinguals do not differ in type 1 recognition performance, but across the 2 experiments, metacognitive performance was higher in bilinguals than in monolinguals. The results show that bilingualism improves metacognitive evaluation of performance in the linguistic domain. We suggest that the improvement in metacognitive performance stems from bilinguals' enhanced error-monitoring abilities in the language domain, which is also modulated by individual experience. (PsycINFO Database Record (c) 2020 APA, all rights reserved).
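As an illustration of the type 2 part of such a signal detection analysis, the sketch below computes a simple non-parametric measure of metacognitive sensitivity: the probability that a correct type 1 decision receives a higher confidence rating than an incorrect one. The per-trial data and the function name are assumptions for illustration only, not the analysis actually used in the study.

```python
# A minimal sketch, not the study's analysis: non-parametric type 2 sensitivity
# computed from hypothetical per-trial correctness and confidence ratings.
def type2_auc(correct, confidence):
    """Probability that a randomly chosen correct trial received a higher
    confidence rating than a randomly chosen incorrect trial (ties count 0.5)."""
    corr = [c for ok, c in zip(correct, confidence) if ok]
    err = [c for ok, c in zip(correct, confidence) if not ok]
    if not corr or not err:
        return float("nan")
    wins = sum((c > e) + 0.5 * (c == e) for c in corr for e in err)
    return wins / (len(corr) * len(err))

# Toy data: 12 recognition trials with 4-point confidence ratings.
correct = [1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1]
confidence = [4, 3, 2, 4, 3, 1, 2, 2, 4, 3, 1, 3]
print(f"type 2 AUC: {type2_auc(correct, confidence):.2f}")  # 0.5 = no metacognitive sensitivity
```

A value near 0.5 indicates that confidence ratings carry no information about the correctness of the type 1 decision; higher values indicate better metacognitive discrimination.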


Subject(s)
Metacognition/physiology , Multilingualism , Probability Learning , Psycholinguistics , Signal Detection, Psychological/physiology , Adult , Humans , Young Adult
9.
Lang Speech ; 63(2): 242-263, 2020 Jun.
Article in English | MEDLINE | ID: mdl-30864487

ABSTRACT

We tested the hypothesis that languages can be classified by their degree of tonal rhythm (Jun, 2014). The tonal rhythms of English and Italian were quantified using the following parameters: (a) regularity of tonal alternations in time, measured as durational variability in peak-to-peak and valley-to-valley intervals; (b) magnitude of F0 excursions, measured as the range of frequencies covered by the speaker between consecutive F0 maxima and minima; (c) number of tonal target points per intonational unit; and (d) similarity of F0 rising and falling contours within intonational units. The results show that, as predicted by Jun's prosodic typology (2014), Italian has a stronger tonal rhythm than English, expressed by higher regularity in the distribution of F0 minima turning points, larger F0 excursions, and more frequent tonal targets, indicating alternating phonological H and L tones. This cross-language difference can be explained by the relative load of F0 and durational ratios on the perception and production of speech rhythm and prominence. We suggest that research on the role of speech rhythm in speech processing and language acquisition should not be restricted to syllabic rhythm, but should also examine the role of cross-language differences in tonal rhythm.
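Measures (a) and (b) lend themselves to a compact illustration. The sketch below computes the variability of peak-to-peak intervals and the magnitude of F0 excursions in semitones from a hypothetical list of tonal turning points; the data points are invented and the study's actual implementation may differ.

```python
# A minimal sketch, assuming a hypothetical list of (time, F0) tonal turning
# points with peaks and valleys alternating; illustrative values only.
import math
import statistics

turning_points = [
    (0.12, 210), (0.31, 150), (0.55, 230), (0.78, 140), (1.02, 220), (1.24, 145),
]  # (time in s, F0 in Hz)

# (a) Regularity of tonal alternations: durational variability of
#     peak-to-peak intervals (here, the coefficient of variation).
peaks = turning_points[0::2]
peak_intervals = [t2 - t1 for (t1, _), (t2, _) in zip(peaks, peaks[1:])]
peak_cv = statistics.stdev(peak_intervals) / statistics.mean(peak_intervals)

# (b) Magnitude of F0 excursions between consecutive maxima and minima,
#     expressed in semitones (12 * log2 of the frequency ratio).
excursions_st = [
    abs(12 * math.log2(f2 / f1))
    for (_, f1), (_, f2) in zip(turning_points, turning_points[1:])
]

print(f"CV of peak-to-peak intervals: {peak_cv:.2f}")
print(f"mean F0 excursion: {statistics.mean(excursions_st):.1f} semitones")
```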


Subject(s)
Language Development , Language , Linguistics , Periodicity , Speech , Adult , Female , Humans , Timbre Perception , Verbal Learning , Young Adult
10.
Eur J Neurosci ; 51(9): 2008-2022, 2020 05.
Article in English | MEDLINE | ID: mdl-31872926

ABSTRACT

A continuous stream of syllables is segmented into discrete constituents based on the transitional probabilities (TPs) between adjacent syllables by means of statistical learning. However, we still do not know whether people attend to high TPs between frequently co-occurring syllables and cluster them together as parts of the discrete constituents or attend to low TPs aligned with the edges between the constituents and extract them as whole units. Earlier studies on TP-based segmentation also have not distinguished between the segmentation process (how people segment continuous speech) and the learning product (what is learnt by means of statistical learning mechanisms). In the current study, we explored the learning outcome separately from the learning process, focusing on three possible learning products: holistic constituents that are retrieved from memory during the recognition test, clusters of frequently co-occurring syllables, or a set of statistical regularities which can be used to reconstruct legitimate candidates for discrete constituents during the recognition test. Our data suggest that people employ boundary-finding mechanisms during online segmentation by attending to low inter-syllabic TPs during familiarization and also identify potential candidates for discrete constituents based on their statistical congruency with rules extracted during the learning process. Memory representations of recurrent constituents embedded in the continuous speech stream during familiarization facilitate subsequent recognition of these discrete constituents.
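The sketch below illustrates the basic TP computation and the boundary-finding idea referred to above: forward TPs between adjacent syllables are estimated from the stream, and word boundaries are posited at local TP minima. The syllable stream is invented for illustration; the study's actual materials and analysis are not reproduced here.

```python
# A minimal sketch of forward transitional probabilities (TPs) between adjacent
# syllables and boundary finding at TP dips; toy stream built from three "words".
from collections import Counter

stream = ("tu pi ro go la bu tu pi ro pa du ki go la bu pa du ki tu pi ro").split()

bigrams = Counter(zip(stream, stream[1:]))
unigrams = Counter(stream[:-1])
tp = {(a, b): n / unigrams[a] for (a, b), n in bigrams.items()}

# Posit a word boundary wherever the TP to the next syllable is a local
# minimum, i.e., lower than the TPs on both sides.
tps = [tp[(a, b)] for a, b in zip(stream, stream[1:])]
boundaries = [
    i + 1
    for i in range(1, len(tps) - 1)
    if tps[i] < tps[i - 1] and tps[i] < tps[i + 1]
]
print({" ".join(pair): round(p, 2) for pair, p in tp.items()})
print("boundary positions (syllable index):", boundaries)
```

In this toy stream, within-word TPs are 1.0 and between-word TPs drop to 0.5, so the local minima coincide with the word edges.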


Subject(s)
Education, Distance , Speech Perception , Humans , Learning , Recognition, Psychology , Speech
11.
Evol Psychol ; 17(3): 1474704919879335, 2019.
Article in English | MEDLINE | ID: mdl-31564124

ABSTRACT

Patterns of nonverbal and verbal behavior of interlocutors become more similar as communication progresses. Rhythm entrainment promotes prosocial behavior and signals social bonding and cooperation. Yet, it is unknown whether the convergence of rhythm in human speech is perceived and used to make pragmatic inferences about the cooperative disposition of the interactors. We conducted two experiments to answer this question. For analytical purposes, we separate pulse (recurring acoustic events) and meter (hierarchical structuring of pulses based on their relative salience). We asked listeners to judge the hostile or collaborative attitude of interacting agents who exhibit different or similar pulse (Experiment 1) or meter (Experiment 2). The results suggest that rhythm convergence can be a marker of social cooperation at the level of pulse, but not at the level of meter. The mapping of rhythmic convergence onto social affiliation or opposition is important at the early stages of language acquisition. The evolutionary origin of this capacity is possibly the need to transmit and perceive coalition information in social groups of human ancestors. We suggest that this capacity could have promoted the emergence of the speech faculty in humans.


Subject(s)
Biological Evolution , Cooperative Behavior , Interpersonal Relations , Social Perception , Verbal Behavior/physiology , Adolescent , Adult , Humans , Time Factors , Young Adult
12.
Ann N Y Acad Sci ; 1453(1): 5-11, 2019 10.
Article in English | MEDLINE | ID: mdl-31502260

ABSTRACT

Rhythm is fundamental to every motor activity. Neural and physiological mechanisms that underlie rhythmic cognition, in general, and rhythmic pattern generation, in particular, are evolutionarily ancient. As speech production is a kind of motor activity, investigating speech rhythm can provide insight into how general motor patterns have been adapted for more specific use in articulation and speech production. Studies on speech rhythm may further provide insight into the development of speech capacity in humans. As speech capacity is putatively a prerequisite for developing a language faculty, studies on speech rhythm may cast some light on the mystery of language evolution in the human genus. Here, we propose an approach to exploring speech rhythm as a window on speech emergence in ontogenesis and phylogenesis, as well as on diachronic linguistic changes.


Subject(s)
Culture , Language Development , Speech/physiology , Humans , Language , Periodicity
13.
Ann N Y Acad Sci ; 1453(1): 153-165, 2019 10.
Article in English | MEDLINE | ID: mdl-31373001

ABSTRACT

Regular rhythm facilitates audiomotor entrainment and synchronization in motor behavior and vocalizations between individuals. As rhythm entrainment between interacting agents is correlated with higher levels of cooperation and prosocial affiliative behavior, humans can potentially map regular speech rhythm onto higher cooperation and friendliness between interacting individuals. We tested this hypothesis at two rhythmic levels: pulse (recurrent acoustic events) and meter (hierarchical structuring of pulses based on their relative salience). We asked the listeners to make judgments of the hostile or collaborative attitude of two interacting agents who exhibit either regular or irregular pulse (Experiment 1) or meter (Experiment 2). The results confirmed a link between the perception of social affiliation and rhythmicity: evenly distributed pulses (vowel onsets) and consistent grouping of pulses into recurrent hierarchical patterns are more likely to be perceived as cooperation signals. People are more sensitive to regularity at the level of pulse than at the level of meter, and they are more confident when they associate cooperation with isochrony in pulse. The evolutionary origin of this faculty is possibly the need to transmit and perceive coalition information in social groups of human ancestors. We discuss the implications of these findings for the emergence of speech in humans.


Subject(s)
Periodicity , Social Behavior , Speech Perception/physiology , Speech/physiology , Adolescent , Adult , Female , Humans , Judgment/physiology , Language , Male , Multilingualism , Young Adult
14.
J Speech Lang Hear Res ; 62(4): 835-852, 2019 04 15.
Article in English | MEDLINE | ID: mdl-30969888

ABSTRACT

Purpose: We investigated whether rhythm discrimination is mainly driven by the native language of the listener or by the fundamental design of the human auditory system and universal cognitive mechanisms shared by all people irrespective of the rhythmic patterns in their native language. Method: In multiple experiments, we asked participants to listen to 2 continuous acoustic sequences and to determine whether their rhythms were the same or different (AX discrimination). Participants were native speakers of 4 languages with different rhythmic properties (Spanish, French, English, and German), which allowed us to assess whether the predominant rhythmic patterns of a native language affect sensitivity, bias, and reaction time in detecting rhythmic changes in linguistic (Experiment 2) and nonlinguistic (Experiments 1 and 2) acoustic sequences. We examined sensitivity and bias measures as well as reaction times, and we computed Bayes factors to assess the effect of native language. Results: All listeners performed better (i.e., responded faster and manifested higher sensitivity and accuracy) when detecting the presence or absence of a rhythm change when the 1st stimulus in an AX test pair exhibited regular rhythm (i.e., a syllable-timed rhythmic pattern) than when the 1st stimulus exhibited irregular rhythm (i.e., a stress-timed rhythmic pattern). This pattern of results was observed for both linguistic and nonlinguistic stimuli and was not modulated by the native language of the participant. Conclusion: We conclude that rhythm change detection is a fundamental function of a processing system that relies on general auditory mechanisms and is not modulated by linguistic experience.
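The sensitivity and bias measures mentioned in the Method can be illustrated with the standard equal-variance signal detection formulas below; the trial counts are invented, and this is only a sketch of the kind of computation involved, not the study's analysis.

```python
# A minimal sketch, not the study's analysis: equal-variance signal detection
# measures for an AX same/different task, with invented trial counts.
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """d' = z(HR) - z(FAR); criterion c = -(z(HR) + z(FAR)) / 2.
    A log-linear correction keeps the rates away from 0 and 1."""
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hr) - z(far), -(z(hr) + z(far)) / 2

# "Different" trials: hits are correctly detected rhythm changes.
# "Same" trials: false alarms are "different" responses to unchanged rhythms.
d_prime, criterion = sdt_measures(hits=42, misses=8, false_alarms=12, correct_rejections=38)
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```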


Subject(s)
Hearing/physiology , Language , Phonetics , Speech Perception , Acoustic Stimulation , Adult , Bayes Theorem , Female , Humans , Male , Reaction Time , Young Adult
15.
Lang Speech ; 61(1): 84-96, 2018 03.
Article in English | MEDLINE | ID: mdl-28486862

ABSTRACT

Research has demonstrated distinct roles for consonants and vowels in speech processing. For example, consonants have been shown to support lexical processes, such as the segmentation of speech based on transitional probabilities (TPs), more effectively than vowels. Theory and data so far, however, have considered only non-tone languages, that is to say, languages that lack contrastive lexical tones. In the present work, we provide a first investigation of the role of consonants and vowels in statistical speech segmentation by native speakers of Cantonese, as well as assessing how tones modulate the processing of vowels. Results show that Cantonese speakers are unable to use statistical cues carried by consonants for segmentation, but they can use cues carried by vowels. This difference becomes more evident when considering tone-bearing vowels. Additional data from speakers of Russian and Mandarin suggest that the ability of Cantonese speakers to segment streams with statistical cues carried by tone-bearing vowels extends to other tone languages, but is much reduced in speakers of non-tone languages.


Subject(s)
Cues , Models, Statistical , Phonetics , Pitch Perception , Speech Acoustics , Speech Perception , Voice Quality , Acoustic Stimulation , Adolescent , Adult , Female , Humans , Male , Pitch Discrimination , Young Adult
16.
Lang Speech ; 60(3): 333-355, 2017 09.
Article in English | MEDLINE | ID: mdl-28915779

ABSTRACT

We investigated the independent contributions of speech rate and speech rhythm to perceived foreign accent. To address this issue, we used a resynthesis technique that neutralizes segmental and tonal idiosyncrasies between identical sentences produced by French learners of English at different proficiency levels while maintaining the idiosyncrasies pertaining to prosodic timing patterns. We created stimuli that (1) preserved the idiosyncrasies in speech rhythm while controlling for differences in speech rate between the utterances; (2) preserved the idiosyncrasies in speech rate while controlling for differences in speech rhythm between the utterances; and (3) preserved the idiosyncrasies in both speech rate and speech rhythm. All stimuli were created in intoned (with imposed intonational contour) and flat (with monotonized, constant F0) conditions. The original and the resynthesized sentences were rated by native speakers of English for degree of foreign accent. We found that both speech rate and speech rhythm influence the degree of perceived foreign accent, but the effect of speech rhythm is larger than that of speech rate. We also found that intonation enhances the perception of fine differences in rhythmic patterns but reduces the perceptual salience of fine differences in speech rate.
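The logic of manipulations (1) and (2) can be sketched on a toy set of per-segment durations: uniform scaling equalizes overall rate while preserving relative timing (rhythm), whereas transplanting relative durations imposes a reference rhythm while keeping the original rate. The durations below are invented, and the actual stimuli were created with speech resynthesis, which this sketch does not attempt to reproduce.

```python
# A minimal conceptual sketch of the two duration manipulations, on hypothetical
# per-segment durations in seconds; not the resynthesis procedure itself.
learner   = [0.09, 0.21, 0.07, 0.30, 0.12, 0.18]   # learner's segment durations
reference = [0.10, 0.15, 0.10, 0.20, 0.15, 0.15]   # native-speaker reference

# (1) Preserve the learner's rhythm, control for rate:
#     scale every segment so the total duration matches the reference.
rate_factor = sum(reference) / sum(learner)
rhythm_preserved = [d * rate_factor for d in learner]

# (2) Preserve the learner's rate, control for rhythm:
#     keep the learner's total duration but impose the reference's
#     relative (proportional) segment durations.
total = sum(learner)
rate_preserved = [total * d / sum(reference) for d in reference]

print([round(d, 3) for d in rhythm_preserved])
print([round(d, 3) for d in rate_preserved])
```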


Subject(s)
Multilingualism , Periodicity , Pitch Perception , Speech Acoustics , Speech Perception , Voice Quality , Acoustic Stimulation , Cues , Humans , Judgment , Time Factors
17.
J Speech Lang Hear Res ; 60(6): 1493-1506, 2017 06 10.
Article in English | MEDLINE | ID: mdl-28586823

ABSTRACT

Purpose: We investigated cross-linguistic differences in fundamental frequency range (FFR) in Welsh-English bilingual speech. This is the first study that reports gender-specific behavior in switching FFRs across languages in bilingual speech. Method: FFR was conceptualized as a behavioral pattern using measures of span (range of fundamental frequency-in semitones-covered by the speaker's voice) and level (overall height of fundamental frequency maxima, minima, and means of speaker's voice) in each language. Results: FFR measures were taken from recordings of 30 Welsh-English bilinguals (14 women and 16 men), who read 70 semantically matched sentences, 35 in each language. Comparisons were made within speakers across languages, separately in male and female speech. Language background and language use information was elicited for qualitative analysis of extralinguistic factors that might affect the FFR. Conclusions: Cross-linguistic differences in FFR were found to be consistent across female bilinguals but random across male bilinguals. Most female bilinguals showed distinct FFRs for each language. Most male bilinguals, however, were found not to change their FFR when switching languages. Those who did change used different strategies than women when differentiating FFRs between languages. Detected cross-linguistic differences in FFR can be explained by sociocultural factors. Therefore, sociolinguistic factors are to be taken into account in any further study of language-specific pitch setting and cross-linguistic differences in FFR.
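As a sketch of the two FFR measures defined in the Method (span in semitones and level), the code below computes them from hypothetical per-sentence F0 maxima and minima; the values are invented, and the semitone conversion (12 * log2 of the frequency ratio) is a standard assumption rather than a detail stated in the abstract.

```python
# A minimal sketch of FFR span and level from hypothetical per-sentence F0 values.
import math
from statistics import mean

f0_maxima = [245.0, 260.0, 238.0, 252.0]   # per-sentence F0 maxima (Hz)
f0_minima = [172.0, 165.0, 180.0, 170.0]   # per-sentence F0 minima (Hz)

# Span: range of F0 covered by the speaker's voice, in semitones.
span_st = 12 * math.log2(max(f0_maxima) / min(f0_minima))

# Level: overall height of F0 maxima, minima, and means of the speaker's voice.
level = {
    "mean of maxima (Hz)": mean(f0_maxima),
    "mean of minima (Hz)": mean(f0_minima),
    "grand mean (Hz)": mean(f0_maxima + f0_minima),
}
print(f"span: {span_st:.1f} semitones")
print(level)
```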


Subject(s)
Linguistics , Multilingualism , Speech Acoustics , Adult , Female , Humans , Male , Reading , Sex Factors , Speech Production Measurement , Young Adult
18.
Mem Cognit ; 45(5): 863-876, 2017 07.
Article in English | MEDLINE | ID: mdl-28290103

ABSTRACT

It is widely accepted that durational cues, in particular phonological phrase-final lengthening, can be exploited in the segmentation of a novel language, i.e., in extracting discrete constituents from continuous speech. The use of final lengthening for segmentation and its facilitatory effect have been claimed to be universal. However, lengthening in the world's languages can also mark lexically stressed syllables. Stress-induced lengthening can potentially be in conflict with right-edge phonological phrase boundary lengthening. Thus, the processing of durational cues in segmentation can depend on the listener's linguistic background, e.g., on the specific correlates and unmarked location of lexical stress in the native language of the listener. We tested this prediction and found that segmentation by both German and Basque speakers is facilitated when lengthening is aligned with the word-final syllable and is not affected by lengthening on either the penultimate or the antepenultimate syllable. Lengthening of the word-final syllable, however, does not help Italian and Spanish speakers to segment continuous speech, and lengthening of the antepenultimate syllable impedes their performance. We also found a facilitatory effect of penultimate lengthening on segmentation by Italian speakers. These results confirm our hypothesis that the processing of lengthening cues is not universal, and that the interpretation of lengthening as a phonological phrase-final boundary marker in a novel language of exposure can be overridden by the phonology of lexical stress in the native language of the listener.


Subject(s)
Psycholinguistics , Speech Perception/physiology , Speech/physiology , Adult , Humans , Young Adult
19.
J Acoust Soc Am ; 138(3): EL199-204, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26428813

ABSTRACT

Analysis of English rhythm in speech produced by children and adults revealed that speech rhythm becomes increasingly more stress-timed as language acquisition progresses. Children reach the adult-like target by 11 to 12 years. The employed speech elicitation paradigm ensured that the sentences produced by adults and children at different ages were comparable in terms of lexical content, segmental composition, and phonotactic complexity. Detected differences between child and adult rhythm and between rhythm in child speech at various ages cannot be attributed to acquisition of phonotactic language features or vocabulary, and indicate the development of language-specific phonetic timing in the course of acquisition.


Subject(s)
Child Language , Periodicity , Phonetics , Speech Acoustics , Acoustics , Adult , Age Factors , Child , Child, Preschool , Female , Humans , Male , Middle Aged , Pattern Recognition, Automated , Signal Processing, Computer-Assisted , Speech Production Measurement , Time Factors , Vocabulary
20.
J Acoust Soc Am ; 138(2): 533-44, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26328670

ABSTRACT

The development of speech rhythm in second language (L2) acquisition was investigated. Speech rhythm was defined as durational variability that can be captured by the interval-based rhythm metrics. These metrics were used to examine the differences in durational variability between proficiency levels in L2 English spoken by French and German learners. The results reveal that durational variability increased as L2 acquisition progressed in both groups of learners. This indicates that speech rhythm in L2 English develops from more syllable-timed toward more stress-timed patterns irrespective of whether the native language of the learner is rhythmically similar to or different from the target language. Although both groups showed similar development of speech rhythm in L2 acquisition, there were also differences: German learners achieved a degree of durational variability typical of the target language, while French learners exhibited lower variability than native British speakers, even at an advanced proficiency level.
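The abstract does not state which interval-based metrics were used, so the sketch below computes a few standard ones (%V, ΔC, VarcoV, and the vocalic nPVI) from hypothetical vocalic and consonantal interval durations, as an illustration of how durational variability of this kind is typically quantified.

```python
# A minimal sketch of common interval-based rhythm metrics, computed over
# hypothetical vocalic and consonantal interval durations (ms); illustration only.
from statistics import mean, pstdev

vocalic     = [80, 120, 60, 150, 90, 70]    # vocalic interval durations (ms)
consonantal = [110, 70, 140, 60, 130, 90]   # consonantal interval durations (ms)

percent_v = 100 * sum(vocalic) / (sum(vocalic) + sum(consonantal))  # %V
delta_c = pstdev(consonantal)                                       # ΔC
varco_v = 100 * pstdev(vocalic) / mean(vocalic)                     # VarcoV

# Normalized pairwise variability index over vocalic intervals (nPVI-V).
npvi_v = 100 * mean(
    abs(d1 - d2) / ((d1 + d2) / 2) for d1, d2 in zip(vocalic, vocalic[1:])
)
print(f"%V={percent_v:.1f}  dC={delta_c:.1f}  VarcoV={varco_v:.1f}  nPVI-V={npvi_v:.1f}")
```

Higher variability on these measures is conventionally associated with more stress-timed rhythm, which is the direction of change the study reports for advancing L2 proficiency.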


Subject(s)
Language , Multilingualism , Speech/physiology , Adolescent , Adult , Female , Humans , Male , Phonetics , Time Factors , Young Adult