1.
Child Dev; 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38563146

ABSTRACT

Most language use is displaced, referring to past, future, or hypothetical events, which poses the challenge of how children learn what words refer to when the referent is not physically available. One possibility is that iconic cues, which imagistically evoke properties of absent referents, support learning when referents are displaced. In an audio-visual corpus of caregiver-child dyads, English-speaking caregivers interacted with their children (N = 71, 24-58 months) in contexts in which the objects talked about were either familiar or unfamiliar to the child, and either physically present or displaced. Analysis of the vocal, manual, and looking behaviors caregivers produced suggests that they used iconic cues especially in displaced contexts and for unfamiliar objects, relying on other cues when objects were present.

2.
Cogn Sci; 46(10): e13203, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36251421

ABSTRACT

Of the six possible orderings of the three main constituents of language (subject, verb, and object), two (SOV and SVO) are predominant cross-linguistically. Previous research using the silent gesture paradigm, in which hearing participants produce or respond to gestures without speech, has shown that factors such as reversibility, salience, and animacy can affect the preferences for different orders. Here, we test whether participants' preferences for orders that are conditioned on the semantics of the event change depending on (i) the iconicity of individual gestural elements and (ii) prior knowledge of a conventional lexicon. Our findings demonstrate the same preference for semantically conditioned word order found in previous studies, specifically that SOV and SVO are preferred differentially for different types of events. We do not find that the iconicity of individual gestures affects participants' ordering preferences; however, we do find that learning a lexicon leads to a stronger preference for SVO-like orders overall. Finally, we compare our findings from speakers of English, an SVO-dominant language, with data from speakers of Turkish, an SOV-dominant language. We find that, while learning a lexicon leads to an increase in SVO preference for both sets of participants, this effect is mediated by language background and event type, suggesting that an interplay of factors together determines preferences for different ordering patterns. Taken together, our results support a view of word order as a gradient phenomenon responding to multiple biases.


Subject(s)
Gestures , Language , Humans , Learning , Semantics , Speech
3.
Cognition; 228: 105206, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35810511

ABSTRACT

Silent gesture studies, in which hearing participants from different linguistic backgrounds produce gestures to communicate events, have been used to test hypotheses about the cognitive biases that govern cross-linguistic word order preferences. In particular, the differential use of SOV and SVO order to communicate, respectively, extensional events (where the direct object exists independently of the event; e.g., girl throws ball) and intensional events (where the meaning of the direct object is potentially dependent on the verb; e.g., girl thinks of ball) has been suggested to represent a natural preference, demonstrated in improvisation contexts. However, natural languages tend to prefer systematic word orders, where a single order is used regardless of the event being communicated. We present a series of studies that investigate ordering preferences for SOV and SVO orders using an online forced-choice experiment, in which English-speaking participants select orders for different events (i) in the absence of conventions and (ii) after learning event-order mappings in different frequencies in a regularisation experiment. Our results show that natural ordering preferences arise in the absence of conventions, replicating previous findings from production experiments. In addition, we show that participants regularise the input they learn in the manual modality in two ways: while the preference for systematic order patterns increases through learning, it exists in competition with the natural ordering preference, which conditions order on the semantics of the event. Using our experimental data in a computational model of cultural transmission, we show that this pattern is expected to persist over generations, suggesting that we should expect to see evidence of semantically conditioned word order variability in at least some languages.
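The cultural transmission dynamic described in this abstract can be illustrated with a minimal iterated-learning simulation. This is an illustrative sketch only, not the authors' model: the regularisation boost, token counts, and parameter values below are assumptions chosen for clarity.

```python
import random

def learn(observations, boost=0.2):
    """Estimate P(SVO) from observed order tokens (1 = SVO, 0 = SOV),
    with a regularisation bias that pushes the estimate toward the
    majority order in the input."""
    p = sum(observations) / len(observations)
    if p > 0.5:
        p = min(1.0, p + boost)
    elif p < 0.5:
        p = max(0.0, p - boost)
    return p

def transmit(p0=0.6, generations=10, n_tokens=20, seed=1):
    """Iterate learning down a chain: each generation produces n_tokens
    orders from its estimate; the next learner observes them and forms
    a (regularised) estimate of its own."""
    rng = random.Random(seed)
    p = p0
    history = [p]
    for _ in range(generations):
        data = [1 if rng.random() < p else 0 for _ in range(n_tokens)]
        p = learn(data)
        history.append(p)
    return history

history = transmit()
```

Under these assumptions, a weak initial ordering preference is amplified over generations by the regularisation bias, which is the qualitative pattern the abstract describes: systematicity increases through repeated learning while remaining anchored to the initial preference.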


Subject(s)
Language , Linguistics , Female , Gestures , Humans , Language Development , Learning , Semantics
4.
J Cogn; 4(1): 38, 2021.
Article in English | MEDLINE | ID: mdl-34514309

ABSTRACT

In the last decade, a growing body of work has convincingly demonstrated that languages embed a certain degree of non-arbitrariness, mostly in the form of iconicity, namely the presence of imagistic links between linguistic form and meaning. Most of this previous work has been limited to assessing the degree (and role) of non-arbitrariness in speech (for spoken languages) or in the manual components of signs (for sign languages). When approached in this way, non-arbitrariness is acknowledged but still considered to have little presence and purpose, showing a diachronic movement towards more arbitrary forms. However, this perspective is limited, as it does not take into account the situated nature of language use in face-to-face interactions, where language comprises categorical components of speech and signs, but also multimodal cues such as prosody, gesture, and eye gaze. We review work concerning the role of context-dependent iconic and indexical cues in language acquisition and processing to demonstrate the pervasiveness of non-arbitrary multimodal cues in language use, and we discuss their function. We then argue that the omnipresence of such cues in online language use supports children and adults in dynamically developing situational models.

6.
Cogn Sci; 45(7): e13014, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34288069

ABSTRACT

Silent gestures consist of complex multi-articulatory movements but are now primarily studied through categorical coding of the referential gesture content. The relation of categorical linguistic content to continuous kinematics is therefore poorly understood. Here, we reanalyzed the video data from a gestural evolution experiment (Motamedi, Schouwstra, Smith, Culbertson, & Kirby, 2019), which showed increases in the systematicity of gesture content over time. We applied computer vision techniques to quantify the kinematics of the original data. Our kinematic analyses demonstrated that gestures become more efficient and less complex in their kinematics over generations of learners. We further detect systematicity of gesture form at the level of gesture kinematic interrelations, which scales directly with the systematicity obtained from semantic coding of the gestures. Thus, from continuous kinematics alone, we can tap into linguistic aspects that were previously approachable only through categorical coding of meaning. Finally, going beyond issues of systematicity, we show how unique gesture kinematic dialects emerged over generations, as isolated chains of participants gradually diverged from other chains over iterations. We thereby conclude that gestures can come to embody the linguistic system at the level of interrelationships between communicative tokens, which should calibrate our theories about form and linguistic content.
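The kind of kinematic quantification this abstract describes, deriving continuous measures from tracked movement, can be sketched as follows. This is a hypothetical example: the keypoint format, frame rate, and the two measures chosen (path length and peak speed) are assumptions for illustration, not the authors' pipeline.

```python
import math

def kinematic_summary(trajectory, fps=30):
    """Compute simple kinematic measures from a 2D keypoint trajectory:
    total path length and peak speed. trajectory is a list of (x, y)
    positions for one articulator (e.g. a wrist), one per video frame."""
    speeds = []
    for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)  # displacement between frames
        speeds.append(dist * fps)            # convert to units per second
    path_length = sum(s / fps for s in speeds)
    return {"path_length": path_length, "peak_speed": max(speeds)}

# A short, decelerating movement: large steps early, small steps late.
traj = [(0, 0), (3, 0), (5, 0), (6, 0), (6.5, 0)]
summary = kinematic_summary(traj)
```

Measures like these, computed per gesture and then compared across gestures, are one way to operationalize "kinematic interrelations" as distances between kinematic profiles rather than between coded meanings.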


Subject(s)
Gestures , Language , Biomechanical Phenomena , Humans , Language Development , Linguistics
7.
Dev Sci; 24(3): e13066, 2021 May.
Article in English | MEDLINE | ID: mdl-33231339

ABSTRACT

A key question in developmental research concerns how children learn associations between words and meanings in their early language development. Given a vast array of possible referents, how does the child know what a word refers to? We contend that onomatopoeia (e.g. knock, meow), where a word's sound evokes the sound properties associated with its meaning, are particularly useful in children's early vocabulary development, offering a link between word and sensory experience not present in arbitrary forms. We suggest that, because onomatopoeia evoke imagery of the referent, children can draw on sensory experience to easily link onomatopoeic words to meaning, both when the referent is present and when it is absent. We use two sources of data: naturalistic observations of English-speaking caregiver-child interactions from 14 to 54 months, to establish whether these words are present early in caregivers' speech to children, and experimental data to test whether English-speaking children can learn from onomatopoeia when it is present. Our results demonstrate that onomatopoeia (a) are most prevalent in early child-directed language and in children's early productions, (b) are learnt more easily by children than non-iconic forms, and (c) are used by caregivers in contexts where they can support communication and facilitate word learning.


Subject(s)
Language Development , Symbolism , Child , Humans , Language , Verbal Learning , Vocabulary
8.
Cognition; 192: 103964, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31302362

ABSTRACT

Recent work on emerging sign languages provides evidence for how key properties of linguistic systems are created. Here we use laboratory experiments to investigate the contribution of two specific mechanisms, interaction and transmission, to the emergence of a manual communication system in silent gesturers. We show that the combined effects of these mechanisms, rather than either alone, maintain communicative efficiency and lead to a gradual increase in regularity and systematic structure. The gestures initially produced by participants are unsystematic and resemble pantomime, but they come to develop key language-like properties similar to those documented in newly emerging sign systems.


Subject(s)
Gestures , Linguistics , Sign Language , Adolescent , Adult , Humans , Young Adult
9.
Behav Brain Sci; 40: e65, 2017 Jan.
Article in English | MEDLINE | ID: mdl-29342521

ABSTRACT

Understanding the relationship between gesture, sign, and speech offers a valuable tool for investigating how language emerges from a nonlinguistic state. We propose that the focus on linguistic status is problematic, and a shift to focus on the processes that shape these systems serves to explain the relationship between them and contributes to the central question of how language evolves.


Subject(s)
Language Development , Sign Language , Gestures , Humans , Language , Speech