1.
Appl Linguist Rev ; 15(1): 309-333, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38221976

ABSTRACT

Hearing parents with deaf children face difficult decisions about what language(s) to use with their child. Sign languages such as American Sign Language (ASL) are fully accessible to deaf children, yet most hearing parents are not proficient in ASL prior to having a deaf child. Parents are often discouraged from learning ASL based in part on an assumption that it will be too difficult, yet there is little evidence supporting this claim. In this mixed-methods study, we surveyed hearing parents of deaf children (n = 100) who had learned ASL to learn more about their experiences. In their survey responses, parents identified a range of resources that supported their ASL learning as well as frequent barriers. Parents identified strongly with belief statements indicating the importance of ASL and affirmed that learning ASL is attainable for hearing parents. We discuss the implications of this study for parents who are considering ASL as a language choice and for the professionals who guide them.

2.
J Speech Lang Hear Res ; 66(4): 1291-1308, 2023 04 12.
Article in English | MEDLINE | ID: mdl-36972338

ABSTRACT

PURPOSE: The purpose of this study is to determine whether and how learning American Sign Language (ASL) is associated with spoken English skills in a sample of ASL-English bilingual deaf and hard of hearing (DHH) children. METHOD: This cross-sectional study of vocabulary size included 56 DHH children between 8 and 60 months of age who were learning both ASL and spoken English and had hearing parents. English and ASL vocabulary were independently assessed via parent report checklists. RESULTS: ASL vocabulary size positively correlated with spoken English vocabulary size. Spoken English vocabulary sizes in the ASL-English bilingual DHH children in the present sample were comparable to those in previous reports of monolingual DHH children who were learning only English. ASL-English bilingual DHH children had total vocabularies (combining ASL and English) that were equivalent to same-age hearing monolingual children. Children with large ASL vocabularies were more likely to have spoken English vocabularies in the average range based on norms for hearing monolingual children. CONCLUSIONS: Contrary to predictions often cited in the literature, acquisition of sign language does not harm spoken vocabulary acquisition. This retrospective, correlational study cannot determine whether there is a causal relationship between sign language and spoken language vocabulary acquisition, but if a causal relationship exists, the evidence here suggests that the effect would be positive. Bilingual DHH children have age-expected vocabularies when considering the entirety of their language skills. We found no evidence to support recommendations that families with DHH children avoid learning sign language. Rather, our findings show that children with early ASL exposure can develop age-appropriate vocabulary skills in both ASL and spoken English.
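A minimal sketch of the kind of vocabulary-size correlation reported above, assuming invented parent-report counts (not the study's data):

```python
# Correlate ASL and spoken English vocabulary sizes across children.
# Values are invented for illustration only.
from scipy import stats

asl_vocab = [45, 120, 300, 210, 80, 150]      # ASL signs reported per child
english_vocab = [30, 100, 250, 180, 60, 140]  # spoken English words per child

r, p = stats.pearsonr(asl_vocab, english_vocab)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```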


Subject(s)
Deafness , Sign Language , Child , Humans , Retrospective Studies , Cross-Sectional Studies , Language , Vocabulary , Language Development
3.
Front Psychol ; 13: 920729, 2022.
Article in English | MEDLINE | ID: mdl-36092032

ABSTRACT

Iconic signs are overrepresented in the vocabularies of young deaf children, but it is unclear why. It is possible that iconic signs are easier for children to learn, but it is also possible that adults use iconic signs in child-directed signing in ways that make them more learnable, either by using them more often than less iconic signs or by lengthening them. We analyzed videos of naturalistic play sessions between parents and deaf children (n = 24 dyads) aged 9-60 months. To determine whether iconic signs are overrepresented during child-directed signing, we compared the iconicity of actual parent productions to the iconicity of simulated vocabularies designed to estimate chance levels of iconicity. For almost all dyads, parent sign types and tokens were not more iconic than the simulated vocabularies, suggesting that parents do not select more iconic signs during child-directed signing. To determine whether iconic signs are more likely to be lengthened, we ran a linear regression predicting sign duration, and found an interaction between age and iconicity: while parents of younger children produced non-iconic and iconic signs with similar durations, parents of older children produced non-iconic signs with shorter durations than iconic signs. Thus, parents sign more quickly with older children than younger children, and iconic signs appear to resist that reduction in sign length. It is possible that iconic signs are perceptually available longer, and their availability is a candidate hypothesis as to why iconic signs are overrepresented in children's vocabularies.
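A sketch of the two analyses described above, under invented data: a chance-iconicity baseline built from simulated vocabularies, and a regression testing an age-by-iconicity interaction on sign duration. All ratings, sample sizes, and durations below are illustrative assumptions, not the study's values.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# (1) Chance baseline: compare the mean iconicity of one parent's sign types
# to same-size vocabularies drawn at random from the full lexicon's ratings.
lexicon_iconicity = rng.uniform(1, 7, size=500)  # invented 1-7 iconicity ratings
parent_signs = rng.choice(lexicon_iconicity, size=80, replace=False)
observed = parent_signs.mean()
simulated = np.array([rng.choice(lexicon_iconicity, size=80, replace=False).mean()
                      for _ in range(10_000)])
p_chance = (simulated >= observed).mean()  # share of simulations at least as iconic

# (2) Linear regression predicting sign duration from iconicity, child age,
# and their interaction (the term of interest in the abstract).
df = pd.DataFrame({"duration_ms": rng.normal(600, 100, size=200),
                   "iconicity": rng.uniform(1, 7, size=200),
                   "age_months": rng.uniform(9, 60, size=200)})
model = smf.ols("duration_ms ~ iconicity * age_months", data=df).fit()
print(f"p(chance) = {p_chance:.3f}")
print(model.params)
```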

4.
Dev Sci ; 25(3): e13166, 2022 05.
Article in English | MEDLINE | ID: mdl-34355837

ABSTRACT

Word learning in young children requires coordinated attention between language input and the referent object. Current accounts of word learning are based on spoken language, where the association between language and objects occurs through simultaneous and multimodal perception. In contrast, deaf children acquiring American Sign Language (ASL) perceive both linguistic and non-linguistic information through the visual mode. In order to coordinate attention to language input and its referents, deaf children must allocate visual attention optimally between objects and signs. We conducted two eye-tracking experiments to investigate how young deaf children allocate attention and process referential cues in order to fast-map novel signs to novel objects. Participants were deaf children learning ASL between the ages of 17 and 71 months. In Experiment 1, participants (n = 30) were presented with a novel object and a novel sign, along with a referential cue that occurred either before or after the sign label. In Experiment 2, a new group of participants (n = 32) were presented with two novel objects and a novel sign, so that the referential cue was critical for identifying the target object. Across both experiments, participants showed evidence for fast-mapping the signs regardless of the timing of the referential cue. Individual differences in children's allocation of attention during exposure were correlated with their ability to fast-map the novel signs at test. This study provides first evidence for fast-mapping in sign language, and contributes to theoretical accounts of how word learning develops when all input occurs in the visual modality.
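The dependent measure in eye-tracking designs like this is typically the proportion of object-directed gaze that lands on the target within an analysis window. A small illustrative helper, with invented area-of-interest codes and sample times:

```python
def prop_target_looking(samples, window_start, window_end):
    """samples: (time_ms, aoi) pairs; aoi is 'target', 'competitor', or 'other'.
    Returns target looks as a proportion of object-directed looks in the window."""
    in_window = [aoi for t, aoi in samples if window_start <= t < window_end]
    on_objects = [aoi for aoi in in_window if aoi in ("target", "competitor")]
    if not on_objects:
        return float("nan")
    return sum(aoi == "target" for aoi in on_objects) / len(on_objects)

# Invented gaze samples (ms from test-trial onset):
gaze = [(100, "other"), (350, "target"), (600, "target"), (850, "competitor")]
print(prop_target_looking(gaze, 300, 2000))  # 2 of 3 object looks -> 0.67
```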


Subject(s)
Learning , Sign Language , Child , Child, Preschool , Humans , Infant , Language Development , Linguistics , Verbal Learning
5.
J Mem Lang ; 126, 2022 Oct.
Article in English | MEDLINE | ID: mdl-38665819

ABSTRACT

Previous research has pointed to communicative efficiency as a possible constraint on language structure. Here we investigated adjective position in American Sign Language (ASL), a language with relatively flexible word order, to test the incremental efficiency hypothesis, according to which both speakers and signers try to produce efficient referential expressions that are sensitive to the word order of their languages. The results of three experiments using a standard referential communication task confirmed that deaf ASL signers tend to produce absolute adjectives, such as color or material, in prenominal position, while scalar adjectives tend to be produced in prenominal position when expressed as lexical signs, but in postnominal position when expressed as classifiers. Age of ASL exposure also had an effect on referential choice, with early-exposed signers in some cases producing more classifiers than late-exposed signers. Overall, our results suggest that linguistic, pragmatic, and developmental factors affect referential choice in ASL, supporting the hypothesis that communicative efficiency is an important factor in shaping language structure and use.

6.
Cogn Sci ; 45(12): e13061, 2021 12.
Article in English | MEDLINE | ID: mdl-34861057

ABSTRACT

Across languages, children map words to meaning with great efficiency, despite a seemingly unconstrained space of potential mappings. The literature on how children do this is primarily limited to spoken language. This leaves a gap in our understanding of sign language acquisition, because several of the hypothesized mechanisms that children use are visual (e.g., visual attention to the referent), and sign languages are perceived in the visual modality. Here, we used the Human Simulation Paradigm in American Sign Language (ASL) to determine potential cues to word learning. Sign-naïve adult participants viewed video clips of parent-child interactions in ASL, and at a designated point, had to guess what ASL sign the parent produced. Across two studies, we demonstrate that referential clarity in ASL interactions is characterized by access to information about word class and referent presence (for verbs), similarly to spoken language. Unlike spoken language, iconicity is a cue to word meaning in ASL, although this is not always a fruitful cue. We also present evidence that verbs are highlighted well in the input, relative to spoken English. The results shed light on both similarities and differences in the information that learners may have access to in acquiring signed versus spoken languages.


Subject(s)
Language Development , Sign Language , Adult , Humans , Language , Parents , Verbal Learning
7.
J Pediatr ; 232: 229-236, 2021 05.
Article in English | MEDLINE | ID: mdl-33482219

ABSTRACT

OBJECTIVE: To examine whether children who are deaf or hard of hearing who have hearing parents can develop age-level vocabulary skills when they have early exposure to a sign language. STUDY DESIGN: This cross-sectional study of vocabulary size included 78 children who are deaf or hard of hearing between 8 and 68 months of age who were learning American Sign Language (ASL) and had hearing parents. Children who were exposed to ASL before 6 months of age or between 6 and 36 months of age were compared with a reference sample of 104 deaf and hard of hearing children who have parents who are deaf and sign. RESULTS: Deaf and hard of hearing children with hearing parents who were exposed to ASL in the first 6 months of life had age-expected receptive and expressive vocabulary growth. Children who had a short delay in ASL exposure had relatively smaller expressive but not receptive vocabulary sizes, and made rapid gains. CONCLUSIONS: Although hearing parents generally learn ASL alongside their children who are deaf, their children can develop age-expected vocabulary skills when exposed to ASL during infancy. Children who are deaf with hearing parents can predictably and consistently develop age-level vocabularies at rates similar to native signers; early vocabulary skills are robust predictors of development across domains.


Subject(s)
Child Language , Deafness/psychology , Sign Language , Vocabulary , Child, Preschool , Cross-Sectional Studies , Female , Hearing , Humans , Infant , Linear Models , Male , Parents
8.
J Exp Psychol Hum Percept Perform ; 46(11): 1397-1410, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32940493

ABSTRACT

Deaf signers exhibit an enhanced ability to process information in their peripheral visual field, particularly the motion of dots or orientation of lines. Does their experience processing sign language, which involves identifying meaningful visual forms across the visual field, contribute to this enhancement? We tested whether deaf signers recruit language knowledge to facilitate peripheral identification through a sign superiority effect (i.e., better handshape discrimination in a sign than a pseudosign) and whether such a superiority effect might be responsible for perceptual enhancements relative to hearing individuals (i.e., a decrease in the effect of eccentricity on perceptual identification). Deaf signers and hearing signers or nonsigners identified the handshape presented within a static ASL fingerspelling letter (Experiment 1), fingerspelled sequence (Experiment 2), or sign or pseudosign (Experiment 3) presented in the near or far periphery. Accuracy on all tasks was higher for deaf signers than hearing nonsigning participants and was higher in the near than the far periphery. Across experiments, there were different patterns of interactions between hearing status and eccentricity depending on the type of stimulus; deaf signers showed an effect of eccentricity for static fingerspelled letters, fingerspelled sequences, and pseudosigns but not for ASL signs. In contrast, hearing nonsigners showed an effect of eccentricity for all stimuli. Thus, deaf signers recruit lexical knowledge to facilitate peripheral perceptual identification, and this perceptual enhancement may derive from their extensive experience processing visual linguistic information in the periphery during sign comprehension. (PsycInfo Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Deafness/physiopathology , Motion Perception/physiology , Pattern Recognition, Visual/physiology , Psycholinguistics , Sign Language , Visual Fields/physiology , Adult , Humans
9.
Behav Res Methods ; 52(5): 2071-2084, 2020 10.
Article in English | MEDLINE | ID: mdl-32180180

ABSTRACT

Vocabulary is a critical early marker of language development. The MacArthur-Bates Communicative Development Inventory has been adapted to dozens of languages, and provides a bird's-eye view of children's early vocabularies, which can be informative for both research and clinical purposes. We present an update to the American Sign Language Communicative Development Inventory (the ASL-CDI 2.0, https://www.aslcdi.org), a normed assessment of early ASL vocabulary that can be widely administered online by individuals with no formal training in sign language linguistics. The ASL-CDI 2.0 includes receptive and expressive vocabulary and a Gestures and Phrases section; it also introduces an online interface that presents ASL signs as videos. We validated the ASL-CDI 2.0 with expressive and receptive in-person tasks administered to a subset of participants. The norming sample presented here consists of 120 deaf children (ages 9 to 73 months) with deaf parents. We present an analysis of the measurement properties of the ASL-CDI 2.0. Vocabulary increases with age, as expected. We see an early noun bias that shifts with age, and a lag between receptive and expressive vocabulary. We present these findings with indications of how the ASL-CDI 2.0 may be used in a range of clinical and research settings.
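CDI-style instruments score each checklist item for comprehension and production: receptive vocabulary counts items the child understands, expressive vocabulary counts items the child also produces. A toy scoring sketch, with invented items and response labels:

```python
# One child's checklist responses (items and labels are invented).
responses = {
    "MILK": "understands and produces",
    "DOG": "understands",
    "BALL": "understands and produces",
    "TREE": "neither",
}
receptive = sum(s != "neither" for s in responses.values())
expressive = sum(s == "understands and produces" for s in responses.values())
print(f"receptive = {receptive}, expressive = {expressive}")  # 3 and 2
```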


Subject(s)
Language Development , Sign Language , Vocabulary , Child , Child Language , Child, Preschool , Humans , Infant , Language , Language Tests , United States
10.
J Exp Child Psychol ; 193: 104793, 2020 05.
Article in English | MEDLINE | ID: mdl-31992441

ABSTRACT

In laboratory settings, children are able to learn new words from overheard interactions, yet in naturalistic contexts this is often not the case. We investigated the degree to which joint attention within the overheard interaction facilitates overheard learning. In the study, twenty 2-year-olds were tested on novel words they had been exposed to in two different overhearing contexts: one in which both interlocutors were attending to the interaction and one in which one interlocutor was not attending. Participants learned the new words only in the former condition, indicating that they did not learn when joint attention was absent. This finding demonstrates that not all overheard interactions are equally good for word learning; attentive interlocutors are crucial when learning words through overhearing.


Subject(s)
Attention/physiology , Language Development , Learning/physiology , Speech Perception/physiology , Child, Preschool , Female , Humans , Male
11.
Lang Learn Dev ; 16(4): 351-363, 2020.
Article in English | MEDLINE | ID: mdl-33505227

ABSTRACT

Parent input during interaction with young children varies across languages and contexts with regard to the relative number of words from different lexical categories, particularly nouns and verbs. Previous work has focused on spoken language input. Little is known about the lexical composition of parent input in American Sign Language (ASL). We investigated parent input in ASL in a sample of deaf mothers interacting with their young deaf children (n = 7) in a free play setting. Children ranged in age from 21 to 39 months (M = 31 months). A 20-minute portion of each interaction was transcribed and coded for a range of linguistic features in maternal input including utterance length, sign types and tokens, proportion of nouns and verbs, and functions of points. We found evidence for a significant verb bias in maternal input; mothers produced more verb tokens and unique verb types than any other word class. Verbs were produced more than twice as often as nouns (36% vs. 17% of all tokens) and appeared in a higher proportion of utterances than nouns (57% vs. 31% of all utterances). Points were frequent in the input, often serving as pronouns replacing common or proper nouns. Maternal noun and verb tokens increased in frequency with child age and vocabulary. These findings provide an initial step in understanding the lexical properties of maternal input during free play interactions in ASL.
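Token proportions of the kind reported above can be tallied directly from a coded transcript. A toy sketch, assuming invented sign tokens and category codes:

```python
from collections import Counter

# Per-token lexical-category codes for one transcript (invented).
coded_tokens = ["GO", "WANT", "BALL", "EAT", "POINT", "MORE", "EAT", "DOG"]
category = {"GO": "verb", "WANT": "verb", "EAT": "verb", "BALL": "noun",
            "DOG": "noun", "POINT": "point", "MORE": "other"}

token_counts = Counter(category[t] for t in coded_tokens)
total = sum(token_counts.values())
print({c: f"{n / total:.0%}" for c, n in token_counts.items()})
# e.g. {'verb': '50%', 'noun': '25%', 'point': '12%', 'other': '12%'}
```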

12.
Lang Learn ; 70(4): 935-973, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33510545

ABSTRACT

Children learning language efficiently process single words, and activate semantic, phonological, and other features of words during recognition. We investigated lexical recognition in deaf children acquiring American Sign Language (ASL) to determine how perceiving language in the visual-spatial modality affects lexical recognition. Twenty native- or early-exposed signing deaf children (ages 4 to 8 years) participated in a visual world eye-tracking study. Children were presented with a single ASL sign, a target picture, and three competitor pictures that varied in their phonological and semantic relationship to the target. Children shifted gaze to the target picture shortly after sign offset. Children showed robust evidence for activation of semantic but not phonological features of signs; however, in their behavioral responses, children were most susceptible to phonological competitors. Results demonstrate that single word recognition in ASL is largely parallel to spoken language recognition among children who are developing a mature lexicon.

13.
J Cult Cogn Sci ; 3(2): 217-234, 2019 Nov.
Article in English | MEDLINE | ID: mdl-32405616

ABSTRACT

When processing spoken language sentences, listeners continuously make and revise predictions about the upcoming linguistic signal. In contrast, during comprehension of American Sign Language (ASL), signers must simultaneously attend to the unfolding linguistic signal and the surrounding scene via the visual modality. This may affect how signers activate potential lexical candidates and allocate visual attention as a sentence unfolds. To determine how signers resolve referential ambiguity during real-time comprehension of ASL adjectives and nouns, we presented deaf adults (n = 18, 19-61 years) and deaf children (n = 20, 4-8 years) with videos of ASL sentences in a visual world paradigm. Sentences had either an adjective-noun ("SEE YELLOW WHAT? FLOWER") or a noun-adjective ("SEE FLOWER WHICH? YELLOW") structure. The degree of ambiguity in the visual scene was manipulated at the adjective and noun levels (i.e., including one or more yellow items and one or more flowers in the visual array). We investigated effects of ambiguity and word order on target looking at early and late points in the sentence. Analysis revealed that adults and children made anticipatory looks to a target when it could be identified early in the sentence. Further, signers looked more to potential lexical candidates than to unrelated competitors in the early window, and more to matched than unrelated competitors in the late window. Children's gaze patterns largely aligned with those of adults with some divergence. Together, these findings suggest that signers allocate referential attention strategically based on the amount and type of ambiguity at different points in the sentence when processing adjectives and nouns in ASL.

14.
Lang Cogn Neurosci ; 33(4): 387-401, 2018.
Article in English | MEDLINE | ID: mdl-29687014

ABSTRACT

Prediction during sign language comprehension may enable signers to integrate linguistic and non-linguistic information within the visual modality. In two eye-tracking experiments, we investigated American Sign Language (ASL) semantic prediction in deaf adults and children (aged 4-8 years). Participants viewed ASL sentences in a visual world paradigm in which the sentence-initial verb was either neutral or constrained relative to the sentence-final target noun. Adults and children made anticipatory looks to the target picture before the onset of the target noun in the constrained condition only, showing evidence for semantic prediction. Crucially, signers alternated gaze between the stimulus sign and the target picture only when the sentential object could be predicted from the verb. Signers therefore engage in prediction by optimizing visual attention between divided linguistic and referential signals. These patterns suggest that prediction is a modality-independent process, and theoretical implications are discussed.
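The anticipatory-looking effect described above is commonly tested by comparing target-looking proportions in the pre-noun window across conditions. An illustrative paired comparison, with invented per-participant values:

```python
from scipy import stats

# Proportion of target looks before noun onset, per participant (invented).
constrained = [0.62, 0.55, 0.70, 0.48, 0.66]
neutral = [0.41, 0.38, 0.52, 0.35, 0.44]

t, p = stats.ttest_rel(constrained, neutral)
print(f"paired t = {t:.2f}, p = {p:.3f}")
```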

15.
J Exp Psychol Learn Mem Cogn ; 42(12): 2002-2006, 2016 12.
Article in English | MEDLINE | ID: mdl-27929337

ABSTRACT

In this reply to Salverda (2016), we address a critique of the claims made in our recent study of real-time processing of American Sign Language (ASL) signs using a novel visual world eye-tracking paradigm (Lieberman, Borovsky, Hatrak, & Mayberry, 2015). Salverda asserts that our data do not support our conclusion that native signers and late-learning signers show variable patterns of activation in the presence of phonological competitors. We provide a logical rationale for our study design and present a reanalysis of our data using a modified time window, providing additional evidence for our claim. We maintain that target fixation patterns provide an important window into real-time processing of sign language. We conclude that the use of eye-tracking methods to study real-time processing in a visually perceived language such as ASL is a promising avenue for further exploration. (PsycINFO Database Record (c) 2016 APA, all rights reserved).


Subject(s)
Sign Language , Time Perception , Humans , Language , Learning , Linguistics , United States
16.
J Educ (Boston) ; 196(1): 9-18, 2016 Jan.
Article in English | MEDLINE | ID: mdl-32782418

ABSTRACT

Deaf children have traditionally been perceived and educated as a special needs population. Over the past few decades, several factors have converged to enable a shift in perspective to one in which deaf children are viewed as a cultural and linguistic minority, and the education of deaf children is approached from a bilingual framework. In this article, we present the historical context in which such shifts in perspective have taken place and describe the linguistic, social, and cultural factors that shape a bilingual approach to deaf education. We further discuss the implications of a linguistic and cultural minority perspective of deaf children on language development, teacher preparation, and educational policy.

17.
Appl Psycholinguist ; 36(4): 855-873, 2015 Jul 01.
Article in English | MEDLINE | ID: mdl-26166917

ABSTRACT

Visual attention is a necessary prerequisite to successful communication in sign language. The current study investigated the development of attention-getting skills in deaf native-signing children during interactions with peers and teachers. Seven deaf children (aged 21-39 months) and five adults were videotaped during classroom activities for approximately 30 hr. Interactions were analyzed in depth to determine how children obtained and maintained attention. Contrary to previous reports, children were found to possess a high level of communicative competence from an early age. Analysis of peer interactions revealed that children used a range of behaviors to obtain attention with peers, including taps, waves, objects, and signs. Initiations were successful approximately 65% of the time. Children followed up failed initiation attempts by repeating the initiation, using a new initiation, or terminating the interaction. Older children engaged in longer and more complex interactions than younger children. Children's early exposure to and proficiency in American Sign Language is proposed as a likely mechanism that facilitated their communicative competence.

18.
J Exp Psychol Learn Mem Cogn ; 41(4): 1130-9, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25528091

ABSTRACT

Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, the age of onset of first language acquisition and the quality and quantity of linguistic input for deaf individuals are highly heterogeneous, which is rarely the case for hearing learners of spoken languages. Little is known about how these modality and developmental factors affect real-time lexical processing. In this study, we ask how these factors impact real-time recognition of American Sign Language (ASL) signs using a novel adaptation of the visual world paradigm in deaf adults who learned sign from birth (Experiment 1), and in deaf adults who were late learners of ASL (Experiment 2). Results revealed that although both groups of signers demonstrated rapid, incremental processing of ASL signs, only native signers demonstrated early and robust activation of sublexical features of signs during real-time recognition. Our findings suggest that the organization of the mental lexicon into units of both form and meaning is a product of infant language learning and not the sensory and motor modality through which the linguistic signal is sent and received.


Subject(s)
Language Development , Pattern Recognition, Visual , Recognition, Psychology , Sign Language , Adolescent , Adult , Age Factors , Deafness , Eye Movement Measurements , Eye Movements , Female , Humans , Language Tests , Male , Middle Aged , Time Factors , Young Adult
19.
Lang Learn Dev ; 10(1)2014 Jan 01.
Article in English | MEDLINE | ID: mdl-24363628

ABSTRACT

Joint attention between hearing children and their caregivers is typically achieved when the adult provides spoken, auditory linguistic input that relates to the child's current visual focus of attention. Deaf children interacting through sign language must learn to continually switch visual attention between people and objects in order to achieve the classic joint attention characteristic of young hearing children. The current study investigated the mechanisms used by sign language dyads to achieve joint attention within a single modality. Four deaf children, ages 1;9 to 3;7, were observed during naturalistic interactions with their deaf mothers. The children engaged in frequent and meaningful gaze shifts, and were highly sensitive to a range of maternal cues. Children's control of gaze in this sample was largely developed by age two. The gaze patterns observed in deaf children were not observed in a control group of hearing children, indicating that modality-specific patterns of joint attention behaviors emerge when the language of parent-infant interaction occurs in the visual mode.
