1.
Front Psychol; 13: 806471, 2022.
Article in English | MEDLINE | ID: mdl-35369213

ABSTRACT

Over the history of research on sign languages, much scholarship has highlighted the pervasive presence of signs whose forms relate to their meaning in a non-arbitrary way. The presence of these forms suggests that sign language vocabularies are shaped, at least in part, by a pressure toward maintaining a link between form and meaning in wordforms. We use a vector space approach to test the ways this pressure might shape sign language vocabularies, examining how non-arbitrary forms are distributed within the lexicons of two unrelated sign languages. Vector space models situate the representations of words in a multi-dimensional space where the distance between words indexes their relatedness in meaning. Using phonological information from the vocabularies of American Sign Language (ASL) and British Sign Language (BSL), we tested whether increased similarity between the semantic representations of signs corresponds to increased phonological similarity. The computational analysis showed a significant positive relationship between phonological form and semantic meaning for both sign languages, which was strongest when the sign language lexicons were organized into clusters of semantically related signs. The analysis also revealed variation in the strength of form-meaning relationships across phonological parameters within each sign language, as well as between the two languages. This shows that while the connection between form and meaning is not entirely language-specific, there are cross-linguistic differences in how these mappings are realized for signs in each language, suggesting that arbitrariness as well as cognitive or cultural influences may also shape these patterns. The results of this analysis not only contribute to our understanding of the distribution of non-arbitrariness in sign language lexicons, but also demonstrate a new way that computational modeling can be harnessed in lexicon-wide investigations of sign languages.
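
To make the vector space method concrete, here is a minimal sketch of a lexicon-wide form-meaning test, assuming signs encoded as binary phonological feature vectors and stand-in semantic embeddings (both random placeholders here); the feature inventory, embedding source, and the simple Pearson test are illustrative assumptions, not the authors' pipeline, which additionally clusters the lexicon into semantically related neighborhoods.

```python
# Sketch: does similarity in meaning track similarity in form across a lexicon?
# Placeholders stand in for real data; the paper's actual pipeline (including
# its semantic clustering step) is not reproduced here.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_signs, n_phon_feats, n_sem_dims = 200, 30, 50

phon = rng.integers(0, 2, size=(n_signs, n_phon_feats))  # binary phonological features
sem = rng.normal(size=(n_signs, n_sem_dims))             # semantic embedding vectors

# One similarity value per pair of signs (condensed upper-triangle form).
phon_sim = 1 - pdist(phon, metric="jaccard")  # form similarity
sem_sim = 1 - pdist(sem, metric="cosine")     # meaning similarity

# A positive correlation would indicate that semantically similar signs tend to
# be phonologically similar; real analyses typically use a Mantel permutation
# test, since pairwise similarities are not independent observations.
r, p = pearsonr(phon_sim, sem_sim)
print(f"form-meaning correlation: r = {r:.3f}, p = {p:.3g}")
```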

2.
Sci Rep; 11(1): 20001, 2021 Oct 8.
Article in English | MEDLINE | ID: mdl-34625613

ABSTRACT

Infants readily extract linguistic rules from speech. Here, we ask whether this advantage extends to linguistic stimuli that do not rely on the spoken modality. To address this question, we first examine whether infants can differentially learn rules from linguistic signs. We show that, despite having no previous experience with a sign language, six-month-old infants can extract the reduplicative rule (AA) from dynamic linguistic signs, and that the neural response to reduplicative linguistic signs differs from the response to reduplicative visual controls matched for the dynamic spatiotemporal properties of signs. We next demonstrate that the brain response to reduplicative signs is similar to the response to reduplicative speech stimuli. Rule learning, then, apparently depends on the linguistic status of the stimulus, not its sensory modality. These results suggest that infants are language-ready: they possess a powerful rule system that is differentially engaged by all linguistic stimuli, speech or sign.


Subjects
Language Development, Brain/physiology, Humans, Infant, Language, Learning, Sign Language, Speech/physiology
3.
Psychol Sci; 32(8): 1227-1237, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34240647

ABSTRACT

When we use our hands to estimate the length of a stick in the Müller-Lyer illusion, we are highly susceptible to the illusion. But when we prepare to act on sticks under the same conditions, we are significantly less susceptible. Here, we asked whether people are susceptible to illusion when they use their hands not to act on objects but to describe them in spontaneous co-speech gestures or conventional sign languages of the deaf. Thirty-two English speakers and 13 American Sign Language signers used their hands to act on, estimate the length of, and describe sticks eliciting the Müller-Lyer illusion. For both gesture and sign, the magnitude of illusion in the description task was smaller than the magnitude of illusion in the estimation task and not different from the magnitude of illusion in the action task. The mechanisms responsible for producing gesture in speech and sign thus appear to operate not on percepts involved in estimation but on percepts derived from the way we act on objects.


Subjects
Illusions, Gestures, Hand, Humans, Sign Language, Speech
4.
Cognition; 215: 104845, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34273677

ABSTRACT

The link between language and cognition is unique to our species and emerges early in infancy. Here, we provide the first evidence that this precocious language-cognition link is not limited to spoken language, but is instead sufficiently broad to include sign language, a language presented in the visual modality. Four- to six-month-old hearing infants, never before exposed to sign language, were familiarized to a series of category exemplars, each presented by a woman who either signed in American Sign Language (ASL) while pointing and gazing toward the objects, or pointed and gazed without language (control). At test, infants viewed two images: one, a new member of the now-familiar category; and the other, a member of an entirely new category. Four-month-old infants who observed ASL distinguished between the two test objects, indicating that they had successfully formed the object category; they were as successful as age-mates who listened to their native (spoken) language. Moreover, it was specifically the linguistic elements of sign language that drove this facilitative effect: infants in the control condition, who observed the woman pointing and gazing without language, failed to form object categories. Finally, the cognitive advantages of observing ASL quickly narrow in hearing infants: by 5 to 6 months, watching ASL no longer supports categorization, although listening to their native spoken language continues to do so. Together, these findings illuminate the breadth of infants' early link between language and cognition and offer insight into how it unfolds.


Subjects
Language, Sign Language, Auditory Perception, Female, Hearing, Humans, Infant, Language Development
5.
Cognition; 203: 104332, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32559513

ABSTRACT

Some concepts are more essential for human communication than others. In this paper, we investigate whether the concept of agent-backgrounding is sufficiently important for communication that linguistic structures for encoding it are present in young sign languages. Agent-backgrounding constructions reduce the prominence of the agent; the English passive sentence "a book was knocked over" is an example. Although these constructions are widely attested cross-linguistically, there is little prior research on the emergence of such devices in new languages. Here we studied how agent-backgrounding constructions emerge in Nicaraguan Sign Language (NSL) and adult homesign systems. We found that NSL signers have innovated both lexical and morphological devices for expressing agent-backgrounding, indicating that conveying a flexible perspective on events has deep communicative value. At the same time, agent-backgrounding devices did not emerge at the same time as agentive devices. This result suggests that agent-backgrounding does not have the same core cognitive status as agency. The emergence of agent-backgrounding morphology appears to depend on receiving as input a linguistic system in which devices for expressing agency are already well-established.


Subjects
Linguistics, Sign Language, Adult, Communication, Humans, Language, Language Development
6.
Cogn Sci; 44(1): e12809, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31960502

ABSTRACT

Does knowledge of language transfer across language modalities? For example, can speakers who have had no sign language experience spontaneously project grammatical principles of English to American Sign Language (ASL) signs? To address this question, here, we explore a grammatical illusion. Using spoken language, we first show that a single word with doubling (e.g., trafraf) can elicit conflicting linguistic responses, depending on the level of linguistic analysis (phonology vs. morphology). We next show that speakers with no command of a sign language extend these same principles to novel ASL signs. Remarkably, the morphological analysis of ASL signs depends on the morphology of participants' spoken language. Speakers of Malayalam (a language with rich reduplicative morphology) prefer XX signs when doubling signals morphological plurality, whereas no such preference is seen in speakers of Mandarin (a language with no productive plural morphology). Our conclusions open up the possibility that some linguistic principles are amodal and abstract.


Subjects
Language, Speech, Humans, Knowledge, Linguistics, Sign Language
7.
Front Psychol; 11: 579992, 2020.
Article in English | MEDLINE | ID: mdl-33519599

ABSTRACT

In this article, we analyze the grammatical incorporation of demonstratives in a tactile language emerging in communities of DeafBlind signers in the US who communicate via reciprocal, tactile channels, a practice known as "protactile." In the first part of the paper, we report on a synchronic analysis of recent data, identifying four types of "taps," which have taken on different functions in protactile language and communication. In the second part of the paper, we report on a diachronic analysis of data collected over the past 8 years. This analysis reveals the emergence of a new kind of "propriotactic" tap, which has been co-opted by the emerging phonological system of protactile language. We link the emergence of this unit to demonstrative taps and backchanneling taps, both of which emerged earlier. We show how these forms are all undergirded by an attention-modulation function, more or less backgrounded, and operating across different semiotic systems. In doing so, we contribute not only to what is known about demonstratives in tactile languages, but also to what is known about the role of demonstratives in the emergence of new languages.

8.
Cognition; 180: 279-283, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30103208

ABSTRACT

Across languages, certain linguistic forms are systematically preferred to others (e.g. bla > lba). But whether these preferences concern abstract constraints on language structure, generally, or whether these restrictions only apply to speech is unknown. To address this question, here we ask whether linguistic constraints previously identified in spoken languages apply to signs. One such constraint, ANCHORING, restricts the structure of reduplicated forms (AB → ABB, not ABA). In two experiments, native ASL signers rated the acceptability of novel reduplicated forms that either violated ANCHORING (ABA) or obeyed it (ABB). In Experiment 1, signers made a forced choice between ABB and ABA forms; in Experiment 2, signers rated signs individually. Results showed that signers prefer signs that obey ANCHORING over ANCHORING violations (ABB > ABA). These findings show for the first time that ANCHORING is operative in ASL signers. These results suggest that some linguistic constraints are amodal, applying to both speech and signs.
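
As a toy illustration of the constraint (the string-based syllable representation is an assumption for exposition, not the paper's encoding of ASL forms), a reduplicated trisyllable can be checked for ANCHORING like this:

```python
# Toy check: ANCHORING requires reduplication to copy the final syllable
# (AB -> ABB); copying the initial syllable (AB -> ABA) violates it.
# Syllables are modeled as plain strings, a deliberate simplification.
def obeys_anchoring(syllables):
    """Return True for ABB-type trisyllables, False for ABA-type ones."""
    a, b, c = syllables
    return b == c and a != b  # final syllable copied, initial one not

print(obeys_anchoring(["sla", "flaf", "flaf"]))  # True: ABB obeys ANCHORING
print(obeys_anchoring(["flaf", "sla", "flaf"]))  # False: ABA violates it
```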


Subjects
Photic Stimulation/methods, Sign Language, Humans
9.
Front Psychol; 9: 770, 2018.
Article in English | MEDLINE | ID: mdl-29904363

ABSTRACT

In signed and spoken language sentences, imperative mood and the corresponding speech acts, such as command, permission, or advice, can be distinguished by morphosyntactic structures, but also solely by prosodic cues, which are the focus of this paper. These cues can express paralinguistic mental states or grammatical meaning, and we show that in American Sign Language (ASL), they also exhibit the function, scope, and alignment of prosodic, linguistic elements of sign languages. The production and comprehension of prosodic facial expressions and temporal patterns can therefore shed light on how cues are grammaticalized in sign languages. They can also be informative about the formal semantic and pragmatic properties of imperative types, not only in ASL but also more broadly. This paper includes three studies: one of production (Study 1) and two of comprehension (Studies 2 and 3). In Study 1, six prosodic cues are analyzed in production: temporal cues of sign and hold duration, and non-manual cues including tilts of the head, head nods, widening of the eyes, and presence of mouthings. Results of Study 1 show that neutral sentences and commands are well distinguished from each other and from other imperative speech acts via these prosodic cues alone; there is more limited differentiation among explanation, permission, and advice. The comprehension of these five speech acts is investigated in Deaf ASL signers in Study 2, and in three additional groups in Study 3: Deaf signers of German Sign Language (DGS), hearing non-signers from the United States, and hearing non-signers from Germany. Results of Studies 2 and 3 show that the ASL group performs significantly better than the other three groups and that all groups perform above chance for all meaning types in comprehension. Language-specific knowledge, therefore, has a significant effect on identifying imperatives based on targeted cues. Command has the most cues associated with it and is the most accurately identified imperative type across groups, indicating, we suggest, its special status as the strongest imperative in terms of addressing the speaker's goals. Our findings support the view that the cues are accessible in their content across groups, but that their language-particular combinatorial possibilities and distribution within sentences give ASL signers an advantage in comprehension and attest to the cues' prosodic status.

10.
Annu Rev Linguist; 3: 363-388, 2017.
Article in English | MEDLINE | ID: mdl-29034268

ABSTRACT

Language emergence describes moments in historical time when nonlinguistic systems become linguistic. Because language can be invented de novo in the manual modality, this modality offers insight into the emergence of language in ways that the oral modality cannot. Here we focus on homesign, gestures developed by deaf individuals who cannot acquire spoken language and have not been exposed to sign language. We contrast homesign with (a) gestures that hearing individuals produce when they speak, as these cospeech gestures are a potential source of input to homesigners, and (b) established sign languages, as these codified systems display the linguistic structure that homesign has the potential to assume. We find that the manual modality takes on linguistic properties, even in the hands of a child not exposed to a language model. But it grows into full-blown language only with the support of a community that transmits the system to the next generation.

11.
Behav Brain Sci; 40: e46, 2017 Jan.
Article in English | MEDLINE | ID: mdl-26434499

ABSTRACT

How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.


Subjects
Gestures, Sign Language, Speech/classification, Humans, Language Development, Learning/physiology, Speech/physiology
12.
Lang Acquis; 24(4): 283-306, 2017.
Article in English | MEDLINE | ID: mdl-33033424

ABSTRACT

In this paper, two dimensions of handshape complexity are analyzed as potential building blocks of phonological contrast: joint complexity and finger group complexity. We ask whether sign language patterns are elaborations of those seen in the gestures produced by hearing people without speech (pantomime) or a more radical reorganization of them. Data from adults and children are analyzed to address issues of cross-linguistic variation, emergence, and acquisition. Study 1 addresses these issues in adult signers and gesturers from the United States, Italy, China, and Nicaragua. Study 2 addresses these issues in child and adult groups (signers and gesturers) from the United States, Italy, and Nicaragua. We argue that handshape undergoes a fairly radical reorganization, including loss and reorganization of iconicity and feature redistribution, as phonologization takes place in both of these dimensions. Moreover, while the patterns investigated here are not evidence of duality of patterning, we conclude that they are indeed phonological, and that they appear earlier than related morphosyntactic patterns that use the same types of handshape.

13.
Behav Brain Sci; 40: e74, 2017 Jan.
Article in English | MEDLINE | ID: mdl-29342529

ABSTRACT

The commentaries have led us to entertain expansions of our paradigm to include new theoretical questions, new criteria for what counts as a gesture, and new data and populations to study. The expansions further reinforce the approach we took in the target article: namely, that linguistic and gestural components are two distinct yet integral sides of communication, which need to be studied together.


Subjects
Gestures, Language, Humans, Language Development, Linguistics
14.
Proc Natl Acad Sci U S A; 113(48): 13702-13707, 2016 Nov 29.
Article in English | MEDLINE | ID: mdl-27837021

ABSTRACT

Does knowledge of language consist of abstract principles, or is it fully embodied in the sensorimotor system? To address this question, we investigate the double identity of doubling (e.g., slaflaf, or generally, XX, where X stands for a phonological constituent). Across languages, doubling is known to elicit conflicting preferences at different levels of linguistic analysis (phonology vs. morphology). Here, we show that these preferences are active in the brains of individual speakers, and they are demonstrably distinct from sensorimotor pressures. We first demonstrate that doubling in novel English words elicits divergent percepts: viewed as meaningless (phonological) forms, doubled forms are disliked (e.g., slaflaf < slafmak), but once doubling in form is systematically linked to meaning (e.g., slaf = ball, slaflaf = balls), the doubling aversion shifts into a reliable (morphological) preference. We next show that sign-naive speakers spontaneously project these principles to novel signs in American Sign Language, and their capacity to do so depends on the structure of their spoken language (English vs. Hebrew). These results demonstrate that linguistic preferences doubly dissociate from sensorimotor demands: a single stimulus can elicit diverse percepts, yet these percepts are invariant across stimulus modality, for speech and signs. These conclusions are in line with the possibility that some linguistic principles are abstract and apply broadly across language modality.


Subjects
Brain/physiology, Language, Sensorimotor Cortex/physiology, Speech/physiology, Adult, Female, Humans, Knowledge, Male, Phonetics, Sign Language
15.
Appl Psycholinguist; 37(2): 411-434, 2016 Mar.
Article in English | MEDLINE | ID: mdl-27057073

ABSTRACT

Orthographic experience during the acquisition of novel words may influence production processing in proficient readers. Previous work indicates interactivity among lexical, phonological, and articulatory processing; we hypothesized that experience with orthography can also influence phonological processing. Phonetic accuracy and articulatory stability were measured as adult, proficient readers repeated and read aloud nonwords, presented in auditory or written modalities and with variations in orthographic neighborhood density. Accuracy increased when participants had read the nonwords earlier in the session, but not when they had only heard them. Articulatory stability increased with practice, regardless of whether nonwords were read or heard. Word attack skills, but not reading comprehension, predicted articulatory stability. Findings indicate that kinematic and phonetic accuracy analyses provide insight into how orthography influences implicit language processing.

16.
Top Cogn Sci; 7(1): 95-123, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25529989

ABSTRACT

In this paper, the cognitive, cultural, and linguistic bases for a pattern of conventionalization of two types of iconic handshapes are described. Work on sign languages has shown that handling handshapes (H-HSs: those that represent how objects are handled or manipulated) and object handshapes (O-HSs: those that represent the class, size, or shape of objects) express an agentive/non-agentive semantic distinction in many sign languages. H-HSs are used in agentive event descriptions and O-HSs are used in non-agentive event descriptions. In this work, American Sign Language (ASL) and Italian Sign Language (LIS) productions from adults and children are compared, as well as those of the corresponding groups of gesturers in each country using "silent gesture." While the gesture groups, in general, did not employ an H-HS/O-HS distinction, all participants (signers and gesturers) used iconic handshapes (H-HSs and O-HSs together) more often in agentive than in non-agentive event descriptions; moreover, none of the participants produced the opposite of the expected pattern (i.e., H-HSs associated with non-agentive descriptions and O-HSs associated with agentive ones). These effects are argued to be grounded in cognition. In addition, some individual gesturers, more Italian than American adults, were observed to produce the H-HS/O-HS opposition for agentive and non-agentive event descriptions. This effect is argued to be grounded in culture. Finally, the agentive/non-agentive handshape opposition is confirmed for signers of ASL and LIS, but previously unreported cross-linguistic differences were also found across both adult and child sign groups. It is therefore concluded that cognitive, cultural, and linguistic factors all contribute to the conventionalization of this distinction in handshape type.


Subjects
Cognition/physiology, Culture, Gestures, Linguistics, Sign Language, Adult, Child, Female, Humans, Italy, Male, United States
17.
Front Psychol; 5: 830, 2014.
Article in English | MEDLINE | ID: mdl-25191283

ABSTRACT

Many sign languages display crosslinguistic consistencies in the use of two iconic aspects of handshape, handshape type and finger group complexity. Handshape type is used systematically in form-meaning pairings (morphology): Handling handshapes (Handling-HSs), representing how objects are handled, tend to be used to express events with an agent ("hand-as-hand" iconicity), and Object handshapes (Object-HSs), representing an object's size/shape, are used more often to express events without an agent ("hand-as-object" iconicity). Second, in the distribution of meaningless properties of form (morphophonology), Object-HSs display higher finger group complexity than Handling-HSs. Some adult homesigners, who have not acquired a signed or spoken language and instead use a self-generated gesture system, exhibit these two properties as well. This study illuminates the development over time of both phenomena for one child homesigner, "Julio," age 7;4 (years; months) to 12;8. We elicited descriptions of events with and without agents to determine whether morphophonology and morphosyntax can develop without linguistic input during childhood, and whether these structures develop together or independently. Within the time period studied: (1) Julio used handshape type differently in his responses to vignettes with and without an agent; however, he did not exhibit the same pattern that was found previously in signers, adult homesigners, or gesturers: while he was highly likely to use a Handling-HS for events with an agent (82%), he was less likely to use an Object-HS for non-agentive events (49%); i.e., his productions were heavily biased toward Handling-HSs; (2) Julio exhibited higher finger group complexity in Object- than in Handling-HSs, as in the sign language and adult homesigner groups previously studied; and (3) these two dimensions of language developed independently, with phonological structure showing a sign language-like pattern at an earlier age than morphosyntactic structure. We conclude that iconicity alone is not sufficient to explain the development of linguistic structure in homesign systems. Linguistic input is not required for some aspects of phonological structure to emerge in childhood, and while linguistic input is not required for morphology either, it takes time to emerge in homesign.

18.
Front Psychol; 5: 560, 2014.
Article in English | MEDLINE | ID: mdl-24959158

ABSTRACT

Productivity, the hallmark of linguistic competence, is typically attributed to algebraic rules that support broad generalizations. Past research on spoken language has documented such generalizations in both adults and infants. But whether algebraic rules form part of the linguistic competence of signers remains unknown. To address this question, here we gauge the generalization afforded by American Sign Language (ASL). As a case study, we examine reduplication (X→XX), a rule that, inter alia, generates ASL nouns from verbs. If signers encode this rule, then they should freely extend it to novel syllables, including ones with features that are unattested in ASL. And since reduplicated disyllables are preferred in ASL, such a rule should favor novel reduplicated signs. Novel reduplicated signs should thus be preferred to nonreduplicative controls (in rating), and consequently, such stimuli should also be harder to classify as nonsigns (in the lexical decision task). The results of four experiments support this prediction. These findings suggest that the phonological knowledge of signers includes powerful algebraic rules. The convergence between these conclusions and previous evidence for phonological rules in spoken language suggests that the architecture of the phonological mind is partly amodal.
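
The algebraic point, that the rule copies a variable X rather than memorized material and so extends to any syllable, attested or not, can be shown with a deliberately minimal sketch (treating syllables as opaque tokens is my simplification, not a claim about ASL representations):

```python
# X -> XX: the rule operates on a variable, so it applies uniformly to any
# syllable token, including novel ones with features unattested in ASL.
def reduplicate(syllable):
    """Derive a reduplicated disyllable from any syllable, attested or novel."""
    return (syllable, syllable)

print(reduplicate("attested-syllable"))  # ('attested-syllable', 'attested-syllable')
print(reduplicate("novel-syllable"))     # generalizes with no stored exemplar
```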

19.
Lang Learn Dev; 9(2): 130-150, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23671406

ABSTRACT

Handshape works differently in nouns vs. a class of verbs in American Sign Language (ASL), and thus can serve as a cue to distinguish between these two word classes. Handshapes representing characteristics of the object itself (object handshapes) and handshapes representing how the object is handled (handling handshapes) appear in both nouns and a particular type of verb, classifier predicates, in ASL. When used as nouns, object and handling handshapes are phonemic; that is, they are specified in dictionary entries and do not vary with grammatical context. In contrast, when used as classifier predicates, object and handling handshapes do vary with grammatical context for both morphological and syntactic reasons. We ask here when young deaf children learning ASL acquire the word class distinction signaled by handshape. Specifically, we determined the age at which children systematically vary object vs. handling handshapes as a function of grammatical context in classifier predicates, but not in the nouns that accompany those predicates. We asked 4- to 6-year-old children, 7- to 10-year-old children, and adults, all of whom were native ASL signers, to describe a series of vignettes designed to elicit object and handling handshapes in both nouns and classifier predicates. We found that all of the children behaved like adults with respect to all nouns, systematically varying object and handling handshapes as a function of type of item and not grammatical context. The children also behaved like adults with respect to certain classifiers, systematically varying handshape type as a function of grammatical context for items whose nouns have handling handshapes. The children differed from adults in that they did not systematically vary handshape as a function of grammatical context for items whose nouns have object handshapes. These findings extend previous work by showing that children require developmental time to acquire the full morphological system underlying classifier predicates in sign language, just as children acquiring complex morphology in spoken languages do. In addition, we show for the first time that children acquiring ASL treat object and handling handshapes differently as a function of their status as nouns vs. classifier predicates, and thus display a distinction between these word classes as early as 4 years of age.

20.
PLoS One; 8(4): e60617, 2013.
Article in English | MEDLINE | ID: mdl-23573272

ABSTRACT

All spoken languages encode syllables and constrain their internal structure. But whether these restrictions concern the design of the language system broadly or apply to speech specifically remains unknown. To address this question, here we gauge the structure of signed syllables in American Sign Language (ASL). Like spoken syllables, signed syllables must exhibit a single sonority/energy peak (i.e., movement). Four experiments examine whether this restriction is enforced by signers and nonsigners. We first show that Deaf ASL signers selectively apply sonority restrictions to syllables (but not morphemes) in novel ASL signs. We next examine whether this principle might further shape the representation of signed syllables by nonsigners. Absent any experience with ASL, nonsigners used movement to define syllable-like units. Moreover, the restriction on syllable structure constrained the capacity of nonsigners to learn from experience. Given brief practice that implicitly paired syllables with sonority peaks (i.e., movement), a natural phonological constraint attested in every human language, nonsigners rapidly learned to rely selectively on movement to define syllables, and they also learned to partly ignore it in the identification of morpheme-like units. Remarkably, nonsigners failed to learn an unnatural rule that defines syllables by handshape, suggesting they were unable to ignore movement in identifying syllables. These findings indicate that signed and spoken syllables are subject to a shared phonological restriction that constrains phonological learning in a new modality. These conclusions suggest that the design of the phonological system is partly amodal.
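
A minimal sketch of the well-formedness restriction, under the assumption that a sign's movement can be summarized as a discrete energy contour (an expository simplification, not the paper's stimuli or coding scheme):

```python
# Toy check: a well-formed signed syllable should show exactly one
# sonority/energy peak, i.e., one rise-then-fall in its movement contour.
def single_energy_peak(energy):
    """Return True if the contour has exactly one strict local maximum."""
    peaks = sum(
        1 for i in range(1, len(energy) - 1)
        if energy[i - 1] < energy[i] > energy[i + 1]
    )
    return peaks == 1

print(single_energy_peak([0, 1, 3, 1, 0]))  # True: one movement peak
print(single_energy_peak([0, 3, 0, 3, 0]))  # False: two peaks, ill-formed
```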


Subjects
Sign Language, Speech, Humans, Language, Verbal Learning