Results 1 - 18 of 18
1.
Cortex ; 135: 240-254, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33401098

ABSTRACT

There is strong evidence that neuronal bases for language processing are remarkably similar for sign and spoken languages. However, as meanings and linguistic structures of sign languages are coded in movement and space and decoded through vision, differences are also present, predominantly in occipitotemporal and parietal areas, such as superior parietal lobule (SPL). Whether the involvement of SPL reflects domain-general visuospatial attention or processes specific to sign language comprehension remains an open question. Here we conducted two experiments to investigate the role of SPL and the laterality of its engagement in sign language lexical processing. First, using unique longitudinal and between-group designs we mapped brain responses to sign language in hearing late learners and deaf signers. Second, using transcranial magnetic stimulation (TMS) in both groups we tested the behavioural relevance of SPL's engagement and its lateralisation during sign language comprehension. SPL activation in hearing participants was observed in the right hemisphere before and bilaterally after the sign language course. Additionally, after the course hearing learners exhibited greater activation in the occipital cortex and left SPL than deaf signers. TMS applied to the right SPL decreased accuracy in both hearing learners and deaf signers. Stimulation of the left SPL decreased accuracy only in hearing learners. Our results suggest that right SPL might be involved in visuospatial attention while left SPL might support phonological decoding of signs in non-proficient signers.


Subject(s)
Deafness , Sign Language , Humans , Magnetic Resonance Imaging , Parietal Lobe/diagnostic imaging , Transcranial Magnetic Stimulation
2.
Cognition ; 68(3): 221-46, 1998 Sep.
Article in English | MEDLINE | ID: mdl-9852666

ABSTRACT

American Sign Language (ASL) uses space itself to encode spatial information. Spatial scenes are most often described from the perspective of the person signing (the 'narrator'), such that the viewer must perform what amounts to a 180-degree mental rotation to correctly comprehend the description. But scenes can also be described, non-canonically, from the viewer's perspective, in which case no rotation is required. Is mental rotation during sign language processing difficult for ASL signers? Are there differences between linguistic and non-linguistic mental rotation? Experiment 1 required subjects to decide whether a signed description matched a room presented on videotape. Deaf ASL signers were more accurate when viewing scenes described from the narrator's perspective (even though rotation is required) than from the viewer's perspective (no rotation required). In Experiment 2, deaf signers and hearing non-signers viewed videotapes of objects appearing briefly and sequentially on a board marked with an entrance. This board either matched an identical board in front of the subject or was rotated 180 degrees. Subjects were asked to place objects on their board in the orientation and location shown on the video, making the appropriate rotation when required. All subjects were significantly less accurate when rotation was required, but ASL signers performed significantly better than hearing non-signers under rotation. ASL signers were also more accurate in remembering object orientation. Signers then viewed a video in which the same scenes were signed from the two perspectives (i.e. rotation required or no rotation required). In contrast to their performance with real objects, signers did not show the typical mental rotation effect. Males outperformed females on the rotation task with objects, but the superiority disappeared in the linguistic condition.
We discuss the nature of the ASL mental rotation transformation, and we conclude that habitual use of ASL can enhance non-linguistic cognitive processes thus providing evidence for (a form of) the linguistic relativity hypothesis.


Subject(s)
Mental Processes/physiology , Rotation , Sign Language , Adult , Deafness , Female , Humans , Linguistics , Male , United States
3.
Mem Cognit ; 26(3): 584-90, 1998 May.
Article in English | MEDLINE | ID: mdl-9610126

ABSTRACT

We report a sign length effect in deaf users of American Sign Language that is analogous to the word length effect for speech. Lists containing long signs (signs that traverse relatively long distances) produced poorer memory performance than did lists of short signs (signs that do not change in location). Further, this length effect was eliminated by articulatory suppression (repetitive motion of the hands), and articulatory suppression produced an overall drop in performance. The pattern of results, together with previous findings (Wilson & Emmorey, 1997), provides evidence for a working memory system for sign language that consists of a phonological storage buffer and an articulatory rehearsal mechanism. This indicates a close equivalence of structure between working memory for sign language and working memory for speech. The implications of this equivalence are discussed.


Subject(s)
Mental Recall , Semantics , Sign Language , Verbal Learning , Adult , Attention , Deafness/psychology , Female , Humans , Male , Phonetics , Retention, Psychology
4.
J Deaf Stud Deaf Educ ; 2(3): 121-30, 1997.
Article in English | MEDLINE | ID: mdl-15579841

ABSTRACT

Traditionally, working memory has been divided into two major domains: verbal and visuo-spatial. The verbal domain of working memory can be characterized either by its relationship to language or by its grounding in auditory processing. Because of this ambiguity, languages that are not auditory and vocal (i.e., signed languages) pose a challenge to this conception of working memory. We describe several experiments with deaf users of American Sign Language (ASL) that explore the extent to which the architecture of working memory is determined by the constraints of auditory and visual processing and the extent to which it is determined by the characteristics of language. Various working memory effects were investigated: phonological similarity, word length, and articulatory suppression. The pattern of evidence strongly supports the existence of a sign-based 'rehearsal loop' mechanism parallel to the speech-based rehearsal loop. However, we also discuss evidence pointing to differences between the speech loop and the sign loop from forward and backward digit span tasks with deaf and hearing subjects. Despite their similarities based on linguistic properties, the speech loop and the sign loop appear to diverge due to the differing processing demands of audition and vision. Overall, the results suggest that the architecture of working memory is shaped both by the properties of language structure and by the constraints imposed by sensorimotor modality.

5.
Mem Cognit ; 25(3): 313-20, 1997 May.
Article in English | MEDLINE | ID: mdl-9184483

ABSTRACT

Two experiments investigated whether working memory could support an articulatory rehearsal loop in the visuospatial domain. Deaf subjects fluent in American Sign Language (ASL) were tested on immediate serial recall. In Experiment 1, with ASL stimuli, evidence was found for manual motoric coding (worse recall under articulatory suppression), and previous findings of ASL-based phonological coding (worse recall for phonologically similar lists) were replicated. The two effects did not interact, suggesting separate components that both contribute to performance. Stimuli in Experiment 2 were namable pictures, which had to be recoded for ASL-based rehearsal to occur. Under these conditions, articulatory suppression eliminated the phonological similarity effect. Thus, an articulatory process seems to be used in translating pictures into a phonological code for memory maintenance. These results indicate a configuration of components similar to the phonological loop for speech, suggesting that working memory can develop a language-based rehearsal loop in the visuospatial modality.


Subject(s)
Form Perception , Memory/physiology , Phonetics , Proprioception , Psychomotor Performance/physiology , Serial Learning , Sign Language , Adolescent , Adult , Analysis of Variance , Humans
6.
Brain Lang ; 57(3): 285-308, 1997 May.
Article in English | MEDLINE | ID: mdl-9126418

ABSTRACT

ERPs were recorded from deaf and hearing native signers and from hearing subjects who acquired ASL late or not at all as they viewed ASL signs that formed sentences. The results were compared across these groups and with those from hearing subjects reading English sentences. The results suggest that there are constraints on the organization of the neural systems that mediate formal languages and that these are independent of the modality through which language is acquired. These include different specializations of anterior and posterior cortical regions in aspects of grammatical and semantic processing and a bias for the left hemisphere to mediate aspects of mnemonic functions in language. Additionally, the results suggest that the nature and timing of sensory and language experience significantly impact the development of the language systems of the brain. Effects of the early acquisition of ASL include an increased role for the right hemisphere and for parietal cortex and this occurs in both hearing and deaf native signers. An increased role of posterior temporal and occipital areas occurs in deaf native signers only and thus may be attributable to auditory deprivation.


Subject(s)
Occipital Lobe/physiology , Parietal Lobe/physiology , Sign Language , Temporal Lobe/physiology , Verbal Learning , Adult , Age Factors , Deafness , Evoked Potentials , Functional Laterality , Hearing , Humans , Male , Semantics
7.
J Deaf Stud Deaf Educ ; 2(4): 212-22, 1997.
Article in English | MEDLINE | ID: mdl-15579849

ABSTRACT

Several previous studies have shown that ASL signers are 'experts' on at least one test of face processing: the Benton Test of Face Recognition, a discrimination task that requires subjects to select a target face from a set of faces shown in profile and/or in shadow. The experiments reported here were designed to discover why ASL signers have superior skill as measured by this test and to investigate whether enhanced performance extends to other aspects of face processing. Experiment 1 indicated that the enhancement in face-processing skills does not extend to recognition of faces from memory. Experiment 2 revealed that deaf and hearing subjects do not differ in their gestalt face-processing ability; they perform similarly on a closure test of face perception. Finally, experiment 3 suggested that ASL signers do exhibit a superior ability to detect subtle differences in facial features. This superior performance may be linked both to experience discriminating ASL grammatical facial expression and to experience with lipreading. We conclude that only specific aspects of face processing are enhanced in deaf signers: those skills relevant to detecting local feature configurations that must be generalized over individual faces.

8.
J Deaf Stud Deaf Educ ; 2(4): 223-33, 1997.
Article in English | MEDLINE | ID: mdl-15579850

ABSTRACT

On-line comprehension of American Sign Language (ASL) requires rapid discrimination of linguistic facial expressions. We hypothesized that ASL signers' experience discriminating linguistic facial expressions might lead to enhanced performance for discriminating among different faces. Five experiments are reported that investigate signers' and non-signers' ability to discriminate human faces photographed under different conditions of orientation and lighting (the Benton Test of Facial Recognition). The results showed that deaf signers performed significantly better than hearing non-signers. Hearing native signers (born to deaf parents) also performed better than hearing non-signers, suggesting that the enhanced performance of deaf signers is linked to experience with ASL rather than to auditory deprivation. Deaf signers who acquired ASL in early adulthood did not differ from native signers, which suggests that there is no 'critical period' during which signers must be exposed to ASL in order to exhibit enhanced face discrimination abilities. When the faces were inverted, signing and non-signing groups did not differ in performance. This pattern of results suggests that experience with sign language affects mechanisms specific to face processing and does not produce a general enhancement of visual discrimination. Finally, a similar pattern of results was found with signing and non-signing children, 6-9 years old. Overall, the results suggest that the brain mechanisms responsible for face processing are somewhat plastic and can be affected by experience. We discuss implications of these results for the relation between language and cognition.

9.
Brain Cogn ; 32(1): 28-44, 1996 Oct.
Article in English | MEDLINE | ID: mdl-8899213

ABSTRACT

Deaf subjects who use American Sign Language as their primary language generated visual mental images faster than hearing nonsigning subjects when stimuli were initially presented to the right hemisphere. Deaf subjects exhibited a strong right hemisphere advantage for image generation using either categorical or coordinate spatial relations representations. In contrast, hearing subjects showed evidence of left hemisphere processing for categorical spatial relations representations, and no hemispheric asymmetry for coordinate spatial relations representations. The enhanced right hemisphere image generation abilities observed in deaf signers may be linked to a stronger right hemisphere involvement in processing imageable signs and linguistically encoded spatial relations.


Subject(s)
Deafness/psychology , Dominance, Cerebral , Imagination , Sign Language , Adult , Brain/physiology , Female , Functional Laterality , Hearing , Humans , Male , Reaction Time
10.
Neuropsychologia ; 31(7): 645-53, 1993 Jul.
Article in English | MEDLINE | ID: mdl-8371838

ABSTRACT

American Sign Language (ASL) exhibits properties for which both hemispheres in hearing people show specialized functioning (linguistic vs spatial). To determine the laterality of processing ASL in the normal intact brain, ASL signs and nonsigns were presented to each visual half field of deaf signers for lexical decision. The English glosses for these signs were presented to hearing English speakers along with nonwords. Deaf ASL signers and hearing English speakers both showed a left hemisphere advantage for abstract lexical items. ASL signers showed a significant right hemisphere advantage for imageable signs, whereas English speakers exhibited no visual field effect for imageable words. This difference in brain laterality may reflect differences in the role of imagery in the two languages.


Subject(s)
Concept Formation , Deafness/psychology , Imagination , Sign Language , Visual Perception , Adult , Deafness/rehabilitation , Female , Humans , Male , Mental Recall , Reaction Time , Semantics
11.
J Psycholinguist Res ; 22(2): 153-87, 1993 Mar.
Article in English | MEDLINE | ID: mdl-8366475

ABSTRACT

American Sign Language (ASL) has evolved within a completely different biological medium, using the hands and face rather than the vocal tract and perceived by eye rather than by ear. The research reviewed in this article addresses the consequences of this different modality for language processing, linguistic structure, and spatial cognition. Language modality appears to affect aspects of lexical recognition and the nature of the grammatical form used for reference. Select aspects of nonlinguistic spatial cognition (visual imagery and face discrimination) appear to be enhanced in deaf and hearing ASL signers. It is hypothesized that this enhancement is due to experience with a visual-spatial language and is tied to specific linguistic processing requirements (interpretation of grammatical facial expression, perspective transformations, and the use of topographic classifiers). In addition, adult deaf signers differ in the age at which they were first exposed to ASL during childhood. The effect of late acquisition of language on linguistic processing is investigated in several studies. The results show selective effects of late exposure to ASL on language processing, independent of grammatical knowledge.


Subject(s)
Sign Language , Age Factors , Animals , Cognition , Deafness , Female , Humans , Language , Male , Psycholinguistics , Semantics , Space Perception , Visual Perception , Vocabulary
12.
Cognition ; 46(2): 139-81, 1993 Feb.
Article in English | MEDLINE | ID: mdl-8432094

ABSTRACT

The ability to generate visual mental images, to maintain them, and to rotate them was studied in deaf signers of American Sign Language (ASL), hearing signers who have deaf parents, and hearing non-signers. These abilities are hypothesized to be integral to the production and comprehension of ASL. Results indicate that both deaf and hearing ASL signers have an enhanced ability to generate relatively complex images and to detect mirror image reversals. In contrast, there were no group differences in ability to maintain information in images for brief periods or to imagine objects rotating. Signers' enhanced visual imagery abilities may be tied to specific linguistic requirements of ASL (referent visualization, topological classifiers, perspective shift, and reversals during sign perception).


Subject(s)
Aptitude , Deafness/psychology , Imagination , Orientation , Sign Language , Visual Perception , Adult , Attention , Discrimination Learning , Female , Humans , Male
13.
Brain Lang ; 43(4): 747-63, 1992 Nov.
Article in English | MEDLINE | ID: mdl-1483200

ABSTRACT

The present study investigates Broca's aphasics' sensitivity to morphological information in an on-line task. German is used as the test language because it is highly inflected. Results from two word monitoring experiments show first that Broca's patients, like normal controls, are sensitive to the presence of a contextually incorrect inflection. Unlike normal controls, however, they are not sensitive to the absence of an obligatory inflection even when its presence is syntactically highly constrained. Second, they reveal that Broca's aphasics are only sensitive to the presence of an incorrect inflection when it functions as a marker of lexical category (noun vs. verb) and not when it functions as a diacritical marker (second person singular vs. third person singular). The results are taken as evidence for the claim that Broca's aphasics are impaired in the ability to process the full syntactic information encoded in closed class elements in a fast, automatic, and obligatory way.


Subject(s)
Aphasia, Broca/diagnosis , Language Tests , Adult , Aged , Aphasia, Broca/psychology , Female , Functional Laterality , Humans , Male , Middle Aged , Neuropsychological Tests , Phonetics , Research Design
14.
J Psycholinguist Res ; 20(5): 365-88, 1991 Sep.
Article in English | MEDLINE | ID: mdl-1886075

ABSTRACT

Two experiments are reported which investigate the organization and recognition of morphologically complex forms in American Sign Language (ASL) using a repetition priming technique. Three major questions were addressed: (1) Is morphological priming a modality-independent process? (2) Do the different properties of agreement and aspect morphology in ASL affect priming strength? (3) Does early language experience influence the pattern of morphological priming? Prime-target pairs (separated by 26-32 items) were presented to deaf subjects for lexical decision. Primes were inflected for either agreement (dual, reciprocal, multiple) or aspect (habitual, continual); targets were always the base form of the verb. Results of Experiment 1 indicated that subjects exposed to ASL in late childhood were not as sensitive to morphological complexity as native signers, but this result was not replicated in Experiment 2. Both experiments showed stronger facilitation with aspect morphology compared to agreement morphology. Repetition priming was not observed for nonsigns. The scope and structure of the morphological rules for ASL aspect and agreement are argued to explain the different patterns of morphological priming.


Subject(s)
Attention , Deafness/rehabilitation , Language Development Disorders/rehabilitation , Semantics , Sign Language , Adult , Deafness/psychology , Humans , Language Development Disorders/psychology , Paired-Associate Learning , Psycholinguistics
15.
Percept Mot Skills ; 71(3 Pt 2): 1227-52, 1990 Dec.
Article in English | MEDLINE | ID: mdl-2087376

ABSTRACT

Two experiments are reported which investigate lexical recognition in American Sign Language (ASL). Exp. 1 examined identification of monomorphemic signs and investigated how the manipulation of phonological parameters affected sign identification. Overall, sign identification was much faster than what has been found for spoken language. The phonetic structure of sign (the simultaneous availability of Handshape and Location information) and the phonotactics of the ASL lexicon are argued to account for this difference. Exp. 2 compared the time course of recognition for monomorphemic and morphologically complex signs. ASL morphology is largely nonconcatenative, which raises particularly interesting questions for word recognition. We found that morphologically complex signs had longer identification times than matched monomorphemic signs. Also, although roots and affixes are often articulated simultaneously in ASL, they were not identified simultaneously. Base forms of morphologically complex signs were identified initially, followed by recognition of the morphological inflection. Finally, subjects with deaf parents (native signers) were able to isolate signs faster than subjects with hearing parents (late signers). This result suggests that early language experience can influence the initial stages of lexical access and sign identification.


Subject(s)
Deafness/psychology , Mental Recall , Phonetics , Semantics , Sign Language , Adult , Deafness/rehabilitation , Humans
16.
Brain Lang ; 30(2): 305-20, 1987 Mar.
Article in English | MEDLINE | ID: mdl-3567552

ABSTRACT

The ability to comprehend and produce the stress contrast between noun compounds and noun phrases (e.g., greenhouse vs. green house) was examined for 8 nonfluent aphasics, 7 fluent aphasics, 7 right hemisphere damaged (RHD) patients, and 22 normal controls. The aphasics performed worse than normal controls on the comprehension task, and the RHD group performed as well as normals. The ability to produce stress contrasts was tested with a sentence-reading task; acoustic measurements revealed that no nonfluent aphasic used pitch to distinguish noun compounds from phrases, but two used duration. All but one of the RHD patients and all but one of the normals produced pitch and/or duration cues. These results suggest that linguistic prosody is processed by the left hemisphere and that with brain damage the ability to produce pitch and duration cues may be dissociated at the lexical level.


Subject(s)
Cerebral Cortex/physiopathology , Phonetics , Semantics , Speech Disorders/physiopathology , Aged , Aphasia, Broca/physiopathology , Aphasia, Wernicke/physiopathology , Brain Damage, Chronic/physiopathology , Dominance, Cerebral/physiology , Female , Humans , Male , Middle Aged , Neuropsychological Tests , Sound Spectrography , Speech Production Measurement
17.
Brain Lang ; 25(1): 72-86, 1985 May.
Article in English | MEDLINE | ID: mdl-4027568

ABSTRACT

The conversations of two thought-disordered schizophrenic children and two age- and sex-matched normal children were studied in three different contexts. Cohesive relations and retrieval categories were analyzed. The thought-disordered schizophrenic and normal children demonstrated divergent patterns of discourse. These patterns closely paralleled those previously reported by S. Rochester and J. R. Martin (1979, Crazy talk: A study of the discourse of schizophrenic speakers, New York: Plenum) for schizophrenic and normal adults, although some discrepancies were also observed. Recommendations for future research are offered.


Subject(s)
Schizophrenic Language , Child , Female , Humans , Language Development , Male , Schizophrenic Psychology , Thinking
18.
Phonetica ; 42(4): 163-74, 1985.
Article in English | MEDLINE | ID: mdl-3842771

ABSTRACT

Formant frequencies of the semivowels /j/ and /w/ in Amharic, Yoruba and Zuni were measured in three vowel environments. Cross-language differences were found between what are described as the same semivowels, i.e. different languages have different acoustic targets for /j/ and /w/. These cross-language differences in semivowels correlate with cross-language differences in the respective cognate vowels /i/ and /u/. Nonetheless, the semivowels differ in systematic ways from the vowels in directions that make them more 'consonantal'. These languages also differ in their patterns of coarticulation between semivowels and adjacent vowels. This shows, inter alia, that palatal segments differ from language to language in their degree of resistance to coarticulation. Because of these language-specific coarticulatory patterns, cross-language differences in acoustic targets can only be established after careful consideration of the effect of context.


Subject(s)
Cross-Cultural Comparison , Language , Phonetics , Speech Acoustics , Speech , Female , Humans , Indians, North American , Male , Mouth/physiology , Speech/physiology , United States