Results 1 - 13 of 13
1.
J Deaf Stud Deaf Educ ; 23(4): 399-407, 2018 Oct 01.
Article in English | MEDLINE | ID: mdl-29733368

ABSTRACT

This study investigated the impact of language modality and age of acquisition on semantic fluency in American Sign Language (ASL) and English. Experiment 1 compared semantic fluency performance (e.g., name as many animals as possible in 1 min) for deaf native and early ASL signers and hearing monolingual English speakers. The results showed similar fluency scores in both modalities when fingerspelled responses were included for ASL. Experiment 2 compared ASL and English fluency scores in hearing native and late ASL-English bilinguals. Semantic fluency scores were higher in English (the dominant language) than ASL (the non-dominant language), regardless of age of ASL acquisition. Fingerspelling was relatively common in all groups of signers and was used primarily for low-frequency items. We conclude that semantic fluency is sensitive to language dominance and that performance can be compared across spoken and signed modalities, but fingerspelled responses should be included in ASL fluency scores.


Subject(s)
Sign Language; Adult; Aptitude; Female; Humans; Language; Male; Multilingualism; Persons With Hearing Impairments; Semantics
2.
Biling (Camb Engl) ; 20(1): 42-48, 2017 Jan.
Article in English | MEDLINE | ID: mdl-28785168

ABSTRACT

Many bimodal bilinguals are immersed in a spoken language-dominant environment from an early age and, unlike unimodal bilinguals, do not necessarily divide their language use between languages. Nonetheless, early ASL-English bilinguals retrieved fewer words in a letter fluency task in their dominant language compared to monolingual English speakers with equal vocabulary level. This finding demonstrates that reduced vocabulary size and/or frequency of use cannot completely account for bilingual disadvantages in verbal fluency. Instead, retrieval difficulties likely reflect between-language interference. Furthermore, it suggests that the two languages of bilinguals compete for selection even when they are expressed with distinct articulators.

3.
Acta Psychol (Amst) ; 177: 69-77, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28477456

ABSTRACT

This study investigated the relation between linguistic and spatial working memory (WM) resources and language comprehension for signed compared to spoken language. Sign languages are both linguistic and visual-spatial, and therefore provide a unique window on modality-specific versus modality-independent contributions of WM resources to language processing. Deaf users of American Sign Language (ASL), hearing monolingual English speakers, and hearing ASL-English bilinguals completed several spatial and linguistic serial recall tasks. Additionally, their comprehension of spatial and non-spatial information in ASL and spoken English narratives was assessed. Results from the linguistic serial recall tasks revealed that the often reported advantage for speakers on linguistic short-term memory tasks does not extend to complex WM tasks with a serial recall component. For English, linguistic WM predicted retention of non-spatial information, and both linguistic and spatial WM predicted retention of spatial information. For ASL, spatial WM predicted retention of spatial (but not non-spatial) information, and linguistic WM did not predict retention of either spatial or non-spatial information. Overall, our findings argue against strong assumptions of independent domain-specific subsystems for the storage and processing of linguistic and spatial information and furthermore suggest a less important role for serial encoding in signed than spoken language comprehension.


Subject(s)
Comprehension/physiology; Linguistics; Memory, Short-Term/physiology; Sign Language; Speech; Adult; Female; Hearing; Humans; Male; Mental Recall; Multilingualism; Persons With Hearing Impairments; Young Adult
4.
J Exp Psychol Learn Mem Cogn ; 43(11): 1828-1834, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28333506

ABSTRACT

This study investigated whether language control during language production in bilinguals generalizes across modalities, and to what extent the language control system is shaped by competition for the same articulators. Using a cued language-switching paradigm, we investigated whether switch costs are observed when hearing signers switch between a spoken and a signed language. The results showed an asymmetrical switch cost for bimodal bilinguals on reaction time (RT) and accuracy, with larger costs for the (dominant) spoken language. Our findings suggest important similarities in the mechanisms underlying language selection in bimodal bilinguals and unimodal bilinguals, with competition occurring at multiple levels other than phonology.


Subject(s)
Multilingualism; Sign Language; Adult; Female; Humans; Male; Psychological Tests; Reaction Time
5.
Behav Brain Sci ; 40: e56, 2017 Jan.
Article in English | MEDLINE | ID: mdl-29342517

ABSTRACT

In our commentary, we raise concerns with the idea that location should be considered a gestural component of sign languages. We argue that psycholinguistic studies provide evidence for location as a "categorical" element of signs. More generally, we propose that the use of space in sign languages comes in many flavours and may be both categorical and imagistic. In their target article, Goldin-Meadow & Brentari (G-M&B) discuss several observations suggesting that the use of space is imagistic and may not form part of the categorical properties of sign languages. Specifically, they point out that (1) the number of locations toward which agreeing verbs can be directed is not part of a discrete set, (2) event descriptions by users of different sign languages and hearing nonsigners exhibit marked similarities in the use of space, and (3) location as a phonological parameter is not categorically perceived by native signers. It should be noted that G-M&B acknowledge that categorical properties of location and movement may simply not have been captured yet because the proper investigative tools are not yet readily available.


Subject(s)
Gestures; Sign Language; Humans; Language; Perception; Psycholinguistics
6.
Biling (Camb Engl) ; 19(2): 264-276, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26989347

ABSTRACT

We used picture-word interference (PWI) to discover a) whether cross-language activation at the lexical level can yield phonological priming effects when languages do not share phonological representations, and b) whether semantic interference effects occur without articulatory competition. Bimodal bilinguals fluent in American Sign Language (ASL) and English named pictures in ASL while listening to distractor words that were 1) translation equivalents, 2) phonologically related to the target sign through translation, 3) semantically related, or 4) unrelated. Monolingual speakers named pictures in English. Production of ASL signs was facilitated by words that were phonologically related through translation and by translation equivalents, indicating that cross-language activation spreads from lexical to phonological levels for production. Semantic interference effects were not observed for bimodal bilinguals, providing some support for a post-lexical locus of semantic interference, but which we suggest may instead reflect time course differences in spoken and signed production in the PWI task.

7.
J Deaf Stud Deaf Educ ; 21(2): 213-21, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26657077

ABSTRACT

Semantic and lexical decision tasks were used to investigate the mechanisms underlying code-blend facilitation: the finding that hearing bimodal bilinguals comprehend signs in American Sign Language (ASL) and spoken English words more quickly when they are presented together simultaneously than when each is presented alone. More robust facilitation effects were observed for semantic decision than for lexical decision, suggesting that lexical integration of signs and words within a code-blend occurs primarily at the semantic level, rather than at the level of form. Early bilinguals exhibited greater facilitation effects than late bilinguals for English (the dominant language) in the semantic decision task, possibly because early bilinguals are better able to process early visual cues from ASL signs and use these to constrain English word recognition. Comprehension facilitation via semantic integration of words and signs is consistent with co-speech gesture research demonstrating facilitative effects of gesture integration on language comprehension.


Subject(s)
Comprehension/physiology; Hearing Loss/rehabilitation; Multilingualism; Semantics; Sign Language; Speech Perception/physiology; Adolescent; Adult; Female; Gestures; Humans; Male; Reaction Time; Young Adult
8.
J Child Lang ; 43(2): 310-337, 2016 Mar.
Article in English | MEDLINE | ID: mdl-25994361

ABSTRACT

This study investigates the role of acoustic salience and hearing impairment in learning phonologically minimal pairs. Picture-matching and object-matching tasks were used to investigate the learning of consonant and vowel minimal pairs in five- to six-year-old deaf children with a cochlear implant (CI), and children of the same age with normal hearing (NH). In both tasks, the CI children showed clear difficulties with learning minimal pairs. The NH children also showed some difficulties, however, particularly in the picture-matching task. Vowel minimal pairs were learned more successfully than consonant minimal pairs, particularly in the object-matching task. These results suggest that the ability to encode phonetic detail in novel words is not fully developed at age six and is affected by task demands and acoustic salience. CI children experience persistent difficulties with accurately mapping sound contrasts to novel meanings, but seem to benefit from the relative acoustic salience of vowel sounds.

10.
Biling (Camb Engl) ; 19(2): 223-242, 2016 Mar.
Article in English | MEDLINE | ID: mdl-28804269

ABSTRACT

Bimodal bilinguals, fluent in a signed and a spoken language, exhibit a unique form of bilingualism because their two languages access distinct sensory-motor systems for comprehension and production. Differences between unimodal and bimodal bilinguals have implications for how the brain is organized to control, process, and represent two languages. Evidence from code-blending (simultaneous production of a word and a sign) indicates that the production system can access two lexical representations without cost, and the comprehension system must be able to simultaneously integrate lexical information from two languages. Further, evidence of cross-language activation in bimodal bilinguals indicates the necessity of links between languages at the lexical or semantic level. Finally, the bimodal bilingual brain differs from the unimodal bilingual brain with respect to the degree and extent of neural overlap for the two languages, with less overlap for bimodal bilinguals.

11.
Cognition ; 141: 9-25, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25912892

ABSTRACT

Findings from recent studies suggest that spoken-language bilinguals engage nonlinguistic inhibitory control mechanisms to resolve cross-linguistic competition during auditory word recognition. Bilingual advantages in inhibitory control might stem from the need to resolve perceptual competition between similar-sounding words both within and between their two languages. If so, these advantages should be lessened or eliminated when there is no perceptual competition between two languages. The present study investigated the extent of inhibitory control recruitment during bilingual language comprehension by examining associations between language co-activation and nonlinguistic inhibitory control abilities in bimodal bilinguals, whose two languages do not perceptually compete. Cross-linguistic distractor activation was identified in the visual world paradigm, and correlated significantly with performance on a nonlinguistic spatial Stroop task within a group of 27 hearing ASL-English bilinguals. Smaller Stroop effects (indexing more efficient inhibition) were associated with reduced co-activation of ASL signs during the early stages of auditory word recognition. These results suggest that inhibitory control in auditory word recognition is not limited to resolving perceptual linguistic competition in phonological input, but is also used to moderate competition that originates at the lexico-semantic level.


Subject(s)
Inhibition, Psychological; Multilingualism; Sign Language; Speech Perception/physiology; Adolescent; Adult; Female; Humans; Male; Stroop Test; Young Adult
12.
J Deaf Stud Deaf Educ ; 19(1): 107-25, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24080074

ABSTRACT

The effect of using signed communication on the spoken language development of deaf children with a cochlear implant (CI) is much debated. We report on two studies that investigated relationships between spoken word and sign processing in children with a CI who are exposed to signs in addition to spoken language. Study 1 assessed rapid word and sign learning in 13 children with a CI and found that performance in both language modalities correlated positively. Study 2 tested the effects of using sign-supported speech on spoken word processing in eight children with a CI, showing that simultaneously perceiving signs and spoken words does not negatively impact their spoken word recognition or learning. Together, these two studies suggest that sign exposure does not necessarily have a negative effect on speech processing in children with a CI.


Subject(s)
Cochlear Implants/psychology; Deafness/psychology; Language Development; Mental Processes/physiology; Sign Language; Speech; Analysis of Variance; Child; Child, Preschool; Deafness/rehabilitation; Female; Humans; Male; Neuropsychological Tests
13.
J Speech Lang Hear Res ; 53(6): 1440-57, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20689031

ABSTRACT

PURPOSE: This study examined the use of different acoustic cues in auditory perception of consonant and vowel contrasts by profoundly deaf children with a cochlear implant (CI) in comparison to age-matched children and young adults with normal hearing.

METHOD: A speech sound categorization task in an XAB format was administered to 15 children ages 5-6 with a CI (mean age at implant: 1;8 [years;months]), 20 normal-hearing age-matched children, and 21 normal-hearing adults. Four contrasts were examined: //-/a/, /i/-/i/, /bu/-/pu/, and /fu/-/su/. Measures included phoneme endpoint identification, individual cue reliance, cue weighting, and classification slope.

RESULTS: The children with a CI used the spectral cues in the /fu/-/su/ contrast less effectively than the children with normal hearing, resulting in poorer phoneme endpoint identification and a shallower classification slope. Performance on the other 3 contrasts did not differ significantly. Adults consistently showed steeper classification slopes than the children, but similar cue-weighting patterns were observed in all 3 groups.

CONCLUSIONS: Despite their different auditory input, children with a CI appear to be able to use many acoustic cues effectively in speech perception. Most importantly, children with a CI and normal-hearing children were observed to use similar cue-weighting patterns.


Subject(s)
Cochlear Implants; Cues; Deafness/rehabilitation; Phonetics; Speech Perception; Acoustic Stimulation/methods; Age Factors; Child; Child, Preschool; Deafness/physiopathology; Female; Humans; Male; Speech Intelligibility