Results 1 - 13 of 13
1.
Neurobiol Lang (Camb) ; 4(2): 361-381, 2023.
Article in English | MEDLINE | ID: mdl-37546690

ABSTRACT

Letter recognition plays an important role in reading and follows different phases of processing, from early visual feature detection to the access of abstract letter representations. Deaf ASL-English bilinguals experience orthography in two forms: English letters and fingerspelling. However, the neurobiological nature of fingerspelling representations, and the relationship between the two orthographies, remain unexplored. We examined the temporal dynamics of single English letter and ASL fingerspelling font processing in an unmasked priming paradigm with centrally presented targets for 200 ms preceded by 100 ms primes. Event-related brain potentials were recorded while participants performed a probe detection task. Experiment 1 examined English letter-to-letter priming in deaf signers and hearing non-signers. We found that English letter recognition is similar for deaf and hearing readers, extending previous findings with hearing readers to unmasked presentations. Experiment 2 examined priming effects between English letters and ASL fingerspelling fonts in deaf signers only. We found that fingerspelling fonts primed both fingerspelling fonts and English letters, but English letters did not prime fingerspelling fonts, indicating a priming asymmetry between letters and fingerspelling fonts. We also found an N400-like priming effect when the primes were fingerspelling fonts, which might reflect strategic access to the lexical names of letters. The studies suggest that deaf ASL-English bilinguals process English letters and ASL fingerspelling differently and that the two systems may have distinct neural representations. However, the fact that fingerspelling fonts can prime English letters suggests that the two orthographies may share abstract representations to some extent.

2.
J Deaf Stud Deaf Educ ; 27(4): 355-372, 2022 09 15.
Article in English | MEDLINE | ID: mdl-35775152

ABSTRACT

The lexical quality hypothesis proposes that the quality of phonological, orthographic, and semantic representations impacts reading comprehension. In Study 1, we evaluated the contributions of lexical quality to reading comprehension in 97 deaf and 98 hearing adults matched for reading ability. While phonological awareness was a strong predictor for hearing readers, for deaf readers, orthographic precision and semantic knowledge, not phonology, predicted reading comprehension (assessed by two different tests). For deaf readers, the architecture of the reading system adapts by shifting reliance from (coarse-grained) phonological representations to high-quality orthographic and semantic representations. In Study 2, we examined the contribution of American Sign Language (ASL) variables to reading comprehension in 83 deaf adults. Fingerspelling (FS) and ASL comprehension skills predicted reading comprehension. We suggest that FS might reinforce orthographic-to-semantic mappings and that sign language comprehension may serve as a linguistic basis for the development of skilled reading in deaf signers.


Subject(s)
Deafness, Sign Language, Adult, Comprehension, Humans, Reading, Semantics
3.
Behav Res Methods ; 54(5): 2502-2521, 2022 10.
Article in English | MEDLINE | ID: mdl-34918219

ABSTRACT

Picture-naming tasks provide critical data for theories of lexical representation and retrieval and have been performed successfully in sign languages. However, the specific influences of lexical or phonological factors and stimulus properties on sign retrieval are poorly understood. To examine lexical retrieval in American Sign Language (ASL), we conducted a timed picture-naming study using 524 pictures (272 objects and 251 actions). We also compared ASL naming with previous data for spoken English for a subset of 425 pictures. Deaf ASL signers named object pictures faster and more consistently than action pictures, as previously reported for English speakers. Lexical frequency, iconicity, better name agreement, and lower phonological complexity each facilitated naming reaction times (RTs). RTs were also faster for pictures named with shorter signs (measured by average response duration). Target name agreement was higher for pictures with more iconic and shorter ASL names. The visual complexity of pictures slowed RTs and decreased target name agreement. RTs and target name agreement were correlated for ASL and English, but agreement was lower for ASL, possibly due to the English bias of the pictures. RTs were faster for ASL, which we attributed to a smaller lexicon. Overall, the results suggest that models of lexical retrieval developed for spoken languages can be adopted for signed languages, with the exception that iconicity should be included as a factor. The open-source picture-naming data set for ASL serves as an important, first-of-its-kind resource for researchers, educators, or clinicians for a variety of research, instructional, or assessment purposes.


Subject(s)
Names, Sign Language, Humans, Linguistics, Language, Reaction Time/physiology
4.
J Deaf Stud Deaf Educ ; 26(2): 263-277, 2021 03 17.
Article in English | MEDLINE | ID: mdl-33598676

ABSTRACT

ASL-LEX is a publicly available, large-scale lexical database for American Sign Language (ASL). We report on the expanded database (ASL-LEX 2.0) that contains 2,723 ASL signs. For each sign, ASL-LEX now includes a more detailed phonological description, phonological density and complexity measures, frequency ratings (from deaf signers), iconicity ratings (from hearing non-signers and deaf signers), transparency ("guessability") ratings (from non-signers), sign and videoclip durations, lexical class, and more. We document the steps used to create ASL-LEX 2.0, describe the distributional characteristics of sign properties across the lexicon, and examine the relationships among lexical and phonological properties of signs. Correlation analyses revealed that frequent signs were less iconic and phonologically simpler than infrequent signs, and that iconic signs tended to be phonologically simpler than less iconic signs. The complete ASL-LEX dataset and supplementary materials are available at https://osf.io/zpha4/ and an interactive visualization of the entire lexicon can be accessed on the ASL-LEX page: http://asl-lex.org/.
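The correlation analyses described above (e.g., frequency vs. iconicity) are rank correlations over rating columns of the database. As a minimal, stdlib-only sketch, the snippet below implements Spearman's rho and runs it on synthetic ratings; the values are illustrative placeholders, not real ASL-LEX 2.0 entries (the real data can be downloaded from the OSF link above).

```python
# Sketch of a Spearman rank correlation, the kind of analysis used to
# relate frequency and iconicity ratings across a lexicon.

def rank(values):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of tied positions, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Synthetic (frequency, iconicity) rating pairs illustrating the reported
# negative relationship: more frequent signs tend to be less iconic.
freq = [6.2, 5.8, 5.1, 4.0, 3.2, 2.5, 1.9]
icon = [1.4, 2.0, 2.3, 3.1, 4.2, 4.8, 6.0]
print(round(spearman(freq, icon), 2))  # perfectly monotone decreasing -> -1.0
```

With a real export, `freq` and `icon` would simply be two columns read from the downloaded CSV.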


Subject(s)
Deafness, Sign Language, Hearing, Humans, Linguistics, Semantics, United States
5.
Biling (Camb Engl) ; 23(3): 473-482, 2020 May.
Article in English | MEDLINE | ID: mdl-32733161

ABSTRACT

Previous work indicates that 1) adults with native sign language experience produce more manual co-speech gestures than monolingual non-signers, and 2) one year of ASL instruction increases gesture production in adults, but not enough to differentiate them from non-signers. To elucidate these effects, we asked early ASL-English bilinguals, fluent late second language (L2) signers (≥ 10 years of experience signing), and monolingual non-signers to retell a story depicted in cartoon clips to a monolingual partner. Early and L2 signers produced manual gestures at higher rates compared to non-signers, particularly iconic gestures, and used a greater variety of handshapes. These results indicate susceptibility of the co-speech gesture system to modification by extensive sign language experience, regardless of the age of acquisition. L2 signers produced more ASL signs and more handshape varieties than early signers, suggesting less separation between the ASL lexicon and the co-speech gesture system for L2 signers.

6.
Neuropsychologia ; 141: 107414, 2020 04.
Article in English | MEDLINE | ID: mdl-32142729

ABSTRACT

Previous studies with deaf adults reported reduced N170 waveform asymmetry to visual words, a finding attributed to reduced phonological mapping in left-hemisphere temporal regions compared to hearing adults. An open question remains whether this pattern indeed results from reduced phonological processing or from general neurobiological adaptations in visual processing of deaf individuals. Deaf ASL signers and hearing nonsigners performed a same-different discrimination task with visually presented words, faces, or cars, while scalp EEG time-locked to the onset of the first item in each pair was recorded. For word recognition, the typical left-lateralized N170 in hearing participants and reduced left-sided asymmetry in deaf participants were replicated. The groups did not differ on word discrimination, but better orthographic skill was associated with a larger N170 in the right hemisphere only for deaf participants. Face recognition was characterized by unique N170 signatures for both groups, and deaf individuals exhibited superior face discrimination performance. Laterality and discrimination performance effects did not generalize to the N170 responses to cars, confirming that deaf signers are not inherently less lateralized in their electrophysiological responses to words and, critically, giving support to the phonological mapping hypothesis. P1 was attenuated for deaf participants compared to hearing participants, but in both groups P1 selectively discriminated between highly learned familiar objects (words and faces) and less familiar objects (cars). The distinct electrophysiological signatures to words and faces reflected experience-driven adaptations that do not generalize to object recognition.


Subject(s)
Deafness, Adult, Electroencephalography, Functional Laterality, Hearing, Humans, Pattern Recognition, Visual, Sign Language, Visual Perception
7.
Sign Lang Linguist ; 23(1-2): 96-111, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33994844

ABSTRACT

Meir's (2010) Double Mapping Constraint (DMC) states that the use of iconic signs in metaphors is restricted to signs that preserve the structural correspondence between the articulators and the concrete source domain and between the concrete and metaphorical domains. We investigated ASL signers' comprehension of English metaphors whose translations complied with the DMC (Communication collapsed during the meeting) or violated the DMC (The acid ate the metal). Metaphors were preceded by the ASL translation of the English verb, an unrelated sign, or a still video. Participants made sensibility judgments. Response times (RTs) were faster for DMC-compliant sentences with verb primes compared to unrelated primes or the still baseline. RTs for DMC-violation sentences were longer when preceded by verb primes. We propose that the structured iconicity of the ASL verbs primed the semantic features involved in the iconic mapping, and that these primed semantic features facilitated comprehension of DMC-compliant metaphors and slowed comprehension of DMC-violation metaphors.

8.
Lang Cogn ; 11(2): 208-234, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31798755

ABSTRACT

Iconicity is often defined as the resemblance between a form and a given meaning, while transparency is defined as the ability to infer a given meaning based on the form. This study examined the influence of knowledge of American Sign Language (ASL) on the perceived iconicity of signs and the relationship between iconicity, transparency (correctly guessed signs), 'perceived transparency' (transparency ratings of the guesses), and 'semantic potential' (the diversity (H index) of guesses). Experiment 1 compared iconicity ratings by deaf ASL signers and hearing non-signers for 991 signs from the ASL-LEX database. Signers and non-signers' ratings were highly correlated; however, the groups provided different iconicity ratings for subclasses of signs: nouns vs. verbs, handling vs. entity, and one- vs. two-handed signs. In Experiment 2, non-signers guessed the meaning of 430 signs and rated them for how transparent their guessed meaning would be for others. Only 10% of guesses were correct. Iconicity ratings correlated with transparency (correct guesses), perceived transparency ratings, and semantic potential (H index). Further, some iconic signs were perceived as non-transparent and vice versa. The study demonstrates that linguistic knowledge mediates perceived iconicity distinctly from gesture and highlights critical distinctions between iconicity, transparency (perceived and objective), and semantic potential.

9.
J Deaf Stud Deaf Educ ; 23(4): 399-407, 2018 10 01.
Article in English | MEDLINE | ID: mdl-29733368

ABSTRACT

This study investigated the impact of language modality and age of acquisition on semantic fluency in American Sign Language (ASL) and English. Experiment 1 compared semantic fluency performance (e.g., name as many animals as possible in 1 min) for deaf native and early ASL signers and hearing monolingual English speakers. The results showed similar fluency scores in both modalities when fingerspelled responses were included for ASL. Experiment 2 compared ASL and English fluency scores in hearing native and late ASL-English bilinguals. Semantic fluency scores were higher in English (the dominant language) than ASL (the non-dominant language), regardless of age of ASL acquisition. Fingerspelling was relatively common in all groups of signers and was used primarily for low-frequency items. We conclude that semantic fluency is sensitive to language dominance and that performance can be compared across the spoken and signed modality, but fingerspelled responses should be included in ASL fluency scores.


Subject(s)
Sign Language, Adult, Aptitude, Female, Humans, Language, Male, Multilingualism, Persons With Hearing Impairments, Semantics
10.
Appl Psycholinguist ; 39(5): 961-987, 2018 Sep.
Article in English | MEDLINE | ID: mdl-31595097

ABSTRACT

American Sign Language (ASL) and English differ in linguistic resources available to express visual-spatial information. In a referential communication task, we examined the effect of language modality on the creation and mutual acceptance of reference to non-nameable figures. In both languages, description times reduced over iterations and references to the figures' geometric properties ("shape-based reference") declined over time in favor of expressions describing the figures' resemblance to nameable objects ("analogy-based reference"). ASL signers maintained a preference for shape-based reference until the final (sixth) round, while English speakers transitioned toward analogy-based reference by Round 3. Analogy-based references were more time efficient (associated with shorter round description times). Round completion times were longer for ASL than for English, possibly due to gaze demands of the task and/or to more shape-based descriptions. Signers' referring expressions remained unaffected by figure complexity while speakers preferred analogy-based expressions for complex figures and shape-based expressions for simple figures. Like speech, co-speech gestures decreased over iterations. Gestures primarily accompanied shape-based references, but listeners rarely looked at these gestures, suggesting that they were recruited to aid the speaker rather than the addressee. Overall, different linguistic resources (classifier constructions vs. geometric vocabulary) imposed distinct demands on referring strategies in ASL and English.

11.
Neuropsychologia ; 106: 298-309, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28986268

ABSTRACT

The temporo-occipitally distributed N170 ERP component is hypothesized to reflect print-tuning in skilled readers. This study investigated whether skilled deaf and hearing readers (matched on reading ability, but not phonological awareness) exhibit similar N170 patterns, given their distinct experiences learning to read. Thirty-two deaf and 32 hearing adults viewed words and symbol strings in a familiarity judgment task. In the N170 epoch (120-240 ms), hearing readers produced greater negativity for words than symbols at left hemisphere (LH) temporo-parietal and occipital sites, while deaf readers only showed this asymmetry at occipital sites. Linear mixed effects regression was used to examine the influence of continuous measures of reading, spelling, and phonological skills on the N170 (120-240 ms). For deaf readers, better reading ability was associated with a larger N170 over the right hemisphere (RH), but for hearing readers better reading ability was associated with a smaller RH N170. Better spelling ability was related to larger occipital N170s in deaf readers, but this relationship was weak in hearing readers. Better phonological awareness was associated with smaller N170s in the LH for hearing readers, but this association was weaker and in the RH for deaf readers. The results support the phonological mapping hypothesis for a left-lateralized temporo-parietal N170 in hearing readers and indicate that skilled reading is characterized by distinct patterns of neural tuning to print in deaf and hearing adults.


Subject(s)
Brain/physiopathology, Deafness/physiopathology, Evoked Potentials, Reading, Adolescent, Adult, Comprehension, Electroencephalography, Female, Functional Laterality, Hearing, Humans, Male, Middle Aged, Persons With Hearing Impairments, Phonetics, Young Adult
12.
Behav Res Methods ; 49(2): 784-801, 2017 04.
Article in English | MEDLINE | ID: mdl-27193158

ABSTRACT

ASL-LEX is a lexical database that catalogues information about nearly 1,000 signs in American Sign Language (ASL). It includes the following information: subjective frequency ratings from 25-31 deaf signers, iconicity ratings from 21-37 hearing non-signers, videoclip duration, sign length (onset and offset), grammatical class, and whether the sign is initialized, a fingerspelled loan sign, or a compound. Information about English translations is available for a subset of signs (e.g., alternate translations, translation consistency). In addition, phonological properties (sign type, selected fingers, flexion, major and minor location, and movement) were coded and used to generate sub-lexical frequency and neighborhood density estimates. ASL-LEX is intended for use by researchers, educators, and students who are interested in the properties of the ASL lexicon. An interactive website where the database can be browsed and downloaded is available at http://asl-lex.org .
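The phonological coding described above (sign type, selected fingers, flexion, major and minor location, movement) is what makes neighborhood density estimates possible. The sketch below shows one plausible way such an estimate could be computed; the feature names and the "differ in at most one coded property" neighbor definition are illustrative assumptions, not ASL-LEX's documented method, and the entries are synthetic.

```python
# Hedged sketch: estimating phonological neighborhood density from coded
# sign properties. Neighbor definition and feature names are assumptions.

FEATURES = ["sign_type", "selected_fingers", "flexion",
            "major_location", "minor_location", "movement"]

def neighborhood_density(target, lexicon):
    """Count signs differing from `target` in at most one coded property."""
    count = 0
    for entry in lexicon:
        if entry is target:
            continue  # a sign is not its own neighbor
        diffs = sum(entry[f] != target[f] for f in FEATURES)
        if diffs <= 1:
            count += 1
    return count

# Tiny synthetic lexicon (NOT real ASL-LEX entries).
lexicon = [
    {"sign_type": "one-handed", "selected_fingers": "index", "flexion": "flat",
     "major_location": "head", "minor_location": "chin", "movement": "tap"},
    {"sign_type": "one-handed", "selected_fingers": "index", "flexion": "flat",
     "major_location": "head", "minor_location": "forehead", "movement": "tap"},
    {"sign_type": "two-handed", "selected_fingers": "all", "flexion": "curved",
     "major_location": "torso", "minor_location": "chest", "movement": "circle"},
]
print(neighborhood_density(lexicon[0], lexicon))  # second entry differs only in minor_location -> 1
```

Sub-lexical frequency estimates would follow the same pattern: tally how often each feature value occurs across the lexicon and look up the target's values.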


Subject(s)
Databases, Factual, Sign Language, Adult, Female, Humans, Language, Male, Translations, United States, Young Adult
13.
J Deaf Stud Deaf Educ ; 22(1): 72-87, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27789552

ABSTRACT

We conducted three immediate serial recall experiments that manipulated type of stimulus presentation (printed or fingerspelled words) and word similarity (speech-based or manual). Matched deaf American Sign Language signers and hearing non-signers participated (mean reading age = 14-15 years). Speech-based similarity effects were found for both stimulus types indicating that deaf signers recoded both printed and fingerspelled words into a speech-based phonological code. A manual similarity effect was not observed for printed words indicating that print was not recoded into fingerspelling (FS). A manual similarity effect was observed for fingerspelled words when similarity was based on joint angles rather than on handshape compactness. However, a follow-up experiment suggested that the manual similarity effect was due to perceptual confusion at encoding. Overall, these findings suggest that FS is strongly linked to English phonology for deaf adult signers who are relatively skilled readers. This link between fingerspelled words and English phonology allows for the use of a more efficient speech-based code for retaining fingerspelled words in short-term memory and may strengthen the representation of English vocabulary.


Subject(s)
Deafness/psychology, Memory, Short-Term/physiology, Sign Language, Adolescent, Adult, Awareness/physiology, Case-Control Studies, Female, Fingers, Humans, Male, Middle Aged, Psychological Tests, Reading, Recognition, Psychology/physiology, Speech/physiology, Young Adult