Results 1 - 14 of 14
1.
Sci Rep ; 14(1): 1043, 2024 01 10.
Article in English | MEDLINE | ID: mdl-38200108

ABSTRACT

The impact of adverse listening conditions on spoken language perception is well established, but the role of suboptimal viewing conditions in signed language processing is less clear. Viewing angle, i.e. the physical orientation of a perceiver relative to a signer, varies in many everyday deaf community settings for L1 signers and may impact comprehension. Further, processing signs from varied viewing angles may be more difficult for late L2 learners of a signed language, who encounter less variation in sign input while learning. Using a semantic decision task in a distance priming paradigm, we show that British Sign Language signers are slower and less accurate at comprehending signs shown from side viewing angles, with L2 learners in particular making disproportionately more errors when viewing signs from side angles. We also investigated how individual differences in mental rotation ability modulate the processing of signs from different angles. Speed and accuracy on the BSL task correlated with mental rotation ability, suggesting that signers may mentally represent signs from a frontal view and use mental rotation to process signs from other viewing angles. Our results extend the literature on viewpoint specificity in visual recognition to linguistic stimuli. The data suggest that L2 signed language learners should maximise their exposure to diverse signed language input, in terms of both viewing angle and other challenging viewing conditions, to improve comprehension.
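A minimal sketch of the individual-differences analysis described above: correlating per-participant speed and accuracy on the BSL task with mental rotation scores. The file and column names are assumptions for illustration, not the study's actual data.

```python
import pandas as pd
from scipy.stats import pearsonr

# One row per participant; file and column names are hypothetical.
df = pd.read_csv("bsl_task_scores.csv")

for measure in ["mean_rt_ms", "accuracy"]:
    r, p = pearsonr(df["mental_rotation_score"], df[measure])
    print(f"mental rotation vs {measure}: r = {r:.2f}, p = {p:.3f}")
```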


Subject(s)
Learning , Sign Language , Humans , Individuality , Linguistics , Physical Examination
2.
Front Psychol ; 13: 932370, 2022.
Article in English | MEDLINE | ID: mdl-36186342

ABSTRACT

Sign language interpreting (SLI) is a cognitively challenging task performed mostly by second language learners (i.e., not raised using a sign language as a home language). SLI students must first gain language fluency in a new visuospatial modality and then move between spoken and signed modalities as they interpret. As a result, many students plateau before reaching working fluency, and SLI training program drop-out rates are high. However, we know little about the requisite skills to become a successful interpreter: the few existing studies investigating SLI aptitude in terms of linguistic and cognitive skills lack baseline measures. Here we report a 3-year exploratory longitudinal skills assessment study with British Sign Language (BSL)-English SLI students at two universities (n = 33). Our aims were twofold: first, to better understand the prerequisite skills that lead to successful SLI outcomes; second, to better understand how signing and interpreting skills impact other aspects of cognition. A battery of tasks was completed at four time points to assess skills, including but not limited to: multimodal and unimodal working memory, 2-dimensional and 3-dimensional mental rotation (MR), and English comprehension. Dependent measures were BSL and SLI course grades, BSL reproduction tests, and consecutive SLI tasks. Results reveal that initial BSL proficiency and 2D-MR were associated with selection for the degree program, while visuospatial working memory was linked to continuing with the program. 3D-MR improved throughout the degree, alongside some limited gains in auditory, visuospatial, and multimodal working memory tasks. Visuospatial working memory and MR were the skills most closely associated with BSL and SLI outcomes, particularly on tasks involving sign language production, thus highlighting the importance of cognition related to the visuospatial modality. These preliminary data will inform SLI training programs, from applicant selection to curriculum design.
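A minimal sketch of one analysis in this vein: a logistic regression testing whether baseline skills predict staying on the programme. The predictor set, variable names, and file are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per student; `continued` is a 0/1 outcome. Names are hypothetical.
df = pd.read_csv("sli_longitudinal.csv")

fit = smf.logit("continued ~ vs_working_memory_t1 + bsl_proficiency_t1",
                data=df).fit()
print(fit.summary())
```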

3.
Q J Exp Psychol (Hove) ; 75(9): 1694-1710, 2022 Sep.
Article in English | MEDLINE | ID: mdl-34704887

ABSTRACT

Previous evidence shows that words with implicit spatial meaning or metaphorical spatial associations are perceptually simulated and can guide attention to associated locations (e.g., bird-upward location). In turn, simulated representations interfere with visual perception at an associated location. The present study investigates the effect of spatial associations on short-term verbal recognition memory, across two experiments, to disambiguate between modal and amodal accounts of spatial interference effects. Participants in both experiments encoded words presented in congruent and incongruent locations. Congruent and incongruent locations were based on an independent norming task. In Experiment 1, an auditorily presented word probed participants' memory as they were visually cued to either the original location of the probe word or a diagonal location at retrieval. In Experiment 2, there was no cue at retrieval but a neutral encoding condition in which words normed to central locations were shown. Results show that spatial associations affected memory performance even though spatial information was neither relevant nor necessary for successful retrieval: words in Experiment 1 were retrieved more accurately when there was a visual cue in the congruent location at retrieval, but only if they were encoded in a non-canonical position. A visual cue in the congruent location slowed down memory performance when retrieving highly imageable words. With no cue at retrieval (Experiment 2), participants were better at remembering spatially congruent words as opposed to neutral words. Results provide evidence in support of sensorimotor simulation in verbal memory and a perceptual competition account of the spatial interference effect.


Subject(s)
Mental Recall , Recognition, Psychology , Cues , Humans , Memory, Short-Term , Visual Perception
4.
Mem Cognit ; 49(6): 1204-1219, 2021 08.
Article in English | MEDLINE | ID: mdl-33864238

ABSTRACT

English sentences with double center-embedded clauses are read faster when they are made ungrammatical by removing one of the required verb phrases. This phenomenon is known as the missing-VP effect. German and Dutch speakers do not experience the missing-VP effect when reading their native language, but they do when reading English as a second language (L2). We investigate whether native Dutch speakers show the missing-VP effect when reading L2 English because their knowledge of English is similar to that of native English speakers (the high exposure account), or because of the difficulty of L2 reading (the low proficiency account). In an eye-tracking study, we compare the size of the missing-VP effect between native Dutch and native English participants, and across native Dutch participants with varying L2 English proficiency and exposure. Results provide evidence for both accounts, suggesting that both native-like knowledge of English and L2 reading difficulty play a role.
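A minimal sketch of how the size of the missing-VP effect might be compared across groups: a linear mixed model on log reading times with a grammaticality by group interaction and a random intercept per participant. File and column names are illustrative, not the study's code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Per-trial reading times at the critical region; names are hypothetical.
rt = pd.read_csv("critical_region_rts.csv")

# The missing-VP effect is the grammatical-vs-ungrammatical difference;
# the interaction term tests whether its size differs between groups.
m = smf.mixedlm("log_rt ~ grammatical * native_language",
                data=rt, groups=rt["participant"])
print(m.fit().summary())
```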


Subject(s)
Language , Multilingualism , Humans , Knowledge , Reading
5.
Psychol Res ; 84(3): 667-684, 2020 Apr.
Article in English | MEDLINE | ID: mdl-30173279

ABSTRACT

People revisit spatial locations of visually encoded information when they are asked to retrieve that information, even when the visual image is no longer present. Such "looking at nothing" during retrieval is likely modulated by memory load (i.e., mental effort to maintain and reconstruct information) and the strength of mental representations. We investigated whether words that are more difficult to remember also lead to more looks to relevant, blank locations. Participants were presented with four nouns on a two-by-two grid. A number of lexico-semantic variables were controlled to form high-difficulty and low-difficulty noun sets. Results reveal more frequent looks to blank locations during retrieval of high-difficulty nouns compared to low-difficulty ones. Mixed-effects modelling demonstrates that imagery-related semantic factors (imageability and concreteness) predict looking at nothing during retrieval. Results provide the first direct evidence that looking at nothing is modulated by word difficulty and, in particular, by word imageability. Overall, the research provides substantial support for the integrated memory account of linguistic stimuli and for looking at nothing as a form of mental imagery.
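A minimal sketch of the mixed-effects analysis reported above: predicting looks to the now-blank encoding location from imageability and concreteness, with a random intercept per participant. Uses statsmodels; the file and column names are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per word per participant; names are hypothetical.
looks = pd.read_csv("retrieval_fixations.csv")

# Linear mixed model: looks to the blank encoding location during retrieval.
m = smf.mixedlm("blank_location_looks ~ imageability + concreteness",
                data=looks, groups=looks["participant"])
print(m.fit().summary())
```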


Subject(s)
Eye Movements/physiology , Imagination/physiology , Language , Memory , Mental Recall/physiology , Semantics , Adolescent , Adult , Female , Humans , Male , Photic Stimulation , Young Adult
6.
Front Psychol ; 9: 1433, 2018.
Article in English | MEDLINE | ID: mdl-30154747

ABSTRACT

Considerable evidence now shows that all languages, signed and spoken, exhibit a significant amount of iconicity. We examined how the visual-gestural modality of signed languages facilitates iconicity for different kinds of lexical meanings compared to the auditory-vocal modality of spoken languages. We used iconicity ratings of hundreds of signs and words to compare iconicity across the vocabularies of two signed languages (American Sign Language and British Sign Language) and two spoken languages (English and Spanish). We examined (1) the correlation in iconicity ratings between the languages; (2) the relationship between iconicity and an array of semantic variables (ratings of concreteness, sensory experience, imageability, and perceptual strength of vision, audition, touch, smell, and taste); (3) how iconicity varies between broad lexical classes (nouns, verbs, adjectives, grammatical words, and adverbs); and (4) how it varies between more specific semantic categories (e.g., manual actions, clothes, colors). The results show several notable patterns that characterize how iconicity is spread across the four vocabularies. There were significant correlations in the iconicity ratings between the four languages, including English with ASL, BSL, and Spanish. The highest correlation was between ASL and BSL, suggesting iconicity may be more transparent in signs than words. In each language, iconicity was distributed according to the semantic variables in ways that reflect the semiotic affordances of the modality (e.g., more concrete meanings were more iconic in signs, not words; more auditory meanings were more iconic in words, not signs; more tactile meanings were more iconic in both signs and words). Analysis of the 220 meanings with ratings in all four languages further showed characteristic patterns of iconicity across broad and specific semantic domains, including those that distinguished between signed and spoken languages (e.g., verbs were more iconic in ASL, BSL, and English, but not Spanish; manual actions were especially iconic in ASL and BSL; adjectives were more iconic in English and Spanish; color words were especially low in iconicity in ASL and BSL). These findings provide the first quantitative account of how iconicity is spread across the lexicons of signed languages in comparison to spoken languages.
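A minimal sketch of the cross-language comparison: pairwise correlations of iconicity ratings over the meanings shared by all four languages. The file layout (one row per meaning, one rating column per language) is an assumption.

```python
import pandas as pd

# One row per shared meaning, one iconicity-rating column per language
# (hypothetical layout).
ratings = pd.read_csv("iconicity_ratings.csv")

# Pairwise Pearson correlations; the abstract reports ASL-BSL as the highest.
print(ratings[["ASL", "BSL", "English", "Spanish"]].corr(method="pearson"))
```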

7.
Cognition ; 164: 144-149, 2017 07.
Article in English | MEDLINE | ID: mdl-28427030

ABSTRACT

Unlike spoken language monitoring, which draws on the phonological loop, sign language users' own production provides mostly proprioceptive feedback and only minimal visual feedback. Here we investigate whether sign production influences sign comprehension by exploiting hand dominance in a picture-sign matching task performed by left-handed and right-handed signers. If all signers respond better to right-handed input, this would suggest that a frequency effect in sign perception drives comprehension. However, if signers respond better to congruent-handed input, this would implicate the production system's role in comprehension. We found evidence for both hypotheses, with variation dependent on sign type. All signers responded faster to right-handed input for phonologically simple, one-handed signs. However, left-handed signers preferred congruent-handed input for phonologically complex, two-handed asymmetrical signs. These results are in line with a weak version of the motor theory of speech perception, in which the motor system is engaged only when comprehending complex input.


Subject(s)
Comprehension/physiology , Functional Laterality/physiology , Psychomotor Performance/physiology , Sign Language , Visual Perception/physiology , Adult , Deafness , Female , Humans , Male , Middle Aged , Proprioception/physiology , Young Adult
8.
Neuropsychologia ; 100: 171-194, 2017 06.
Article in English | MEDLINE | ID: mdl-28392303

ABSTRACT

Verbs with multiple senses can show varying argument structure frequencies depending on the underlying sense. When acknowledge is used to mean 'recognise', it takes a direct object (DO), but when it is used to mean 'admit' it prefers a sentence complement (SC). The purpose of this study was to investigate whether people with aphasia (PWA) can exploit such meaning-structure probabilities during the reading of temporarily ambiguous sentences, as demonstrated for neurologically healthy individuals (NHI) in a self-paced reading study (Hare et al., 2003). Eleven people with mild or moderate aphasia and eleven neurologically healthy control participants read sentences while their eyes were tracked. Using materials adapted from Hare et al., target sentences containing an SC structure (e.g. He acknowledged (that) his friends would probably help him a lot) were presented following a context prime that biased either a direct object (DO-bias) or sentence complement (SC-bias) reading of the verb. Half of the stimulus sentences omitted that, making the post-verbal noun phrase (his friends) structurally ambiguous. Both groups of participants were influenced by structural ambiguity as well as by the context bias, indicating that PWA can, like NHI, use their knowledge of a verb's sense-based argument structure frequency during online sentence reading. However, the individuals with aphasia showed delayed reading patterns and some individual differences in their sensitivity to context and ambiguity cues. These differences relative to the NHI may contribute to difficulties in sentence comprehension in aphasia.


Subject(s)
Aphasia/physiopathology , Comprehension/physiology , Cues , Semantics , Aged , Aphasia/psychology , Bias , Eye Movements , Female , Humans , Male , Middle Aged , Names , Psycholinguistics , Reading
9.
Behav Res Methods ; 45(4): 1182-90, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23404612

ABSTRACT

We make available word-by-word self-paced reading times and eye-tracking data over a sample of English sentences from narrative sources. These data are intended to form a gold standard for the evaluation of computational psycholinguistic models of sentence comprehension in English. We describe stimuli selection and data collection and present descriptive statistics, as well as comparisons between the two sets of reading times.
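A minimal sketch of how such a gold standard might be used: regress the reading times on a candidate model's per-word surprisal alongside standard covariates, so that a reliable surprisal coefficient indicates predictive value. The file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Word-by-word reading times joined with per-word model surprisal
# (hypothetical file and columns).
gold = pd.read_csv("gold_standard_rts.csv")

fit = smf.ols("self_paced_rt ~ surprisal + word_length + log_frequency",
              data=gold).fit()
print(fit.summary())
```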


Subject(s)
Comprehension , Eye Movements , Language , Reading , Adolescent , Female , Humans , Male , Models, Psychological , Reaction Time , Young Adult
10.
Psychol Sci ; 23(12): 1443-8, 2012 Dec.
Article in English | MEDLINE | ID: mdl-23150275

ABSTRACT

An arbitrary link between linguistic form and meaning is generally considered a universal feature of language. However, iconic (i.e., nonarbitrary) mappings between properties of meaning and features of linguistic form are also widely present across languages, especially signed languages. Although recent research has shown a role for sign iconicity in language processing, findings on the role of iconicity in sign-language development have been mixed. In this article, we present clear evidence that iconicity plays a role in sign-language acquisition for both the comprehension and production of signs. Signed languages were taken as a starting point because they tend to encode a higher degree of iconic form-meaning mappings in their lexicons than spoken languages do, but our findings are more broadly applicable: specifically, we hypothesize that iconicity is fundamental to all languages (signed and spoken) and that it serves to bridge the gap between linguistic form and human experience.


Subject(s)
Language Development , Language , Learning/physiology , Sign Language , Child, Preschool , Female , Humans , Infant , Male , Psycholinguistics/methods , United Kingdom
11.
Psychol Sci ; 21(8): 1158-67, 2010 Aug.
Article in English | MEDLINE | ID: mdl-20644107

ABSTRACT

In contrast to the single-articulatory system of spoken languages, sign languages employ multiple articulators, including the hands and the mouth. We asked whether manual components and mouthing patterns of lexical signs share a semantic representation, and whether their relationship is affected by the differing language experience of deaf and hearing native signers. We used picture-naming tasks and word-translation tasks to assess whether the same semantic effects occur in manual production and mouthing production. Semantic errors on the hands were more common in the English-translation task than in the picture-naming task, but errors in mouthing patterns showed a different trend. We conclude that mouthing is represented and accessed through a largely separable channel, rather than being bundled with manual components in the sign lexicon. Results were comparable for deaf and hearing signers; differences in language experience did not play a role. These results provide novel insight into coordinating different modalities in language production.


Subject(s)
Sign Language , Female , Hand , Humans , Language , Male , Mouth , Persons With Hearing Impairments/psychology , Semantics , United Kingdom , Video Recording , Young Adult
12.
J Exp Psychol Learn Mem Cogn ; 36(4): 1017-27, 2010 Jul.
Article in English | MEDLINE | ID: mdl-20565217

ABSTRACT

Signed languages exploit the visual/gestural modality to create iconic expression across a wide range of basic conceptual structures, in which the phonetic resources of the language are built up into an analogue of a mental image (Taub, 2001). Previously, we demonstrated a processing advantage when iconic properties of signs were made salient in a corresponding picture during a picture-sign matching task (Thompson, Vinson, & Vigliocco, 2009). The current study investigates the extent of iconicity effects with a phonological decision task (does the sign involve straight or curved fingers?) in which the meaning of the sign is irrelevant. The results show that iconicity is a significant predictor of response latencies and accuracy, with more iconic signs leading to slower responses and more errors. We conclude that meaning is activated automatically for highly iconic properties of a sign, and that this leads to interference in making form-based decisions. Thus, the current study extends previous work by demonstrating that iconicity effects permeate the entire language system, arising automatically even when access to meaning is not needed.
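A minimal sketch of the kind of analysis reported: iconicity as a trial-level predictor of latency (linear mixed model) and accuracy (logistic regression). The data layout and names are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per trial; `correct` is 0/1. Names are hypothetical.
trials = pd.read_csv("phonological_decision_trials.csv")

# Latency: more iconic signs are predicted to be slower.
rt_fit = smf.mixedlm("log_rt ~ iconicity", data=trials,
                     groups=trials["participant"]).fit()

# Accuracy: more iconic signs are predicted to draw more errors.
acc_fit = smf.logit("correct ~ iconicity", data=trials).fit()
print(rt_fit.summary(), acc_fit.summary(), sep="\n")
```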


Subject(s)
Comprehension/physiology , Deafness/physiopathology , Decision Making/physiology , Phonetics , Psycholinguistics , Sign Language , Adult , Aged , Female , Humans , Male , Middle Aged , Pattern Recognition, Visual/physiology , Reaction Time/physiology , Young Adult
13.
Front Psychol ; 1: 227, 2010.
Article in English | MEDLINE | ID: mdl-21833282

ABSTRACT

Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings found in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity also needs to be recognized as a general property of language, one which may serve the function of reducing the gap between linguistic form and conceptual representation, allowing the language system to "hook up" to motor, perceptual, and affective experience.

14.
J Exp Psychol Learn Mem Cogn ; 35(2): 550-7, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19271866

ABSTRACT

Signed languages exploit iconicity (the transparent relationship between meaning and form) to a greater extent than spoken languages, where it is largely limited to onomatopoeia. In a picture-sign matching experiment measuring reaction times, the authors examined the potential advantage of iconicity for both 1st- and 2nd-language learners of American Sign Language (ASL). The results show that native ASL signers are faster to respond when a specific property iconically represented in a sign is made salient in the corresponding picture, thus providing evidence that a closer mapping between meaning and form can aid in lexical retrieval. While late 2nd-language learners appear to use iconicity as an aid to learning signs (R. Campbell, P. Martin, & T. White, 1992), they did not show the same facilitation effect as native ASL signers, suggesting that the task tapped into more automatic language processes. Overall, the findings suggest that completely arbitrary mappings between meaning and form may not be more advantageous in language; rather, arbitrariness may simply be an accident of modality.


Subject(s)
Comprehension , Recognition, Psychology , Semantics , Sign Language , Adolescent , Adult , Deafness/psychology , Deafness/rehabilitation , Female , Humans , Language Development , Male , Multilingualism , Pattern Recognition, Visual , Psycholinguistics , Reaction Time , Young Adult