Results 1 - 6 of 6
1.
Proc Natl Acad Sci U S A ; 115(45): 11369-11376, 2018 Nov 06.
Article in English | MEDLINE | ID: mdl-30397135

ABSTRACT

Is there a universal hierarchy of the senses, such that some senses (e.g., vision) are more accessible to consciousness and linguistic description than others (e.g., smell)? The long-standing presumption in Western thought has been that vision and audition are more objective than the other senses, serving as the basis of knowledge and understanding, whereas touch, taste, and smell are crude and of little value. This predicts that humans ought to be better at communicating about sight and hearing than the other senses, and decades of work based on English and related languages certainly suggests this is true. However, how well does this reflect the diversity of languages and communities worldwide? To test whether there is a universal hierarchy of the senses, stimuli from the five basic senses were used to elicit descriptions in 20 diverse languages, including 3 unrelated sign languages. We found that languages differ fundamentally in which sensory domains they linguistically code systematically, and how they do so. The tendency for better coding in some domains can be explained in part by cultural preoccupations. Although languages seem free to elaborate specific sensory domains, some general tendencies emerge: for example, with some exceptions, smell is poorly coded. The surprise is that, despite the gradual phylogenetic accumulation of the senses, and the imbalances in the neural tissue dedicated to them, no single hierarchy of the senses imposes itself upon language.


Subject(s)
Auditory Perception/physiology, Language, Olfactory Perception/physiology, Psycholinguistics, Taste Perception/physiology, Touch Perception/physiology, Visual Perception/physiology, Africa, Asia, Cross-Cultural Comparison, Cultural Diversity, Humans, Latin America, Phonetics, Semantics, Sign Language
2.
Appl Psycholinguist ; 39(5): 961-987, 2018 Sep.
Article in English | MEDLINE | ID: mdl-31595097

ABSTRACT

American Sign Language (ASL) and English differ in linguistic resources available to express visual-spatial information. In a referential communication task, we examined the effect of language modality on the creation and mutual acceptance of reference to non-nameable figures. In both languages, description times reduced over iterations and references to the figures' geometric properties ("shape-based reference") declined over time in favor of expressions describing the figures' resemblance to nameable objects ("analogy-based reference"). ASL signers maintained a preference for shape-based reference until the final (sixth) round, while English speakers transitioned toward analogy-based reference by Round 3. Analogy-based references were more time efficient (associated with shorter round description times). Round completion times were longer for ASL than for English, possibly due to gaze demands of the task and/or to more shape-based descriptions. Signers' referring expressions remained unaffected by figure complexity while speakers preferred analogy-based expressions for complex figures and shape-based expressions for simple figures. Like speech, co-speech gestures decreased over iterations. Gestures primarily accompanied shape-based references, but listeners rarely looked at these gestures, suggesting that they were recruited to aid the speaker rather than the addressee. Overall, different linguistic resources (classifier constructions vs. geometric vocabulary) imposed distinct demands on referring strategies in ASL and English.

3.
Patient Educ Couns ; 2015 Jun 22.
Article in English | MEDLINE | ID: mdl-26162955

ABSTRACT

OBJECTIVES: By juxtaposing the literature on signed language interpreting with that on spoken language interpreting, we provide a narrative review exploring the complexity of emotion management in interpreter-mediated medical encounters. METHODS: We conducted a literature search through library databases and Google Scholar using varied combinations of search terms, including interpreter, emotion, culture, and health care. RESULTS: We examine (a) interpreters' management and performance of others' emotions, (b) interpreters' management and performance of their own emotions, and (c) the impacts of emotion work on healthcare interpreters. CONCLUSION: By problematizing the roles and functions of emotion and emotion work in interpreter-mediated medical encounters, we propose a normative model to guide future research on and practices of interpreters' emotion management in cross-cultural care. PRACTICE IMPLICATIONS: Quality and equality of care should serve as the guiding principles for interpreters' decision-making about their emotions and emotion work. Rather than adopting a predetermined practice, interpreters should evaluate and prioritize the various clinical, interpersonal, and therapeutic objectives as they consider the best practice in managing their own and other speakers' emotions.

4.
Interpreting (Amst) ; 17(2): 145-166, 2015.
Article in English | MEDLINE | ID: mdl-28855844

ABSTRACT

Among spoken language interpreters, a long-standing question regarding directionality is whether interpretations are better when working into one's native language (L1) or into one's 'active' non-native language (L2). In contrast to studies that support working into L1, signed language interpreters report a preference for working into L2. Accordingly, we investigated whether signed language interpreters actually perform better when interpreting into their L2 (American Sign Language) or into their L1 (English). Interpretations by 30 interpreters (15 novice, 15 expert), delivered under experimental conditions, were assessed on accuracy (semantic content) and articulation quality (flow, speed, and prosody). For both measures, novices scored significantly better when interpreting into English (L1); experts were equally accurate, and showed similar articulation quality, in both directions. The results for the novice interpreters support the hypothesis that the difficulty of L2 production drives interpreting performance in relation to directionality. Findings also indicate a disconnect between direction preference and interpreting performance. Novices' perception of their ASL production ability may be distorted because they can default to fingerspelling and transcoding. Weakness in self-monitoring of signing may also lead novices to overrate their ASL skills. Interpreter educators should stress misperceptions of signing proficiency that arise from available, but inappropriate, strategies.

5.
Cognition ; 134: 232-44, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25460395

ABSTRACT

A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding is thought to reflect a fundamental difference in auditory vs. visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent image of a bouncing ball with a distinct collision point on the floor) was able to drive synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggest that visual timing may access higher-order beat perception mechanisms for deaf individuals. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than has been previously reported.


Subject(s)
Auditory Perception/physiology, Deafness/physiopathology, Psychomotor Performance/physiology, Time Perception/physiology, Visual Perception/physiology, Adult, Female, Humans, Male
6.
Biling (Camb Engl) ; 16(3): 624-636, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23833563

ABSTRACT

Spoken language (unimodal) interpreters often prefer to interpret from their non-dominant language (L2) into their native language (L1). Anecdotally, signed language (bimodal) interpreters express the opposite bias, preferring to interpret from L1 (spoken language) into L2 (signed language). We conducted a large survey study (N=1,359) of both unimodal and bimodal interpreters that confirmed these preferences. The L1 to L2 direction preference was stronger for novice than expert bimodal interpreters, while novice and expert unimodal interpreters did not differ from each other. The results indicated that the different direction preferences for bimodal and unimodal interpreters cannot be explained by language production-comprehension asymmetries or by work or training experiences. We suggest that modality and language-specific features of signed languages drive the directionality preferences of bimodal interpreters. Specifically, we propose that fingerspelling, transcoding (literal word-for-word translation), self-monitoring, and consumers' linguistic variation influence the preference of bimodal interpreters for working into their L2.
