Results 1 - 3 of 3
1.
J Child Lang ; : 1-34, 2023 Mar 09.
Article in English | MEDLINE | ID: mdl-36891925

ABSTRACT

Much like early speech, early signing is characterised by modifications. Sign language phonology has been analysed on the feature level since the 1980s, yet acquisition studies predominantly examine handshape, location, and movement. This study is the first to analyse the acquisition of phonology in the sign language of a Balinese village with a vibrant signing community, Kata Kolok (KK), and applies the same feature analysis to adult and child data. We analyse longitudinal data of four deaf children from the Kata Kolok Child Signing Corpus. The form comparison of child productions and adult targets yields three main findings: i) handshape modifications are most frequent, echoing cross-linguistic patterns; ii) modification rates of other features differ from previous studies, possibly due to differences in methodology or KK's phonology; iii) co-occurrence of modifications within a sign suggests feature interdependencies. We argue that nuanced approaches to child signing are necessary to understand the complexity of early signing.

2.
Emotion ; 20(8): 1435-1445, 2020 Dec.
Article in English | MEDLINE | ID: mdl-31478724

ABSTRACT

Are emotional expressions shaped by specialized innate mechanisms that guide learning, or do they develop exclusively from learning without innate preparedness? Here we test whether nonverbal affective vocalizations produced by bilaterally congenitally deaf adults contain emotional information that is recognizable to naive listeners. Because these deaf individuals have had no opportunity for auditory learning, the presence of such an association would imply that mappings between emotions and vocalizations are buffered against the absence of input that is typically important for their development and are thus at least partly innate. We recorded nonverbal vocalizations expressing 9 emotions from 8 deaf individuals (435 tokens) and 8 matched hearing individuals (536 tokens). These vocalizations were submitted to an acoustic analysis and used in a recognition study in which naive listeners (n = 812) made forced-choice judgments. Our results show that naive listeners can reliably infer many emotional states from nonverbal vocalizations produced by deaf individuals. In particular, deaf vocalizations of fear, disgust, sadness, amusement, sensual pleasure, surprise, and relief were recognized at better-than-chance levels, whereas anger and achievement/triumph vocalizations were not. Differences were found on most acoustic features of the vocalizations produced by deaf as compared with hearing individuals. Our results suggest that there is an innate component to the associations between human emotions and vocalizations.


Subject(s)
Auditory Perception/physiology , Emotions/physiology , Adult , Aged , Female , Humans , Male , Middle Aged
3.
Lang Speech ; 52(Pt 2-3): 315-39, 2009.
Article in English | MEDLINE | ID: mdl-19624034

ABSTRACT

The eyebrows are used as conversational signals in face-to-face spoken interaction (Ekman, 1979). In Sign Language of the Netherlands (NGT), the eyebrows are typically furrowed in content questions and raised in polar questions (Coerts, 1992). On the other hand, these eyebrow positions are also associated with anger and surprise, respectively, in general human communication (Ekman, 1993). This overlap in the functional load of the eyebrow positions results in a potential conflict for NGT signers when combining these functions simultaneously. In order to investigate the effect of the simultaneous realization of both functions on the eyebrow position, we elicited instances of both question types with neutral affect and with various affective states. The data were coded using the Facial Action Coding System (FACS: Ekman, Friesen, & Hager, 2002) for type of brow movement as well as for intensity. FACS allows for the coding of muscle groups, which are termed Action Units (AUs) and which produce facial appearance changes. The results show that linguistic and affective functions of eyebrows may influence each other in NGT. That is, in surprised polar questions and angry content questions, phonetic enhancement of raising and furrowing, respectively, takes place. In the items with contrasting eyebrow movements, the grammatical and affective AUs are either blended (occur simultaneously) or realized sequentially. Interestingly, the absence of eyebrow raising (marked by AU 1+2) in angry polar questions, and the presence of eyebrow furrowing (realized by AU 4) in surprised content questions, suggest that AU 4 may in general be phonetically stronger than AU 1 and AU 2, independent of its linguistic or affective function.


Subject(s)
Affect , Eyebrows , Linguistics , Psychomotor Performance , Sign Language , Adult , Facial Expression , Humans , Male , Netherlands , Psycholinguistics , Reproducibility of Results