1.
Brain Lang ; 252: 105413, 2024 May.
Article in English | MEDLINE | ID: mdl-38608511

ABSTRACT

Sign languages (SLs) are expressed through different bodily actions, ranging from re-enactment of physical events (constructed action, CA) to sequences of lexical signs with internal structure (plain telling, PT). Despite the prevalence of CA in signed interactions and its significance for SL comprehension, its neural dynamics remain unexplored. We examined the processing of different types of CA (subtle, reduced, and overt) and of PT in 35 adult deaf or hearing native signers. Electroencephalography was recorded while participants viewed signed sentences with incongruent targets. Attenuated N300 and early N400 responses were observed for CA in deaf but not in hearing signers. No differences were found between CA types in either group, suggesting a continuum from PT to overt CA. Deaf signers focused more on body movements; hearing signers focused more on faces. We conclude that CA is processed less effortlessly than PT, arguably because of its strong focus on bodily actions.


Subject(s)
Comprehension , Deafness , Electroencephalography , Sign Language , Humans , Comprehension/physiology , Adult , Male , Female , Deafness/physiopathology , Young Adult , Brain/physiology , Evoked Potentials/physiology
2.
J Eye Mov Res ; 11(2)2018 May 07.
Article in English | MEDLINE | ID: mdl-33828688

ABSTRACT

Both eye tracking and motion capture technologies are now frequently used in the human sciences, although the two are usually used separately. Measuring eye and body movements simultaneously, however, would offer great potential for investigating crossmodal interaction in human behavior (e.g. music- and language-related behavior). Here we combined an Ergoneers Dikablis head-mounted eye tracker with a Qualisys Oqus optical motion capture system. To synchronize the recordings of the two devices, we developed a generalizable solution that does not rely on any (cost-intensive) ready-made, company-provided synchronization solution. At the beginning of each recording, the participant nods quickly while fixating a target and keeping the eyes open - a motion yielding a sharp vertical displacement in both the mocap and the eye data. This displacement can be reliably detected with a peak-picking algorithm and used to accurately align the mocap and eye data. The method produces accurate synchronization results on clean data and therefore provides an attractive alternative to costly plug-ins, as well as a solution when ready-made synchronization options are unavailable.
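The alignment idea above - detect the sharp vertical displacement caused by the nod in each data stream, then use the difference between the two detection times as the synchronization offset - can be sketched as follows. This is a minimal hypothetical illustration, not the authors' code: the function names, the simple first-difference peak picker, and the sampling rates are all assumptions.

```python
def peak_index(signal):
    """Return the index of the sharpest sample-to-sample change
    (peak magnitude of the first difference) in a 1-D trace."""
    diffs = [abs(b - a) for a, b in zip(signal, signal[1:])]
    return max(range(len(diffs)), key=diffs.__getitem__)

def sync_offset(mocap_y, eye_y, mocap_rate, eye_rate):
    """Estimate the time offset (seconds) between a mocap vertical
    trace and an eye-tracker vertical trace, using the shared
    displacement produced by the participant's nod."""
    t_mocap = peak_index(mocap_y) / mocap_rate
    t_eye = peak_index(eye_y) / eye_rate
    return t_mocap - t_eye  # positive: nod appears later in mocap

# Toy traces, both sampled at 100 Hz: the nod's rising edge falls at
# 0.05 s in the mocap trace and 0.03 s in the eye trace.
mocap = [0, 0, 0, 0, 0, 0, 5, 0, 0, 0]
eye = [0, 0, 0, 0, 5, 0]
offset = sync_offset(mocap, eye, mocap_rate=100, eye_rate=100)  # 0.02 s
```

Shifting one stream's timestamps by `offset` then brings the two recordings into alignment; on real data a more robust detector (e.g. thresholded prominence) would replace the bare maximum.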
