1.
Behav Res Methods; 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38918315

ABSTRACT

EMOKINE is a software package and dataset-creation suite for emotional full-body movement research in experimental psychology, affective neuroscience, and computer vision. A computational framework, comprehensive instructions, a pilot dataset, observer ratings, and kinematic feature extraction code are provided to facilitate future dataset creations at scale. In addition, the EMOKINE framework outlines how complex sequences of movements may advance emotion research. Traditionally, such research has often used emotional 'action'-based stimuli, like hand-waving or walking motions. Here, instead, a pilot dataset is provided with short dance choreographies, repeated several times by a dancer who expressed a different emotional intention at each repetition: anger, contentment, fear, joy, neutrality, and sadness. The dataset was simultaneously filmed professionally and recorded using XSENS® motion-capture technology (17 sensors, 240 frames/second). Thirty-two statistics from 12 kinematic features were extracted offline, for the first time in a single dataset: speed, acceleration, angular speed, angular acceleration, limb contraction, distance to center of mass, quantity of motion, dimensionless jerk (integral), head angle (with regard to the vertical axis and to the back), and space (convex hull 2D and 3D). Average, median absolute deviation (MAD), and maximum value were computed as applicable. The EMOKINE software is applicable to other motion-capture systems and is openly available on the Zenodo Repository. Releases on GitHub include: (i) the code to extract the 32 statistics, (ii) a rigging plugin for Python for MVNX file conversion to Blender format (MVNX = output file of the XSENS® system), and (iii) Python-script-powered custom software to assist with blurring faces; the latter two are under GPLv3 licenses.
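For readers who want to derive similar features from their own recordings, the following is a minimal sketch, not the EMOKINE implementation itself, of how one kinematic feature (speed) and the three summary statistics named above (average, MAD, maximum) could be computed; the array layout, function name, and use of NumPy are assumptions.

```python
import numpy as np

FPS = 240  # XSENS® capture rate reported above (frames/second)

def speed_statistics(positions, fps=FPS):
    """Summary statistics of joint speed from 3-D motion-capture data.

    positions: hypothetical array of shape (n_frames, n_joints, 3),
    joint coordinates in metres. Returns (average, MAD, maximum)
    of scalar speed pooled over all joints and frames.
    """
    # Finite-difference velocity between consecutive frames (m/s).
    velocity = np.diff(positions, axis=0) * fps
    # Scalar speed per joint per frame.
    speed = np.linalg.norm(velocity, axis=-1)
    # Median absolute deviation, one of the three statistics above.
    mad = np.median(np.abs(speed - np.median(speed)))
    return speed.mean(), mad, speed.max()
```

Acceleration and the angular quantities would follow the same pattern (a second `np.diff`, or differences of joint angles); the actual feature definitions live in the EMOKINE releases on Zenodo and GitHub.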

2.
Br J Psychol; 115(1): 148-180, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37740117

ABSTRACT

Theories of human emotion, including some emotion-embodiment theories, suggest that our moods and affective states are reflected in the movements of our bodies. We used the reverse process for mood regulation: modulating body movements to regulate mood. Dancing is a type of full-body movement characterized by affective expressivity and hence offers the possibility of expressing different affective states through the same movement sequences. We tested whether the repeated imitation of a dancer performing two simple full-body dance movement sequences with different affective expressivity (happy or sad) could change mood states. Computer-based systems that use avatars as dance models to imitate offer a series of advantages, such as independence from physical contact and location. Therefore, we compared mood-induction effects in two conditions: participants were asked to imitate dance movements from either (a) videos of a human dancer model or (b) videos of a robot dancer model. The mood induction was successful for both happy and sad imitations, regardless of condition (human vs. robot dancer model). Moreover, the magnitude of the happy mood induction and how much participants liked the task predicted work-related motivation after the mood induction. We conclude that mood regulation through dance movements is possible and beneficial in the work context.


Subject(s)
Imitative Behavior, Intention, Humans, Affect/physiology, Emotions/physiology, Happiness, Movement/physiology
3.
Sci Rep; 13(1): 8757, 2023 May 30.
Article in English | MEDLINE | ID: mdl-37253770

ABSTRACT

Ekman famously contended that there are different channels of emotional expression (face, voice, body) and that emotion recognition ability confers an adaptive advantage to the individual. Yet, even today, much emotion-perception research focuses on emotion recognition from the face, and few validated, emotionally expressive full-body stimulus sets are available. Based on research on emotional speech perception, we created a new, highly controlled full-body stimulus set. We used the same-sequence approach rather than emotional actions (e.g., jumping for joy, recoiling in fear): one professional dancer danced 30 sequences of (dance) movements five times each, expressing joy, anger, fear, sadness, or a neutral state at each repetition. We outline the creation of a total of 150 such 6-s-long video stimuli, which show the dancer as a white silhouette on a black background. Ratings from 90 participants (emotion recognition, aesthetic judgment) showed that the intended emotion was recognized above chance (chance: 20%; joy: 45%, anger: 48%, fear: 37%, sadness: 50%, neutral state: 51%) and that aesthetic judgment was sensitive to the intended emotion (beauty ratings: joy > anger > fear > neutral state, and sad > fear > neutral state). The stimulus set, normative values, and code are available for download.
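As a worked illustration of the above-chance comparison, here is a short sketch under assumed numbers: with five response categories, chance is 20%, and an exact binomial test shows that the reported recognition rates exceed it. The raw counts below are hypothetical (the abstract reports proportions, and ratings were given per stimulus, not one per rater), so this is not the paper's actual analysis.

```python
from scipy.stats import binomtest

N_RATERS = 90   # participants reported in the abstract
CHANCE = 0.20   # five emotion categories -> 1/5 chance level

# Hypothetical raw counts back-derived from the reported proportions.
for emotion, proportion in [("joy", 0.45), ("anger", 0.48),
                            ("fear", 0.37), ("sadness", 0.50),
                            ("neutral", 0.51)]:
    hits = round(proportion * N_RATERS)
    test = binomtest(hits, n=N_RATERS, p=CHANCE, alternative="greater")
    print(f"{emotion}: {hits}/{N_RATERS} correct, p = {test.pvalue:.1e}")
```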


Subject(s)
Dancing, Speech Perception, Humans, Emotions, Anger, Fear, Facial Expression