Results 1 - 7 of 7
1.
Hum Mov Sci ; 80: 102867, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34492422

ABSTRACT

This study investigated how humans adapt to a partner's movement in a joint pick-and-place task and examined the role of gaze behavior and personality traits in adapting to a partner. Two participants sitting side by side transported a cup from one end of a table to the other. The participant sitting on the left (the agent) moved the cup to an intermediate position, from where the participant sitting on the right (the partner) transported it to a goal position with varying orientations. Hand, finger, and cup movements as well as gaze behavior were recorded synchronously via motion tracking and portable eye-tracking devices. Results showed interindividual differences in the extent of the agents' motor adaptation to the joint action goal, which were accompanied by differences in gaze patterns. The longer agents directed their gaze to a cue indicating the goal orientation, the more they adapted the rotation of the cup's handle when placing it at the intermediate position. Personality trait assessment showed that stronger extraverted tendencies to strive for social potency were associated with greater adaptation to the joint goal. These results indicate that agents who consider their partner's end-state comfort use their gaze to gather more information about the joint action goal than agents who do not. Moreover, the disposition to enjoy leadership and make decisions in interpersonal situations seems to play a role in determining who adapts to a partner's task in joint action.


Subject(s)
Adaptation, Physiological; Extraversion, Psychological; Hand; Humans; Movement; Psychomotor Performance; Rotation
2.
Exp Brain Res ; 239(3): 923-936, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33427949

ABSTRACT

This study compared how two virtual display conditions of human body expressions influenced explicit and implicit dimensions of emotion perception and response behavior in women and men. Two avatars displayed emotional interactions (angry, sad, affectionate, happy) in a "pictorial" condition, depicting the emotional interactive partners on a screen within a virtual environment, and a "visual" condition, allowing participants to share space with the avatars and thereby enhancing co-presence and agency. After stimulus presentation, explicit valence perception and response tendency (i.e., the explicit tendency to avoid or approach the situation) were assessed on rating scales. Implicit responses, i.e., postural and autonomic responses towards the observed interactions, were measured by means of postural displacement and changes in skin conductance. Results showed that self-reported presence differed between pictorial and visual conditions; however, it was not correlated with skin conductance responses. Valence perception was only marginally influenced by the virtual condition and not at all by explicit response behavior. There were gender-mediated effects on postural response tendencies as well as gender differences in explicit response behavior, but not in valence perception. Exploratory analyses revealed a link between valence perception and preferred behavioral response in women but not in men. We conclude that the display condition seems to influence automatic motivational tendencies but not higher-level cognitive evaluations. Moreover, intragroup differences in explicit and implicit response behavior highlight the importance of individual factors beyond gender.


Subject(s)
Emotions; Adult; Anxiety; Facial Expression; Female; Humans; Judgment; Male; Motivation; Young Adult
3.
Exp Brain Res ; 238(9): 1813-1826, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32500297

ABSTRACT

In everyday life, our brain constantly builds spatial representations of the objects surrounding us. Many studies have investigated the nature of these spatial representations, and it is well established that we use allocentric information in real-time and memory-guided movements. However, most studies relied on small-scale and static experiments, leaving it unclear whether similar paradigms yield the same results on a larger scale with dynamic objects. We created a virtual reality task that required participants to encode the landing position of a virtual ball thrown by an avatar. Encoding differed in the nature of the task in that it was either purely perceptual ("view where the ball landed while standing still"; Experiment 1) or involved an action ("intercept the ball with the foot just before it lands"; Experiment 2). After encoding, participants were asked to place a real ball at the remembered landing position in the virtual scene. In some trials, we subtly shifted either the thrower or the midfield line on a soccer field to manipulate allocentric coding of the ball's landing position. In both experiments, we were able to replicate classic findings from small-scale experiments and to generalize these results to different encoding tasks (perception vs. action) and response modes (reaching vs. walking-and-placing). Moreover, we found that participants preferentially encoded the ball relative to the thrower when they had to intercept it, suggesting that the encoding task determines the use of allocentric information by enhancing task-relevant allocentric cues. Our findings indicate that results previously obtained from memory-guided reaching are not restricted to small-scale movements but generalize to whole-body movements in large-scale dynamic scenes.


Subject(s)
Space Perception; Virtual Reality; Humans; Memory; Mental Recall; Movement
4.
Atten Percept Psychophys ; 82(4): 2076-2084, 2020 May.
Article in English | MEDLINE | ID: mdl-31797178

ABSTRACT

Correctly perceiving the movements of opponents is essential in everyday life as well as in many sports. Several studies have shown better prediction performance for detailed stimuli compared to point-light displays (PLDs). However, it remains unclear whether differences in prediction performance result from explicit information about articulation or from information about body shape. We therefore presented three different types of stimuli (PLDs, stick figures, and skinned avatars) with different amounts of available information about soccer players' run-ups. Stimulus presentation was faded out at ball contact, and participants had to react to the perceived shot direction with a full-body movement. Results showed no differences in time to virtual ball contact between presentation modes. However, prediction performance was significantly better for avatars and stick figures compared to PLDs, but did not differ between avatars and stick figures, suggesting that explicit information about the articulation of the major joints is the main driver of better prediction performance and plays a larger role than detailed information about body shape. We also tracked eye movements and found that gaze behavior for avatars differed from that for PLDs and stick figures, with no significant differences between PLDs and stick figures. This effect was due to more and longer fixations on the head when avatars were presented.


Subject(s)
Space Perception; Eye Movements; Human Body; Humans; Movement; Psychomotor Performance; Soccer
5.
Front Psychol ; 9: 682, 2018.
Article in English | MEDLINE | ID: mdl-29867656

ABSTRACT

This article reviews research on the gaze behavior of penalty takers in football. It focuses on how artificial versus representative experimental conditions affect gaze behavior in this far-aiming task. Findings reveal that, irrespective of the representativeness of the experimental conditions, different instructions regarding the aiming strategy and different threat conditions lead to different gaze patterns. Results also reveal that goal size and distance to the goal did not affect gaze behavior. Moreover, differences arise particularly from run-up conditions, which can be either artificial or more natural. During a natural run-up, penalty takers direct their gaze mainly toward the ball; when there is no run-up, they do not. Hence, in order to deliver generalizable results with which to interpret gaze strategies, it seems important to use a run-up whose minimum length is comparable to that in a real-life situation.

6.
Front Psychol ; 9: 19, 2018.
Article in English | MEDLINE | ID: mdl-29434560

ABSTRACT

Gaze behavior in natural scenes has been shown to be influenced not only by top-down factors such as task demands and action goals but also by bottom-up factors such as stimulus salience and scene context. Whereas gaze behavior in the context of static pictures emphasizes spatial accuracy, gazing in natural scenes seems to rely more on where to direct the gaze involving both anticipative components and an evaluation of ongoing actions. Not much is known about gaze behavior in far-aiming tasks in which multiple task-relevant targets and distractors compete for the allocation of visual attention via gaze. In the present study, we examined gaze behavior in the far-aiming task of taking a soccer penalty. This task contains a proximal target, the ball; a distal target, an empty location within the goal; and a salient distractor, the goalkeeper. Our aim was to investigate where participants direct their gaze in a natural environment with multiple potential fixation targets that differ in task relevance and salience. Results showed that the early phase of the run-up seems to be driven by both the salience of the stimulus setting and the need to perform a spatial calibration of the environment. The late run-up, in contrast, seems to be controlled by attentional demands of the task with penalty takers having habitualized a visual routine that is not disrupted by external influences (e.g., the goalkeeper). In addition, when trying to shoot a ball as accurately as possible, penalty takers directed their gaze toward the ball in order to achieve optimal foot-ball contact. These results indicate that whether gaze is driven by salience of the stimulus setting or by attentional demands depends on the phase of the actual task.

7.
Exp Brain Res ; 235(11): 3479-3486, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28840269

ABSTRACT

Task difficulty affects both gaze behavior and hand movements. The present study therefore aimed to investigate how task difficulty modulates gaze behavior with respect to the balance between visually monitoring the ongoing action and prospectively collecting visual information about its future course. To this end, we examined sequences of reach and transport movements of water glasses whose task difficulty was varied by filling level. Participants had to grasp water glasses with different filling levels (100, 94, 88, 82, and 76%) and transport them to a target; subsequently, they had to grasp the next water glass and transport it to a target on the opposite side. Results showed significant differences in both gaze and movement kinematics for higher filling levels, but no relevant differences between the 88, 82, and 76% filling levels. They further revealed a significant influence of task difficulty on the interaction between gaze and kinematics during transport, and a strong influence of task difficulty on gaze during the release phase between successive grasp-to-place movements. In summary, we found a movement and gaze pattern revealing an influence of task difficulty that was especially evident in the later phases of transport and release.


Subject(s)
Biomechanical Phenomena/physiology; Fixation, Ocular/physiology; Motor Activity/physiology; Psychomotor Performance/physiology; Adolescent; Adult; Humans; Young Adult