1.
IEEE Trans Vis Comput Graph ; 30(5): 2162-2172, 2024 May.
Article in English | MEDLINE | ID: mdl-38437115

ABSTRACT

Embodied personalized avatars are a promising new tool for investigating moral decision-making by transposing the user into the "middle of the action" in moral dilemmas. Here, we tested whether avatar personalization and motor control could impact moral decision-making, physiological reactions and reaction times, as well as embodiment, presence and avatar perception. Seventeen participants, who had their personalized avatars created in a previous study, took part in a range of incongruent (i.e., harmful action led to better overall outcomes) and congruent (i.e., harmful action led to trivial outcomes) moral dilemmas as the drivers of a semi-autonomous car. They embodied four different avatars (counterbalanced: personalized with motor control, personalized without motor control, generic with motor control, generic without motor control). Overall, participants took a utilitarian approach, performing harmful actions only to maximize outcomes. We found increased physiological arousal (skin conductance responses [SCRs] and heart rate) for personalized avatars compared to generic avatars, and increased SCRs in motor control conditions compared to no motor control. Participants had slower reaction times when they had motor control over their avatars, possibly hinting at more elaborate decision-making processes. Presence was also higher in motor control compared to no motor control conditions. Embodiment ratings were higher for personalized avatars, and in general, personalization and motor control were perceived as positive features. These findings highlight the utility of personalized avatars and open up a range of future research possibilities that can exploit the affordances of this technology to simulate real-life action more closely than ever.


Subject(s)
Autonomous Vehicles , Avatar , Humans , Decision Making/physiology , Computer Graphics , Morals
2.
Emotion ; 24(2): 495-505, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37561517

ABSTRACT

People readily and automatically process facial emotion and identity, and it has been reported that these cues are processed both dependently and independently. However, this question of identity-independent encoding of emotions has only been examined using posed, often exaggerated expressions of emotion that do not account for the substantial individual differences in emotion recognition. In this study, we ask whether people's unique beliefs about how emotions should be reflected in facial expressions depend on the identity of the face. To do this, we employed a genetic algorithm where participants created facial expressions to represent different emotions. Participants generated facial expressions of anger, fear, happiness, and sadness on two different identities. Facial features were controlled by manipulating a set of weights, allowing us to probe the exact positions of faces in high-dimensional expression space. We found that participants created facial expressions belonging to each identity in a similar space that was unique to the participant, for angry, fearful, and happy expressions, but not sad ones. However, using a machine learning algorithm that examined the positions of faces in expression space, we also found systematic differences between the two identities' expressions across participants. This suggests that participants' beliefs about how an emotion should be reflected in a facial expression are unique to them and identity-independent, although there are also some systematic differences in the facial expressions between two identities that are common across all individuals. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
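
As an illustrative aside, a machine-learning probe of identity differences in expression space could look like the following minimal sketch; the data shapes and the choice of a logistic-regression classifier are assumptions, as the abstract does not specify the algorithm used.

```python
# Hypothetical sketch: decode identity from expression-weight vectors.
# Shapes and classifier choice are assumptions, not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Assume each generated expression is a weight vector in a high-dimensional
# expression space (here 50-D), labelled by the identity it was created on.
weights = rng.normal(size=(200, 50))      # 200 generated expressions
identity = rng.integers(0, 2, size=200)   # identity A = 0, identity B = 1

# Above-chance cross-validated accuracy would indicate systematic,
# identity-dependent differences in where expressions sit in the space.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, weights, identity, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```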


Subject(s)
Emotions , Facial Recognition , Humans , Anger , Happiness , Fear , Sadness , Facial Expression
3.
PLoS One ; 18(11): e0293917, 2023.
Article in English | MEDLINE | ID: mdl-37943887

ABSTRACT

This study examined whether occluded joint locations, obtained from 2D markerless motion capture (single camera view), produced 2D joint angles with reduced agreement compared to visible joints, and whether 2D frontal plane joint angles were usable for practical applications. Fifteen healthy participants performed over-ground walking whilst recorded by fifteen marker-based cameras and two machine vision cameras (frontal and sagittal plane). Repeated-measures Bland-Altman analysis showed that the markerless standard deviation of bias and limits of agreement for the occluded-side hip and knee joint angles in the sagittal plane were double those of the camera-side (visible) hip and knee. Camera-side sagittal plane knee and hip angles were near or within previously observed marker-based error values. While frontal plane limits of agreement accounted for 35-46% of total range of motion at the hip and knee, the Bland-Altman results (bias -4.6° to 1.6°, limits of agreement ±3.7° to 4.2°) were similar to previously reported marker-based error values. This was not true for the ankle, where the limits of agreement (±12°) remained too high for practical applications. Our results add to previous literature, highlighting shortcomings of current pose estimation algorithms and labelled datasets. As such, this paper finishes by reviewing methods for creating anatomically accurate markerless training data using marker-based motion capture data.
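
For readers unfamiliar with the agreement statistics used here, a minimal Bland-Altman computation on paired joint-angle series might look like the sketch below (simple, non-repeated-measures form; the data are synthetic placeholders).

```python
# Illustrative Bland-Altman computation for paired joint-angle series
# (markerless vs. marker-based). Variable names and data are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
marker_based = rng.normal(20, 5, size=500)             # criterion angles (deg)
markerless = marker_based + rng.normal(1.0, 3.0, 500)  # markerless estimates

diff = markerless - marker_based
bias = diff.mean()                 # systematic offset between methods
loa = 1.96 * diff.std(ddof=1)      # 95% limits of agreement around the bias
print(f"bias = {bias:.1f} deg, limits of agreement = ±{loa:.1f} deg")
```

Note that the study used a repeated-measures variant, which additionally accounts for multiple strides per participant; the sketch above shows only the basic form of the statistic.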


Subject(s)
Knee Joint , Motion Capture , Humans , Biomechanical Phenomena , Walking , Lower Extremity , Motion
4.
Proc Natl Acad Sci U S A ; 119(45): e2201380119, 2022 11 08.
Article in English | MEDLINE | ID: mdl-36322724

ABSTRACT

Emotional communication relies on a mutual understanding, between expresser and viewer, of facial configurations that broadcast specific emotions. However, we do not know whether people share a common understanding of how emotional states map onto facial expressions. This is because expressions exist in a high-dimensional space too large to explore in conventional experimental paradigms. Here, we address this by adapting genetic algorithms and combining them with photorealistic three-dimensional avatars to efficiently explore the high-dimensional expression space. A total of 336 people used these tools to generate facial expressions that represent happiness, fear, sadness, and anger. We found substantial variability in the expressions generated via our procedure, suggesting that different people associate different facial expressions with the same emotional state. We then examined whether variability in the facial expressions created could account for differences in performance on standard emotion recognition tasks by asking people to categorize different test expressions. We found that emotion categorization performance was explained by the extent to which test expressions matched the expressions generated by each individual. Our findings reveal the breadth of variability in people's representations of facial emotions, even among typical adult populations. This has profound implications for the interpretation of responses to emotional stimuli, which may reflect individual differences in the emotional category people attribute to a particular facial expression, rather than differences in the brain mechanisms that produce emotional responses.
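
A minimal sketch of the participant-in-the-loop genetic algorithm idea follows, with a placeholder scoring function standing in for the participant's selections; the population size, crossover, and mutation settings are assumptions, not the authors' parameters.

```python
# Sketch of a genetic algorithm over expression weights. In the experiment,
# fitness comes from the participant's choices; here a placeholder scoring
# function stands in. All hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
N_WEIGHTS, POP, GENERATIONS = 30, 12, 20

def participant_scores(population):
    # Placeholder: the participant rates or selects which rendered
    # expressions best match the target emotion.
    target = np.zeros(N_WEIGHTS)
    return -np.linalg.norm(population - target, axis=1)

population = rng.normal(size=(POP, N_WEIGHTS))
for _ in range(GENERATIONS):
    scores = participant_scores(population)
    parents = population[np.argsort(scores)[-POP // 2:]]        # selection
    pairs = rng.integers(0, len(parents), size=(POP, 2))
    mask = rng.random((POP, N_WEIGHTS)) < 0.5                   # uniform crossover
    children = np.where(mask, parents[pairs[:, 0]], parents[pairs[:, 1]])
    population = children + rng.normal(0, 0.1, children.shape)  # mutation

best = population[np.argmax(participant_scores(population))]    # final expression
```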


Subject(s)
Facial Recognition , Individuality , Adult , Humans , Facial Expression , Emotions/physiology , Anger/physiology , Algorithms
5.
J Biomech ; 144: 111338, 2022 11.
Article in English | MEDLINE | ID: mdl-36252308

ABSTRACT

This study presented a fully automated, deep-learning-based markerless motion capture workflow and evaluated its performance against marker-based motion capture during overground running, walking and countermovement jumping. Multi-view high-speed (200 Hz) image data were collected concurrently with marker-based motion capture (criterion data), permitting a direct comparison between methods. Lower limb kinematic data for 15 participants were computed using 2D pose estimation, our 3D fusion process and OpenSim-based inverse kinematics modelling. Results demonstrated high levels of agreement for lower limb joint angles, with mean differences ranging from 0.1° to 10.5° for hip (3 DoF) joint rotations, and from 0.7° to 3.9° for knee (1 DoF) and ankle (2 DoF) rotations. These differences generally fall within the documented uncertainties of marker-based motion capture, suggesting that our markerless approach could be used for appropriate biomechanics applications. We used an open-source, modular and customisable workflow, allowing for integration with other popular biomechanics tools such as OpenSim. By developing open-source tools, we hope to facilitate the democratisation of markerless motion capture technology and encourage the transparent development of markerless methods. This presents exciting opportunities for biomechanics researchers and practitioners to capture large amounts of high-quality, ecologically valid data both in the laboratory and in the wild.
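
The 3D fusion step combines calibrated 2D detections from multiple views. One standard way to do this is linear (DLT) triangulation, sketched below under the assumption of known 3x4 camera projection matrices; the paper's actual fusion process may differ.

```python
# Linear (DLT) triangulation of one joint centre from multiple calibrated
# views. Camera projection matrices P_i (3x4) and 2D detections (u, v) are
# assumed inputs; this is a standard method, not necessarily the paper's.
import numpy as np

def triangulate(projections, points_2d):
    """Least-squares 3D point from >= 2 views via SVD of the DLT system."""
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        # Each view contributes two linear constraints on the 3D point.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]               # null-space vector = homogeneous 3D point
    return X[:3] / X[3]      # dehomogenise to (x, y, z)
```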


Subject(s)
Knee Joint , Movement , Humans , Workflow , Biomechanical Phenomena , Motion
6.
PLoS One ; 16(11): e0259624, 2021.
Article in English | MEDLINE | ID: mdl-34780514

ABSTRACT

This study describes the development, evaluation and application of a computer vision and deep learning system capable of capturing sprinting and skeleton push start step characteristics and mass centre velocities (sled and athlete). Movement data were captured concurrently by a marker-based motion capture system and a custom markerless system. High levels of agreement were found between systems, particularly for spatial variables (step length error 0.001 ± 0.012 m), while errors for temporal variables (ground contact time and flight time) were on average within ±1.5 frames of the criterion measures. Comparisons of sprinting and pushing revealed decreased mass centre velocities as a result of pushing the sled, but step characteristics were comparable to sprinting when aligned as a function of step velocity. There were large asymmetries between the inside and outside leg during pushing (e.g. 0.22 m mean step length asymmetry) which were not present during sprinting (0.01 m step length asymmetry). The observed asymmetries suggest that force production capabilities during ground contact were compromised for the outside leg. The computer-vision-based methods tested in this research provide a viable alternative to marker-based motion capture systems. Furthermore, they can be deployed in challenging, real-world environments to non-invasively capture data where traditional approaches are infeasible.
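
Temporal step characteristics such as ground contact and flight time can be derived from a boolean contact signal extracted from foot keypoints. The sketch below is illustrative only; the event-detection rule producing the contact signal is an assumption rather than the authors' method.

```python
# Hypothetical sketch: contact/flight durations from a boolean contact
# signal sampled at 200 Hz (the study's camera frame rate).
import numpy as np

FPS = 200

def bout_durations(contact):
    """Durations (s) of consecutive True runs in a boolean contact signal."""
    padded = np.concatenate(([False], contact, [False]))
    edges = np.flatnonzero(np.diff(padded.astype(int)))  # rising/falling edges
    starts, ends = edges[::2], edges[1::2]
    return (ends - starts) / FPS

# contact: True while the foot is on the ground, e.g. derived by
# thresholding foot keypoint height/velocity (an assumed rule).
contact = np.array([False] * 10 + [True] * 24 + [False] * 16 + [True] * 22)
print(bout_durations(contact))   # ground contact times in seconds
# Flight times are the durations of the False runs between contacts.
```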


Subject(s)
Skeleton/physiology , Athletes , Deep Learning , Female , Humans , Male , Motion , Musculoskeletal System
7.
R Soc Open Sci ; 8(10): 202251, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34659775

ABSTRACT

Emotional facial expressions critically impact social interactions and cognition. However, emotion research to date has generally relied on the assumption that people represent categorical emotions in the same way, using standardized stimulus sets and overlooking important individual differences. To resolve this problem, we developed and tested a task using genetic algorithms to derive assumption-free, participant-generated emotional expressions. One hundred and five participants generated a subjective representation of happy, angry, fearful and sad faces. Population-level consistency was observed for happy faces, but fearful and sad faces showed a high degree of variability. High test-retest reliability was observed across all emotions. A separate group of 108 individuals accurately identified happy and angry faces from the first study, while fearful and sad faces were commonly misidentified. These findings are an important first step towards understanding individual differences in emotion representation, with the potential to reconceptualize the way we study atypical emotion processing in future research.
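
A minimal sketch of how the test-retest reliability of generated expressions might be quantified, correlating the weight vectors a participant produced in two sessions; the data shapes are hypothetical.

```python
# Illustrative test-retest check: correlate the expression-weight vectors a
# participant generated in two sessions. Data and shapes are assumptions.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
session1 = rng.normal(size=30)                  # expression weights, session 1
session2 = session1 + rng.normal(0, 0.2, 30)    # expression weights, session 2

r, p = pearsonr(session1, session2)             # high r = stable representation
print(f"test-retest r = {r:.2f} (p = {p:.3g})")
```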

8.
Sci Rep ; 11(1): 20673, 2021 10 19.
Article in English | MEDLINE | ID: mdl-34667207

ABSTRACT

Human movement researchers are often restricted to laboratory environments and data capture techniques that are time and/or resource intensive. Markerless pose estimation algorithms show great potential to facilitate large-scale movement studies 'in the wild', i.e., outside of the constraints imposed by marker-based motion capture. However, the accuracy of such algorithms has not yet been fully evaluated. We computed 3D joint centre locations using several pre-trained deep-learning-based pose estimation methods (OpenPose, AlphaPose, DeepLabCut) and compared them to marker-based motion capture. Participants performed walking, running and jumping activities while marker-based motion capture data and multi-camera high-speed images (200 Hz) were captured. The pose estimation algorithms were applied to the 2D image data and 3D joint centre locations were reconstructed. Pose-estimation-derived joint centres demonstrated systematic differences at the hip and knee (~30-50 mm), most likely due to mislabelled ground truth data in the training datasets. Where systematic differences were lower, e.g. at the ankle, differences of 1-15 mm were observed depending on the activity. Markerless motion capture represents a highly promising emerging technology that could free movement scientists from laboratory environments, but 3D joint centre locations are not yet consistently comparable to marker-based motion capture.
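
A sketch of the per-joint comparison described here, computing the mean Euclidean error and the systematic (bias) component between markerless and marker-based 3D joint centres; the array shapes and synthetic data are assumptions.

```python
# Sketch: per-joint error between markerless and marker-based 3D joint
# centres (mm). Shapes are assumed as (frames, joints, xyz).
import numpy as np

def joint_errors(markerless, marker_based):
    """Mean Euclidean distance per joint, plus the systematic (bias) vector."""
    diff = markerless - marker_based                  # (frames, joints, 3)
    per_frame = np.linalg.norm(diff, axis=-1)         # (frames, joints)
    return per_frame.mean(axis=0), diff.mean(axis=0)  # mean error, mean bias

rng = np.random.default_rng(3)
truth = rng.normal(size=(1000, 12, 3)) * 100          # criterion positions (mm)
est = truth + rng.normal(0, 10, truth.shape) + [30, 0, 0]  # offset + noise
mean_err, bias = joint_errors(est, truth)             # bias ~30 mm in x
```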


Subject(s)
Movement/physiology , Algorithms , Ankle Joint/physiology , Biomechanical Phenomena/physiology , Female , Gait/physiology , Humans , Knee Joint/physiology , Lower Extremity/physiology , Male , Motion , Running/physiology , Walking/physiology
9.
Sensors (Basel) ; 21(8)2021 Apr 20.
Article in English | MEDLINE | ID: mdl-33924266

ABSTRACT

The ability to accurately and non-invasively measure 3D mass centre positions and their derivatives can provide rich insight into the physical demands of sports training and competition. This study examines a method for non-invasively measuring mass centre velocities using markerless human pose estimation and Kalman smoothing. Marker-based (Qualisys) and markerless (OpenPose) motion capture data were captured synchronously for sprinting and skeleton push starts. Mass centre positions and velocities derived from raw markerless pose estimation data contained large errors for both sprinting and skeleton pushing (mean ± SD = 0.127 ± 0.943 and -0.197 ± 1.549 m·s⁻¹, respectively). Signal processing methods such as Kalman smoothing substantially reduced the mean error (±SD) in horizontal mass centre velocities (0.041 ± 0.257 m·s⁻¹) during sprinting, but precision remained poor. Applying pose estimation to activities which exhibit unusual body poses (e.g., skeleton pushing) appears to elicit more erroneous results due to poor performance of the pose estimation algorithm. Researchers and practitioners should apply these methods with caution to activities beyond sprinting, as pose estimation algorithms may not generalise well to the activity of interest. Retraining the model using activity-specific data to produce more specialised networks is therefore recommended.
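
A minimal constant-velocity Kalman filter with Rauch-Tung-Striebel (RTS) smoothing, applied to a noisy 1D mass centre position series; the noise parameters below are illustrative, not the study's tuned values.

```python
# Constant-velocity Kalman filter + RTS smoother for a noisy 1D position
# series; returns the smoothed velocity. Noise parameters q and r are
# illustrative assumptions, not the study's values.
import numpy as np

def kalman_rts_velocity(z, dt, q=1.0, r=0.05**2):
    F = np.array([[1.0, dt], [0.0, 1.0]])    # state: [position, velocity]
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                      [dt**3 / 2, dt**2]])   # white-acceleration process noise
    R = np.array([[r]])
    n = len(z)
    x = np.zeros((n, 2)); P = np.zeros((n, 2, 2))
    xp = np.zeros((n, 2)); Pp = np.zeros((n, 2, 2))
    xf, Pf = np.array([z[0], 0.0]), np.eye(2)
    for k in range(n):                        # forward filtering pass
        xp[k] = F @ xf
        Pp[k] = F @ Pf @ F.T + Q
        K = Pp[k] @ H.T @ np.linalg.inv(H @ Pp[k] @ H.T + R)
        xf = xp[k] + (K @ (z[k] - H @ xp[k])).ravel()
        Pf = (np.eye(2) - K @ H) @ Pp[k]
        x[k], P[k] = xf, Pf
    for k in range(n - 2, -1, -1):            # RTS backward smoothing pass
        C = P[k] @ F.T @ np.linalg.inv(Pp[k + 1])
        x[k] = x[k] + C @ (x[k + 1] - xp[k + 1])
    return x[:, 1]                            # smoothed velocity estimate

# Example: smooth a noisy position track sampled at 200 Hz.
rng = np.random.default_rng(5)
t = np.arange(0, 2, 1 / 200)
z = 5.0 * t + rng.normal(0, 0.05, t.size)     # true velocity = 5 m/s + noise
v = kalman_rts_velocity(z, dt=1 / 200)
```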


Subject(s)
Algorithms , Signal Processing, Computer-Assisted , Humans , Motion , Skeleton
10.
Sports Med Open ; 4(1): 24, 2018 Jun 05.
Article in English | MEDLINE | ID: mdl-29869300

ABSTRACT

BACKGROUND: The study of human movement within sports biomechanics and rehabilitation settings has made considerable progress over recent decades. However, developing a motion analysis system that collects accurate kinematic data in a timely, unobtrusive and externally valid manner remains an open challenge. MAIN BODY: This narrative review considers the evolution of methods for extracting kinematic information from images, observing how technology has progressed from laborious manual approaches to optoelectronic marker-based systems. The motion analysis systems which are currently most widely used in sports biomechanics and rehabilitation do not allow kinematic data to be collected automatically without the attachment of markers, controlled conditions and/or extensive processing times. These limitations can obstruct the routine use of motion capture in normal training or rehabilitation environments, and there is a clear desire for the development of automatic markerless systems. Such technology is emerging, often driven by the needs of the entertainment industry, and utilising many of the latest trends in computer vision and machine learning. However, the accuracy and practicality of these systems has yet to be fully scrutinised, meaning such markerless systems are not currently in widespread use within biomechanics. CONCLUSIONS: This review aims to introduce the key state-of-the-art in markerless motion capture research from computer vision that is likely to have a future impact in biomechanics, while considering the challenges with accuracy and robustness that are yet to be addressed.

11.
PLoS One ; 12(11): e0187513, 2017.
Article in English | MEDLINE | ID: mdl-29149206

ABSTRACT

In this paper, we present an automatic system for the analysis and labeling of structural scenes: floor plan drawings in Computer-Aided Design (CAD) format. The proposed system applies a fusion strategy to detect and recognize various components of CAD floor plans, such as walls, doors, windows and other ambiguous assets. Technically, a general rule-based filter parsing method is first adopted to extract effective information from the original floor plan. Then, an image-processing-based recovery method is employed to correct the information extracted in the first step. The proposed method is fully automatic and runs in real time. The analysis system provides high accuracy and has been evaluated on a public website that, on average, records more than ten thousand effective uses per day and achieves a relatively high satisfaction rate.
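
The paper's parser is not public; purely as an illustration, a rule-based first pass over a DXF floor plan could be sketched with the ezdxf library, where the layer-name heuristics below are invented placeholders, not the authors' rules.

```python
# Hypothetical rule-based first pass over a DXF floor plan using ezdxf.
# The layer-name keywords are illustrative assumptions; the paper's actual
# parsing rules and input format are not public.
import ezdxf

LAYER_RULES = {          # crude keyword -> component mapping (assumed)
    "wall": "wall",
    "door": "door",
    "win": "window",
}

def classify(layer_name):
    name = layer_name.lower()
    for key, label in LAYER_RULES.items():
        if key in name:
            return label
    return "ambiguous"   # left for a later recovery/correction stage

doc = ezdxf.readfile("plan.dxf")   # path is a placeholder
components = {}
for entity in doc.modelspace().query("LINE LWPOLYLINE"):
    components.setdefault(classify(entity.dxf.layer), []).append(entity)
print({label: len(ents) for label, ents in components.items()})
```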


Subject(s)
Automation , Computer-Aided Design , Image Processing, Computer-Assisted
12.
J Opt Soc Am A Opt Image Sci Vis ; 33(9): 1798-811, 2016 Sep 01.
Article in English | MEDLINE | ID: mdl-27607503

ABSTRACT

A user-centric method for fast, interactive, robust, and high-quality shadow removal is presented. Our algorithm can perform detection and removal in a range of difficult cases, such as highly textured and colored shadows. For detection, an on-the-fly learning approach is adopted, guided by two rough user inputs marking pixels of the shadow and the lit area. After detection, shadow removal is performed by registering the penumbra to a normalized frame, which allows efficient estimation of nonuniform shadow illumination changes, resulting in accurate and robust removal. Another major contribution of this work is the first validated, multi-scene-category ground truth for shadow removal algorithms. This data set, containing 186 images, eliminates inconsistencies between shadow and shadow-free images and provides a range of different shadow types such as soft, textured, colored, and broken shadows. Using these data, the most thorough comparison of state-of-the-art shadow removal methods to date is performed, showing our proposed algorithm to outperform the state of the art across several measures and shadow categories. To complement our data set, an online shadow removal benchmark website is also presented to encourage future open comparisons in this challenging field of research.
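
The "two rough user inputs" detection idea can be illustrated with a small on-the-fly classifier fit to user-marked shadow and lit pixels; the choice of a k-nearest-neighbours model on raw RGB is an assumption, not the paper's model.

```python
# Sketch: fit a small classifier on user-marked shadow and lit pixels,
# then label the whole image. Model choice (KNN on RGB) is an assumption.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def detect_shadow(image, shadow_px, lit_px, k=5):
    """image: (H, W, 3) float array; shadow_px / lit_px: (N, 3) colour
    samples gathered from the user's rough strokes."""
    X = np.vstack([shadow_px, lit_px])
    y = np.concatenate([np.ones(len(shadow_px)), np.zeros(len(lit_px))])
    clf = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    labels = clf.predict(image.reshape(-1, 3))
    return labels.reshape(image.shape[:2]).astype(bool)   # True = shadow
```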

13.
IEEE Trans Vis Comput Graph ; 19(7): 1242-51, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23661014

ABSTRACT

We introduce a video-based approach for producing water surface models. Recent advances in this field output high-quality results but require dedicated capture devices and only work in limited conditions. In contrast, our method achieves a good trade-off between visual quality and production cost: it automatically produces a visually plausible animation using a single-viewpoint video as the input. Our approach is based on two observations: first, shape from shading (SFS) is adequate to capture the appearance and dynamic behavior of the example water; second, a shallow water model can be used to estimate a velocity field that produces complex surface dynamics. We provide a qualitative evaluation of our method and demonstrate its performance across a wide range of scenes.
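
For reference, a standard conservative form of the 2D shallow water equations underlying such velocity estimation is given below; the authors' exact formulation may differ.

```latex
% Conservative form of the 2D shallow water equations (a standard
% formulation; the paper's exact variant may differ):
\begin{align}
  \frac{\partial h}{\partial t} + \nabla \cdot (h\,\mathbf{u}) &= 0, \\
  \frac{\partial (h\,\mathbf{u})}{\partial t}
    + \nabla \cdot (h\,\mathbf{u} \otimes \mathbf{u})
    + \frac{g}{2}\,\nabla h^{2} &= -\,g\,h\,\nabla b.
\end{align}
% h: water height, \mathbf{u}: depth-averaged velocity,
% g: gravitational acceleration, b: bed elevation.
```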


Subject(s)
Models, Theoretical , Video Recording , Water , Water Movements
14.
Emotion ; 7(4): 730-5, 2007 Nov.
Article in English | MEDLINE | ID: mdl-18039040

ABSTRACT

Detecting cooperative partners in situations that have financial stakes is crucial to successful social exchange. The authors tested whether humans are sensitive to subtle facial dynamics of counterparts when deciding whether to trust and cooperate. Participants played a 2-person trust game before which the facial dynamics of the other player were manipulated using brief (<6 s) but highly realistic facial animations. Results showed that facial dynamics significantly influenced participants' (a) choice of with whom to play the game and (b) decisions to cooperate. It was also found that inferences about the other player's trustworthiness mediated these effects of facial dynamics on cooperative behavior.


Subject(s)
Cooperative Behavior , Facial Expression , Trust , Adolescent , Adult , Decision Making , Female , Humans , Male