1.
IEEE Trans Vis Comput Graph ; 30(5): 2785-2795, 2024 May.
Article in English | MEDLINE | ID: mdl-38437106

ABSTRACT

While data is vital to better understand and model interactions within human crowds, capturing real crowd motions is extremely challenging. Virtual Reality (VR) has demonstrated its potential to help, by immersing users either in simulated virtual crowds based on autonomous agents, or in motion-capture-based crowds. In the latter case, the users' own captured motion can be used to progressively extend the size of the crowd, a paradigm called Record-and-Replay (2R). However, both approaches suffer from several limitations which impact the quality of the acquired crowd data. In this paper, we propose the new concept of contextual crowds, which leverages both crowd simulation and the 2R paradigm towards more consistent crowd data. We evaluate two different strategies to implement it, namely a Replace-Record-Replay (3R) paradigm, where users are initially immersed in a simulated crowd whose agents are successively replaced by the user's captured data, and a Replace-Record-Replay-Responsive (4R) paradigm, where the pre-recorded agents are additionally endowed with responsive capabilities. These two paradigms are evaluated through two real-world-based scenarios replicated in VR. Our results suggest that surrounding users with agents from the beginning of the recording process makes their behaviors much more natural, enabling the 3R and 4R paradigms to improve the consistency of captured crowd datasets.
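The Replace-Record-Replay idea described above can be sketched as a simple loop in which simulated agents are replaced one by one with the user's recordings. The following Python sketch is purely illustrative: the `Agent` structure and the `record_pass` callback are assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    trajectory: list = field(default_factory=list)  # recorded positions over time
    simulated: bool = True                          # True until replaced by a recording

def replace_record_replay(num_agents, record_pass):
    """Sketch of the 3R loop: the crowd starts fully simulated; on each
    pass the user acts one crowd member (while the others replay or are
    simulated), and that agent is replaced with the captured motion."""
    crowd = [Agent() for _ in range(num_agents)]
    for i in range(num_agents):
        # record_pass stands in for one VR recording session.
        crowd[i].trajectory = record_pass(i, crowd)
        crowd[i].simulated = False
    return crowd
```

In the 4R variant, the replayed agents would additionally react to the user instead of replaying their trajectories verbatim.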

2.
Article in English | MEDLINE | ID: mdl-37440385

ABSTRACT

During mid-air interactions, common approaches (such as the god-object method) typically rely on visually constraining the user's avatar to avoid visual interpenetration with the virtual environment in the absence of kinesthetic feedback. This paper explores two methods which influence how the position mismatch (positional offset) between the user's real and virtual hands is recovered when releasing contact with virtual objects. The first method (sticky) constrains the user's virtual hand until the mismatch is recovered, while the second method (unsticky) employs an adaptive offset recovery method. In the first study, we explored the effect of positional offset and of motion alteration on users' behavioral adjustments and perception. In a second study, we evaluated variations in the sense of embodiment and the preference between the two control laws. Overall, both methods presented similar results in terms of performance and accuracy, yet positional offsets strongly impacted motion profiles and users' performance. Both methods also resulted in comparable levels of embodiment. Finally, participants usually expressed strong preferences toward one of the two methods, but these choices were individual-specific and did not appear to be correlated solely with characteristics external to the individuals. Taken together, these results highlight the relevance of exploring the customization of motion-control algorithms for avatars.
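The contrast between the two recovery strategies can be illustrated with a minimal sketch. The function names and the decay constant below are assumptions for illustration; the paper's actual control laws may differ.

```python
import math

def recover_offset_sticky(offset, real_hand_recovered):
    """'Sticky' recovery sketch: the virtual hand stays constrained
    (offset held) until the user's real hand has re-covered the
    mismatch, at which point the offset is released at once."""
    return 0.0 if real_hand_recovered else offset

def recover_offset_adaptive(offset, dt, rate=5.0):
    """'Unsticky' recovery sketch: the positional offset decays
    smoothly over time (exponential decay), blending the virtual
    hand back onto the real hand without a hard constraint."""
    return offset * math.exp(-rate * dt)
```

The sticky law trades motion freedom for a guaranteed real/virtual match, while the adaptive law trades a transient mismatch for uninterrupted control.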

3.
J Biomech ; 157: 111703, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37451207

ABSTRACT

Stepping strategies following external perturbations from different directions are investigated in this work. We analysed the effect of the perturbation angle as well as the level of awareness of individuals, and characterised steps out of the sagittal plane as Loaded Side Steps (LSS), Unloaded Medial Steps (UMS) or Unloaded Crossover Steps (UCS). A novel experimental paradigm involving perturbations in different directions was performed on a group of 21 young adults (10 females, 11 males, 20-38 years). Participants underwent 30 randomised perturbations along 5 different angles with different levels of awareness of the upcoming perturbations (with and without wearing a sensory impairment device), for a total of 1260 recorded trials. Results showed that logistic models based on the minimal values of the Margin of Stability (MoS) or on the minimal values of the Time to boundary (Ttb) performed best in the sagittal plane. Nonetheless, their accuracy stayed above 79% regardless of the perturbation angle or level of awareness. Regarding the effect of the experimental condition, evidence of different balance recovery behaviours due to the variation of perturbation angles was found, but no significant effect of the level of awareness was observed. Finally, we proposed the Distance to Foot boundary (DtFb) as a relevant quantity to characterise stepping strategies in response to perturbations out of the sagittal plane.
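The Margin of Stability mentioned above is commonly computed in the Hof-style form from the extrapolated centre of mass. A minimal one-dimensional sketch is given below; the paper's exact computation and conventions may differ.

```python
import math

def margin_of_stability(com_pos, com_vel, bos_boundary, leg_length, g=9.81):
    """1-D Margin of Stability sketch (Hof-style):
    XCoM = CoM position + CoM velocity / omega0, with omega0 = sqrt(g / l);
    MoS is the signed distance from the XCoM to the base-of-support
    boundary (positive when the XCoM lies inside the base of support)."""
    omega0 = math.sqrt(g / leg_length)
    xcom = com_pos + com_vel / omega0
    return bos_boundary - xcom
```

Intuitively, a forward CoM velocity pushes the extrapolated centre of mass towards the boundary, shrinking the margin and eventually forcing a recovery step.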


Subject(s)
Foot , Postural Balance , Female , Humans , Male , Young Adult , Biomechanical Phenomena , Foot/physiology , Postural Balance/physiology , Adult
4.
Article in English | MEDLINE | ID: mdl-37022858

ABSTRACT

Gaze behavior of virtual characters in video games and virtual reality experiences is a key factor of realism and immersion. Indeed, gaze plays many roles when interacting with the environment: not only does it indicate what characters are looking at, but it also plays an important role in verbal and non-verbal behaviors and in making virtual characters feel alive. Automatically computing gaze behaviors is, however, a challenging problem, and to date none of the existing methods are capable of producing close-to-real results in an interactive context. We therefore propose a novel method that leverages recent advances in several distinct areas: visual saliency, attention mechanisms, saccadic behavior modelling, and head-gaze animation techniques. Our approach combines these advances into a multi-map, saliency-driven model which offers real-time realistic gaze behaviors for non-conversational characters, together with additional user control over customizable features to compose a wide variety of results. We first evaluate the benefits of our approach through an objective evaluation that confronts our gaze simulation with ground-truth data from an eye-tracking dataset specifically acquired for this purpose. We then rely on subjective evaluation to measure the level of realism of gaze animations generated by our method, in comparison with gaze animations captured from real actors. Our results show that our method generates gaze behaviors that cannot be distinguished from captured gaze animations. Overall, we believe that these results open the way for more natural and intuitive design of realistic and coherent gaze animations for real-time applications.
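The core of a multi-map saliency-driven model is the combination of several per-pixel maps into one master map from which the next gaze target is chosen. The sketch below is an assumption about the general scheme (map contents and weights are illustrative), not the paper's exact model.

```python
def select_gaze_target(maps, weights):
    """Combine several saliency maps (e.g. image saliency, motion,
    task-relevant objects) into one master map by weighted sum, then
    return the (row, col) of the most salient cell as the next gaze
    target. Maps are nested lists of equal shape."""
    rows, cols = len(maps[0]), len(maps[0][0])
    best, best_val = (0, 0), float("-inf")
    for y in range(rows):
        for x in range(cols):
            val = sum(w * m[y][x] for m, w in zip(maps, weights))
            if val > best_val:
                best_val, best = val, (y, x)
    return best
```

A full model would additionally gate target switches with a saccade model and drive head/eye animation towards the selected target.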

5.
IEEE Trans Vis Comput Graph ; 28(5): 2047-2057, 2022 May.
Article in English | MEDLINE | ID: mdl-35167468

ABSTRACT

In virtual reality, several manipulation techniques distort users' motions, for example to reach remote objects or increase precision. These techniques can become problematic when used with avatars, as they create a mismatch between the action actually performed and the displayed action, which can negatively impact the sense of embodiment. In this paper, we propose to use a dual representation during anisomorphic interaction: a co-located representation serves as a spatial reference and reproduces the user's exact motion, while an interactive representation is used for the distorted interaction. We conducted two experiments, investigating the use of dual representations with amplified motion (the Go-Go technique) and decreased motion (the PRISM technique). Two visual appearances were explored for the interactive and co-located representations. This exploratory study showed that people globally preferred having a single representation, although opinions diverged for the Go-Go technique. We also could not find significant differences in terms of performance. While interacting seemed more important than showing exact movements for agency during out-of-reach manipulation, people felt more in control of the realistic arm during close manipulation.
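The two distortion techniques mentioned above have well-known forms: Go-Go extends the virtual reach non-linearly once the real hand passes a threshold distance from the body, while PRISM scales motion down at low hand speeds for precision. The sketch below uses example constants, not the study's exact parameters.

```python
def go_go_virtual_reach(r_real, d=0.45, k=1.0):
    """Go-Go-style mapping sketch: the virtual hand tracks the real
    hand 1:1 up to distance d (metres) from the body, then extends
    quadratically so that remote objects become reachable."""
    if r_real < d:
        return r_real
    return r_real + k * (r_real - d) ** 2

def prism_gain(speed, sc=0.2):
    """PRISM-style gain sketch: hand motion is scaled down when the
    hand moves slowly (precise mode) and passes through 1:1 at or
    above the scaling constant sc (metres per second)."""
    return min(speed / sc, 1.0)
```

Both mappings introduce exactly the real/virtual mismatch that motivates the dual-representation proposal: the co-located copy moves with `r_real`, the interactive copy with the distorted value.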


Subject(s)
Body Image , Virtual Reality , Computer Graphics , Hand , Humans , Movement
6.
IEEE Trans Vis Comput Graph ; 28(5): 2245-2255, 2022 May.
Article in English | MEDLINE | ID: mdl-35167473

ABSTRACT

Crowd motion data is fundamental for understanding and simulating realistic crowd behaviours. Such data is usually collected through controlled experiments to ensure that both the desired individual interactions and collective behaviours can be observed. It is however scarce, due to the ethical concerns and logistical difficulties involved in its gathering, and only covers a few typical crowd scenarios. In this work, we propose and evaluate a novel Virtual-Reality-based approach that lifts the limitations of real-world experiments for the acquisition of crowd motion data. Our approach immerses a single user in virtual scenarios where he/she successively acts as each crowd member. By recording the user's past trajectories and body movements, and displaying them on virtual characters, the user progressively builds the overall crowd behaviour by him/herself. We validate the feasibility of our approach by replicating three real experiments, and compare both the resulting emergent phenomena and the individual interactions to existing real datasets. Our results suggest that realistic collective behaviours can naturally emerge from virtual crowd data generated using our approach, even though the variety in behaviours is lower than in real situations. These results provide valuable insights for the building of virtual crowd experiences, and reveal key directions for further improvements.


Subject(s)
Computer Graphics , Virtual Reality , Crowding , Female , Humans , Male , Motion , Movement
7.
IEEE Trans Vis Comput Graph ; 28(7): 2589-2601, 2022 Jul.
Article in English | MEDLINE | ID: mdl-33253117

ABSTRACT

Virtual reality (VR) is a valuable experimental tool for studying human movement, including the analysis of interactions during locomotion tasks for developing crowd simulation algorithms. However, these studies are generally limited to distant interactions in crowds, due to the difficulty of rendering realistic sensations of collisions in VR. In this article, we explore the use of wearable haptics to render contacts during virtual crowd navigation. We focus on the behavioral changes occurring with or without haptic rendering during a navigation task in a dense crowd, as well as on potential after-effects introduced by the use of haptic rendering. Our objective is to provide recommendations for designing VR setups to study crowd navigation behavior. To this end, we designed an experiment (N=23) where participants navigated in a crowded virtual train station first without, then with, and then again without haptic feedback of their collisions with virtual characters. Results show that providing haptic feedback improved the overall realism of the interaction, as participants more actively avoided collisions. We also noticed a significant after-effect in the users' behavior when haptic rendering was once again disabled in the third part of the experiment. Nonetheless, haptic feedback did not have any significant impact on the users' sense of presence and embodiment.


Subject(s)
Haptic Technology , Virtual Reality , Computer Graphics , Computer Simulation , Feedback , Humans
8.
IEEE Trans Vis Comput Graph ; 27(10): 4023-4038, 2021 Oct.
Article in English | MEDLINE | ID: mdl-32746257

ABSTRACT

In this article, we introduce a concept called "virtual co-embodiment", which enables a user to share their virtual avatar with another entity (e.g., another user, robot, or autonomous agent). We describe a proof-of-concept in which two users can be immersed from a first-person perspective in a virtual environment and can have complementary levels of control (total, partial, or none) over a shared avatar. In addition, we conducted an experiment to investigate the influence of users' level of control over the shared avatar and prior knowledge of their actions on the users' sense of agency and motor actions. The results showed that participants are good at estimating their real level of control but significantly overestimate their sense of agency when they can anticipate the motion of the avatar. Moreover, participants performed similar body motions regardless of their real control over the avatar. The results also revealed that the internal dimension of the locus of control, which is a personality trait, is negatively correlated with the user's perceived level of control. The combined results open up a new range of applications in the fields of virtual-reality-based training and collaborative teleoperation, where users would be able to share their virtual body.
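The notion of complementary levels of control over one shared avatar can be sketched as a pose blend between the two users, with a weight expressing user A's level of control. This is an illustrative assumption about the general scheme, not the authors' implementation; a real system would blend joint rotations (e.g. quaternion slerp) rather than scalar joint values.

```python
def blend_shared_pose(pose_a, pose_b, alpha):
    """Virtual co-embodiment sketch: the shared avatar's pose is an
    interpolation of the two users' poses, where alpha in [0, 1] is
    user A's level of control (1 = total, 0.5 = partial, 0 = none).
    Poses are flat lists of scalar joint values for illustration."""
    return [alpha * a + (1 - alpha) * b for a, b in zip(pose_a, pose_b)]
```

Under partial control (e.g. `alpha = 0.5`), neither user fully drives the avatar, which is precisely the condition under which the reported agency overestimation becomes interesting.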


Subject(s)
Computer Graphics , Self Concept , Virtual Reality , Humans , Male , Psychomotor Performance
9.
IEEE Trans Vis Comput Graph ; 26(5): 2062-2072, 2020 May.
Article in English | MEDLINE | ID: mdl-32070975

ABSTRACT

In Virtual Reality, a number of studies have been conducted to assess the influence of avatar appearance, avatar control and user point of view on the Sense of Embodiment (SoE) towards a virtual avatar. However, such studies tend to explore each factor in isolation. This paper aims to better understand the inter-relations among these three factors by conducting a subjective matching experiment. In the presented experiment (n=40), participants had to match a given "optimal" SoE avatar configuration (realistic avatar, full-body motion capture, first-person point of view), starting from a "minimal" SoE configuration (minimal avatar, no control, third-person point of view), by iteratively increasing the level of each factor. The participants' choices provide insights into their preferences and perception of the three factors considered. Moreover, the subjective matching procedure was conducted in the context of four different interaction tasks, with the goal of covering a wide range of actions an avatar can perform in a virtual environment (VE). The paper also describes a baseline experiment (n=20) which was used to define the number and order of the different levels for each factor, prior to the subjective matching experiment (e.g. different degrees of realism ranging from abstract to personalised avatars for the visual appearance). The results of the subjective matching experiment show, first, that users consistently increased point of view and control levels before appearance levels when it came to enhancing the SoE. Second, several configurations were identified that provide an SoE equivalent to the one felt in the optimal configuration, but these vary between tasks. Taken together, our results provide valuable insights about which factors to prioritize in order to enhance the SoE towards an avatar in different tasks, and about configurations which lead to a fulfilling SoE in VEs.
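The subjective matching procedure described above can be sketched as a loop in which the participant repeatedly picks one factor to raise by one level until the felt SoE matches the optimal configuration. The callbacks below stand in for participant judgments; the structure is an assumption for illustration, not the study's protocol.

```python
def subjective_matching(max_levels, feels_matched, choose_factor):
    """Subjective matching sketch: start from the minimal configuration
    (all factors at level 0) and let the participant iteratively raise
    one factor (appearance, control, point of view) by one level until
    the configuration subjectively matches the optimal one.
    max_levels maps factor name -> highest level; feels_matched and
    choose_factor stand in for the participant's judgments."""
    config = {f: 0 for f in max_levels}
    while not feels_matched(config):
        f = choose_factor(config)
        if config[f] < max_levels[f]:  # never exceed the defined levels
            config[f] += 1
    return config
```

Logging which factor is chosen at each step yields exactly the ordering data the paper analyses (e.g. point of view and control being raised before appearance).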
