1.
IEEE Trans Vis Comput Graph ; 30(5): 2785-2795, 2024 May.
Article in English | MEDLINE | ID: mdl-38437106

ABSTRACT

While data is vital to better understand and model interactions within human crowds, capturing real crowd motions is extremely challenging. Virtual Reality (VR) has demonstrated its potential to help, by immersing users either in simulated virtual crowds based on autonomous agents or in motion-capture-based crowds. In the latter case, users' own captured motion can be used to progressively extend the size of the crowd, a paradigm called Record-and-Replay (2R). However, both approaches have demonstrated several limitations that impact the quality of the acquired crowd data. In this paper, we propose the new concept of contextual crowds, which leverages both crowd simulation and the 2R paradigm to obtain more consistent crowd data. We evaluate two strategies to implement it: a Replace-Record-Replay (3R) paradigm, where users are initially immersed in a simulated crowd whose agents are successively replaced by the user's captured data, and a Replace-Record-Replay-Responsive (4R) paradigm, where the pre-recorded agents are additionally endowed with responsive capabilities. These two paradigms are evaluated through two real-world-based scenarios replicated in VR. Our results suggest that surrounding users with agents from the beginning of the recording process makes their observed behaviors much more natural, enabling the 3R and 4R paradigms to improve the consistency of captured crowd datasets.
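
A minimal sketch of how the 3R acquisition loop could be organized follows; all function names and the trajectory format are illustrative assumptions, not the authors' published implementation:

```python
# Hypothetical sketch of a Replace-Record-Replay (3R) acquisition loop.
# All names and data structures are illustrative assumptions; the paper
# does not publish an implementation.
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    positions: list = field(default_factory=list)  # (t, x, y) samples

def simulate_agent(agent_id: int, steps: int) -> Trajectory:
    """Placeholder for a crowd-simulation model driving one agent."""
    return Trajectory([(t, float(agent_id), float(t)) for t in range(steps)])

def record_user(steps: int) -> Trajectory:
    """Placeholder for motion capture of the immersed VR user."""
    return Trajectory([(t, 0.0, 0.5 * t) for t in range(steps)])

def replace_record_replay(n_agents: int, steps: int) -> list:
    """Successively replace simulated agents with recorded user motion."""
    crowd = [simulate_agent(i, steps) for i in range(n_agents)]
    for i in range(n_agents):
        # The user takes the place of agent i; the other slots replay
        # either earlier recordings or the remaining simulated agents.
        crowd[i] = record_user(steps)
    return crowd

dataset = replace_record_replay(n_agents=5, steps=100)
print(len(dataset), "recorded trajectories")
```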

2.
Article in English | MEDLINE | ID: mdl-37022858

ABSTRACT

Gaze behavior of virtual characters in video games and virtual reality experiences is a key factor in realism and immersion. Indeed, gaze plays many roles when interacting with the environment: not only does it indicate what characters are looking at, but it also contributes to verbal and non-verbal behaviors and to making virtual characters appear alive. Automatically computing gaze behaviors is, however, a challenging problem, and to date none of the existing methods produces close-to-real results in an interactive context. We therefore propose a novel method that leverages recent advances in several distinct areas: visual saliency, attention mechanisms, saccadic behavior modelling, and head-gaze animation techniques. Our approach combines these advances into a multi-map saliency-driven model that offers real-time, realistic gaze behaviors for non-conversational characters, together with user control over customizable features to compose a wide variety of results. We first evaluate the benefits of our approach through an objective evaluation that compares our gaze simulation with ground-truth data from an eye-tracking dataset acquired specifically for this purpose. We then rely on a subjective evaluation to measure the realism of gaze animations generated by our method against gaze animations captured from real actors. Our results show that our method generates gaze behaviors that cannot be distinguished from captured gaze animations. Overall, we believe these results open the way for more natural and intuitive design of realistic and coherent gaze animations for real-time applications.
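
As an illustration of the multi-map idea, the toy sketch below combines several per-cue saliency maps into one distribution and samples a gaze target from it; the map names, weights, and sampling rule are assumptions made here for illustration only:

```python
# Toy sketch of multi-map saliency-driven gaze target selection.
# Map names, weights, and the sampling rule are illustrative assumptions.
import numpy as np

def combine_maps(maps: dict, weights: dict) -> np.ndarray:
    """Weighted sum of per-cue saliency maps, normalized to a distribution."""
    combined = sum(weights[name] * maps[name] for name in maps)
    combined = np.clip(combined, 0.0, None)
    return combined / combined.sum()

def sample_gaze_target(saliency: np.ndarray, rng) -> tuple:
    """Draw a gaze target pixel with probability proportional to saliency."""
    flat = rng.choice(saliency.size, p=saliency.ravel())
    return np.unravel_index(flat, saliency.shape)

rng = np.random.default_rng(0)
h, w = 48, 64
maps = {
    "image": rng.random((h, w)),   # bottom-up visual saliency
    "motion": rng.random((h, w)),  # moving objects attract gaze
    "task": rng.random((h, w)),    # top-down, user-controllable bias
}
weights = {"image": 0.4, "motion": 0.4, "task": 0.2}
target = sample_gaze_target(combine_maps(maps, weights), rng)
print("gaze target (row, col):", target)
```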

3.
Behav Res Methods ; 55(6): 2940-2959, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36002630

ABSTRACT

In the process of making a movie, directors constantly care about where the spectator will look on the screen. Shot composition, framing, camera movements, and editing are tools commonly used to direct attention. To provide a quantitative analysis of the relationship between these tools and gaze patterns, we propose a new eye-tracking database containing gaze-pattern information on movie sequences, together with editing annotations, and we show how state-of-the-art computational saliency techniques behave on this dataset. In this work, we expose strong links between movie editing and spectators' gaze distributions, and open several leads on how knowledge of editing information could improve human visual attention modeling for cinematic content. The dataset generated and analyzed for this study is available at https://github.com/abruckert/eye_tracking_filmmaking.
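
One standard way such recorded gaze data can be confronted with computational saliency predictions is the Normalized Scanpath Saliency (NSS) metric; the sketch below uses synthetic arrays as stand-ins for real dataset frames and fixations:

```python
# Minimal sketch of the Normalized Scanpath Saliency (NSS) metric, a
# common score for a saliency map against recorded fixation locations.
# The synthetic map and fixation points stand in for real dataset frames.
import numpy as np

def nss(saliency: np.ndarray, fixations: list) -> float:
    """Mean z-scored saliency at fixated pixels; higher is better."""
    z = (saliency - saliency.mean()) / saliency.std()
    return float(np.mean([z[r, c] for r, c in fixations]))

rng = np.random.default_rng(1)
saliency_map = rng.random((270, 480))       # model prediction for one frame
fixation_points = [(100, 240), (135, 260)]  # recorded gaze samples (row, col)
print("NSS:", nss(saliency_map, fixation_points))
```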


Subject(s)
Eye Movements; Motion Pictures; Humans; Movement; Fixation, Ocular
4.
IEEE Trans Vis Comput Graph ; 28(5): 2245-2255, 2022 May.
Article in English | MEDLINE | ID: mdl-35167473

ABSTRACT

Crowd motion data is fundamental for understanding and simulating realistic crowd behaviours. Such data is usually collected through controlled experiments to ensure that both the desired individual interactions and collective behaviours can be observed. It is however scarce, due to the ethical concerns and logistical difficulties involved in gathering it, and covers only a few typical crowd scenarios. In this work, we propose and evaluate a novel Virtual-Reality-based approach that lifts the limitations of real-world experiments for the acquisition of crowd motion data. Our approach immerses a single user in virtual scenarios where they successively act out each crowd member. By recording the user's past trajectories and body movements and replaying them on virtual characters, the user progressively builds the overall crowd behaviour single-handedly. We validate the feasibility of our approach by replicating three real experiments, and compare both the resulting emergent phenomena and the individual interactions to existing real datasets. Our results suggest that realistic collective behaviours can naturally emerge from virtual crowd data generated with our approach, even though the variety of behaviours is lower than in real situations. These results provide valuable insights for building virtual crowd experiences, and reveal key directions for further improvement.
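
An illustrative sketch of this record-and-replay buildup, where each captured take is replayed around the user during the next session, might look as follows; the function names and trajectory format are hypothetical:

```python
# Illustrative sketch of the record-and-replay idea: one user builds up
# a crowd by acting each member in turn while previous takes replay.
# Function names and the trajectory format are assumptions.

def capture_take(member_id: int, replayed: list) -> list:
    """Placeholder for one VR session: the user moves among the
    `replayed` characters; returns the new trajectory as (t, x, y)."""
    return [(t, member_id * 0.5, t * 0.1) for t in range(50)]

def build_crowd(n_members: int) -> list:
    takes = []  # trajectories captured so far
    for member in range(n_members):
        # Earlier takes are replayed on virtual characters around the
        # user, so interactions with the recorded crowd stay consistent.
        takes.append(capture_take(member, replayed=takes))
    return takes

crowd_data = build_crowd(4)
print(f"captured {len(crowd_data)} member trajectories")
```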


Subject(s)
Computer Graphics; Virtual Reality; Crowding; Female; Humans; Male; Motion; Movement
5.
IEEE Trans Haptics ; 8(1): 114-118, 2015.
Article in English | MEDLINE | ID: mdl-25532190

ABSTRACT

Today, haptic feedback can be designed and associated with audiovisual content (haptic-audiovisuals, or HAV). Although there are multiple means to create individual haptic effects, the question of how to properly adapt such effects to force-feedback devices has not been addressed and remains a mostly manual endeavor. We propose a new approach for the haptic rendering of HAV, based on a washout filter for force-feedback devices. A body model and an inverse kinematics algorithm simulate the user's kinesthetic perception; the haptic rendering is then adapted to handle transitions between haptic effects and to optimize effect amplitudes with respect to the device's capabilities. Results of a user study show that this new haptic rendering successfully improves the HAV experience.
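
A toy sketch of a washout filter follows: it passes transient force commands but lets sustained ones decay, so the device re-centers between effects instead of drifting to its limits. The gains and time constant are assumptions for illustration, not the paper's parameters:

```python
# Toy first-order washout (high-pass) filter for a force-feedback device:
# transient haptic effects pass through, while sustained commands decay
# toward zero so the device re-centers between effects.
# The time constant and sample rate are illustrative assumptions.

def washout(commands, dt=0.01, tau=0.5):
    """First-order high-pass (washout) of a force command sequence."""
    alpha = tau / (tau + dt)
    out, prev_in, prev_out = [], 0.0, 0.0
    for u in commands:
        y = alpha * (prev_out + u - prev_in)  # discrete high-pass step
        out.append(y)
        prev_in, prev_out = u, y
    return out

# A step effect: a constant force request that the filter washes out.
step = [1.0] * 200
filtered = washout(step)
print(f"initial response {filtered[0]:.2f}, after 2 s {filtered[-1]:.3f}")
```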


Subject(s)
Feedback , Kinesthesis/physiology , Touch , Visual Perception/physiology , Algorithms , Computer Simulation , Humans , Models, Biological , Physical Stimulation/methods , Touch/physiology , User-Computer Interface