Results 1 - 7 of 7
1.
IEEE Trans Vis Comput Graph ; 30(5): 2785-2795, 2024 May.
Article in English | MEDLINE | ID: mdl-38437106

ABSTRACT

While data is vital to better understand and model interactions within human crowds, capturing real crowd motions is extremely challenging. Virtual Reality (VR) has demonstrated its potential to help, by immersing users either into simulated virtual crowds based on autonomous agents, or within motion-capture-based crowds. In the latter case, users' own captured motion can be used to progressively extend the size of the crowd, a paradigm called Record-and-Replay (2R). However, both approaches have demonstrated several limitations which impact the quality of the acquired crowd data. In this paper, we propose the new concept of contextual crowds, which leverages both crowd simulation and the 2R paradigm towards more consistent crowd data. We evaluate two strategies to implement it: a Replace-Record-Replay (3R) paradigm, where users are initially immersed in a simulated crowd whose agents are successively replaced by the user's captured data, and a Replace-Record-Replay-Responsive (4R) paradigm, where the pre-recorded agents are additionally endowed with responsive capabilities. These two paradigms are evaluated through two real-world-based scenarios replicated in VR. Our results suggest that surrounding users with agents from the very beginning of the recording process makes their observed behaviors much more natural, enabling the 3R and 4R paradigms to improve the consistency of captured crowd datasets.
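The Replace-Record-Replay idea described in this abstract can be sketched in a few lines. The names, the trajectory representation, and the replacement policy below are all illustrative assumptions, not the paper's actual implementation:

```python
import random

def record_and_replace(agents, num_passes, capture_fn, rng=random.Random(0)):
    """Toy 3R loop: the user acts once per pass, and each captured
    trajectory replaces one simulated agent in the surrounding crowd."""
    recorded = []
    for i in range(num_passes):
        # The user is immersed among `agents` (simulated + already recorded)
        # and their motion is captured for this pass.
        trajectory = capture_fn(i)
        recorded.append(trajectory)
        # Swap one simulated agent for the freshly captured data.
        if agents:
            agents.pop(rng.randrange(len(agents)))
    return recorded, agents

# Toy capture: a straight-line trajectory of (time, pass-index) samples.
recorded, remaining = record_and_replace(
    agents=["sim"] * 5,
    num_passes=3,
    capture_fn=lambda i: [(t, i) for t in range(4)],
)
```

In the 4R variant the already-recorded trajectories would additionally be adjusted at replay time to react to the live user, rather than being played back verbatim.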

2.
IEEE Trans Image Process ; 31: 1857-1869, 2022.
Article in English | MEDLINE | ID: mdl-35139016

ABSTRACT

We present See360, a versatile and efficient framework for 360° panoramic view interpolation using latent space viewpoint estimation. Most existing view rendering approaches focus only on indoor or synthetic 3D environments and render new views of small objects. In contrast, we propose tackling camera-centered view synthesis as a 2D affine transformation without using point clouds or depth maps, which enables effective 360° panoramic scene exploration. Given a pair of reference images, the See360 model learns to render novel views through a novel Multi-Scale Affine Transformer (MSAT), enabling coarse-to-fine feature rendering. We also propose a Conditional Latent space AutoEncoder (C-LAE) to achieve view interpolation at arbitrary angles. To show the versatility of our method, we introduce four training datasets, namely UrbanCity360, Archinterior360, HungHom360 and Lab360, collected from indoor and outdoor environments for both real and synthetic rendering. Experimental results show that the proposed method is generic enough to achieve real-time rendering of arbitrary views for all four datasets. In addition, with only a short extra training time (approximately 10 minutes), our See360 model can be applied to view synthesis in the wild and render unknown real-world scenes. The superior performance of See360 opens up a promising direction for camera-centered view rendering and 360° panoramic view interpolation.
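The core idea of treating view synthesis as a 2D affine transformation can be illustrated with a minimal sketch: blend two affine maps and apply the result to 2D points. This is a toy stand-in under stated assumptions, not the MSAT or C-LAE architecture:

```python
def affine_interp(src_pts, A0, A1, alpha):
    """Linearly blend two 2D affine maps (a, b, c, d, tx, ty) and apply
    the blended map -- a toy analogue of interpolating between two
    camera viewpoints without point clouds or depth maps."""
    A = tuple((1 - alpha) * p + alpha * q for p, q in zip(A0, A1))
    a, b, c, d, tx, ty = A
    return [(a * x + b * y + tx, c * x + d * y + ty) for x, y in src_pts]

identity = (1, 0, 0, 1, 0, 0)
shift_right = (1, 0, 0, 1, 10, 0)   # translate 10 px along x
pts = [(0, 0), (5, 5)]
halfway = affine_interp(pts, identity, shift_right, 0.5)
```

In the paper the transformation parameters are estimated in a learned latent space rather than interpolated linearly, and are applied to multi-scale feature maps instead of raw points.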

3.
IEEE Trans Vis Comput Graph ; 28(5): 2245-2255, 2022 05.
Article in English | MEDLINE | ID: mdl-35167473

ABSTRACT

Crowd motion data is fundamental for understanding and simulating realistic crowd behaviours. Such data is usually collected through controlled experiments to ensure that both the desired individual interactions and collective behaviours can be observed. It is nevertheless scarce, due to the ethical concerns and logistical difficulties involved in gathering it, and covers only a few typical crowd scenarios. In this work, we propose and evaluate a novel Virtual Reality based approach that lifts the limitations of real-world experiments for the acquisition of crowd motion data. Our approach immerses a single user in virtual scenarios where he/she successively acts as each crowd member. By recording the user's past trajectories and body movements, and displaying them on virtual characters, the user progressively builds the overall crowd behaviour by him/herself. We validate the feasibility of our approach by replicating three real experiments, and compare both the resulting emergent phenomena and the individual interactions to existing real datasets. Our results suggest that realistic collective behaviours can naturally emerge from virtual crowd data generated with our approach, even though the variety in behaviours is lower than in real situations. These results provide valuable insights for building virtual crowd experiences, and reveal key directions for further improvements.


Subject(s)
Computer Graphics; Virtual Reality; Crowding; Female; Humans; Male; Motion; Movement
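The replay step at the heart of this approach can be sketched as follows. The frame representation and clamping behaviour are illustrative assumptions, not the paper's implementation:

```python
def replay_crowd(past_trajectories, t):
    """At time t, replay each previously recorded trajectory on a virtual
    character, so that a single user gradually populates the crowd with
    copies of his/her own earlier passes."""
    frame = []
    for traj in past_trajectories:
        # Clamp to the last sample if the replay outlives the capture.
        idx = min(t, len(traj) - 1)
        frame.append(traj[idx])
    return frame

runs = [
    [(0, 0), (1, 0), (2, 0)],   # first pass, recorded by the user
    [(0, 1), (1, 1)],           # second pass, replayed alongside the first
]
frame_at_1 = replay_crowd(runs, 1)
```

Each new pass is then recorded while all earlier passes play back around the user, which is how the overall crowd behaviour is built up incrementally.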
4.
IEEE Trans Vis Comput Graph ; 24(5): 1756-1769, 2018 05.
Article in English | MEDLINE | ID: mdl-28368822

ABSTRACT

Most mountain ranges are formed by the compression and folding of colliding tectonic plates. Subduction of one plate causes large-scale asymmetry, while their layered composition (or stratigraphy) explains the multi-scale folded strata observed on real terrains. We introduce a novel interactive modeling technique to generate visually plausible, large-scale terrains that capture these phenomena. Our method draws on both geological knowledge for consistency and on sculpting systems for user interaction. The user is given hands-on control over the shape and motion of tectonic plates, represented using a new geologically-inspired model of the Earth's crust. The model captures their volume-preserving and complex folding behaviors under collision, causing mountains to grow. It generates a volumetric uplift map representing the growth rate of subsurface layers. Erosion and uplift movement are jointly simulated to generate the terrain. The stratigraphy allows us to render folded strata on eroded cliffs. We validated the usability of our sculpting interface through a user study, and compared the visual consistency of the Earth crust model with geological simulation results and real terrains.
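The joint uplift/erosion simulation mentioned above can be sketched on a 1-D height profile. The slope-proportional erosion law and all coefficients here are illustrative assumptions, not the paper's geological model:

```python
def step_terrain(height, uplift, dt, k):
    """One explicit step of a toy joint uplift/erosion update: tectonic
    uplift raises the crust while a Laplacian smoothing term stands in
    for erosion and sediment transport."""
    n = len(height)
    new = []
    for i in range(n):
        left = height[max(i - 1, 0)]
        right = height[min(i + 1, n - 1)]
        erosion = k * (height[i] - 0.5 * (left + right))
        new.append(height[i] + uplift[i] * dt - erosion * dt)
    return new

h = [0.0] * 5
u = [0.0, 0.0, 1.0, 0.0, 0.0]   # uplift concentrated mid-profile
for _ in range(10):
    h = step_terrain(h, u, dt=0.1, k=0.5)
```

Running the loop makes a ridge grow where the uplift map is strongest while erosion spreads material to the flanks, which is the qualitative behaviour the abstract describes on a full volumetric model.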

5.
IEEE Trans Vis Comput Graph ; 13(2): 213-34, 2007.
Article in English | MEDLINE | ID: mdl-17218740

ABSTRACT

Realistic hair modeling is a fundamental part of creating virtual humans in computer graphics. This paper surveys the state of the art in the major topics of hair modeling: hairstyling, hair simulation, and hair rendering. Because of the difficult, often unsolved problems that arise in all of these areas, a broad diversity of approaches is used, each with strengths that make it appropriate for particular applications. We discuss each of these major topics in turn, presenting the unique challenges facing each area and describing the solutions that have been proposed over the years to handle these complex issues. Finally, we outline some of the remaining computational challenges in hair modeling.


Subject(s)
Computer Graphics; Hair/anatomy & histology; Hair/physiology; Imaging, Three-Dimensional/methods; Models, Anatomic; Models, Biological; Computer Simulation; Elasticity; Humans; Movement/physiology
7.
IEEE Trans Vis Comput Graph ; 10(6): 708-18, 2004.
Article in English | MEDLINE | ID: mdl-15527052

ABSTRACT

This work aims at developing a VR-based trainer for colon cancer removal. It enables surgeons to interactively view and manipulate the relevant virtual organs as during real surgery. First, we present a method for animating the small intestine and the mesentery (the tissue that connects it to the main vessels) in real time, enabling user interaction through virtual surgical tools during the simulation. We present a stochastic approach for fast collision detection in highly deformable, self-colliding objects. A simple and efficient collision response is also introduced in order to reduce the overall animation complexity. Second, we describe a new method based on generalized cylinders for fast rendering of the intestine. An efficient curvature detection method, along with an adaptive sampling algorithm, is presented. This approach, while providing improved tessellation without the classical self-intersection problem, also allows for high-performance rendering thanks to the 3D skinning feature available in recent GPUs. The rendering algorithm is also designed to ensure a guaranteed frame rate. Finally, we present quantitative results of the simulations and describe the qualitative feedback obtained from the surgeons.


Subject(s)
Computer-Assisted Instruction; Digestive System Surgical Procedures/methods; User-Computer Interface; Algorithms; Biomedical Engineering; Colonic Neoplasms/surgery; Computer Graphics; Computer Simulation; Computer Systems; Digestive System Surgical Procedures/statistics & numerical data; Humans; Intestine, Small/anatomy & histology; Intestine, Small/surgery; Models, Anatomic
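The stochastic self-collision test mentioned in this abstract can be sketched as random pair sampling over surface points. The sampling scheme, the neighbour-skipping rule, and the 2-D setting below are illustrative assumptions, not the paper's method:

```python
import random

def stochastic_collisions(points, radius, num_samples, rng=random.Random(0)):
    """Randomly sample pairs of surface points and report the non-adjacent
    pairs that come closer than `radius` -- a toy stand-in for stochastic
    self-collision detection on a deformable curve."""
    hits = set()
    n = len(points)
    for _ in range(num_samples):
        i, j = rng.randrange(n), rng.randrange(n)
        if abs(i - j) <= 1:          # neighbours along the curve always touch
            continue
        (x1, y1), (x2, y2) = points[i], points[j]
        if (x1 - x2) ** 2 + (y1 - y2) ** 2 < radius ** 2:
            hits.add((min(i, j), max(i, j)))
    return hits

# A curve that folds back on itself: its endpoints nearly coincide.
curve = [(0, 0), (3, 0), (3, 3), (0, 3), (0, 0.2)]
found = stochastic_collisions(curve, radius=0.5, num_samples=200)
```

The appeal of such sampling is that its cost is controlled by the sample budget rather than the square of the number of surface points, which is what makes it attractive for highly deformable, self-colliding geometry at interactive rates.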