Results 1 - 5 of 5
1.
IEEE Trans Vis Comput Graph ; 30(5): 2444-2453, 2024 May.
Article in English | MEDLINE | ID: mdl-38437083

ABSTRACT

Virtual Reality (VR) offers an immersive 3D digital environment, but enabling natural walking sensations without the constraints of physical space remains a technological challenge. Previous VR locomotion methods, including game controllers, teleportation, treadmills, walking-in-place, and redirected walking (RDW), have made strides toward overcoming this challenge. However, these methods also face limitations such as possible unnaturalness, additional hardware requirements, or motion sickness risks. This paper introduces "Spatial Contraction (SC)", an innovative VR locomotion method inspired by the phenomenon of Lorentz contraction in Special Relativity. Similar to the Lorentz contraction, our SC contracts the virtual space along the user's velocity direction in response to velocity variation. The virtual space contracts more when the user's speed is high, whereas minimal or no contraction happens at low speeds. We provide a virtual space transformation method for spatial contraction and optimize the user experience for smoothness and stability. Through SC, VR users can effectively traverse a longer virtual distance with less physical walking. Unlike locomotion gains, the spatial contraction effect is observable by the user and aligns with their intentions, so there is no inconsistency between the user's proprioception and visual perception. SC is a general locomotion method that has no special requirements for VR scenes. The experimental results of our live user studies in various virtual scenarios demonstrate that SC significantly reduces both the number of resets and the physical walking distance users need to cover. Furthermore, experiments have also demonstrated that SC has the potential for integration with existing locomotion techniques such as RDW.
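As an illustrative sketch only (not the paper's actual implementation), a Lorentz-style contraction of virtual space along the walking direction could look like the following; the speed cap `v_max`, the clamp `k_min`, and all function names are assumptions for the example:

```python
import numpy as np

def contraction_factor(speed, v_max=2.0, k_min=0.5):
    """Speed-dependent contraction factor, shaped like the Lorentz
    term sqrt(1 - beta^2): ~1 (no contraction) at rest, stronger
    contraction as speed approaches v_max."""
    beta = min(speed / v_max, 0.99)        # normalized speed in [0, 1)
    k = np.sqrt(1.0 - beta ** 2)
    return max(k, k_min)                   # clamp so the scene stays usable

def contract_space(points, user_pos, velocity, v_max=2.0):
    """Contract virtual-space points along the user's velocity direction,
    leaving perpendicular components unchanged."""
    speed = np.linalg.norm(velocity)
    if speed < 1e-6:
        return points.copy()               # no contraction at (near) rest
    d = velocity / speed                   # unit walking direction
    k = contraction_factor(speed, v_max)
    rel = points - user_pos                # positions relative to the user
    along = rel @ d                        # component along the direction
    # rel = along*d + perp  ->  contracted = k*along*d + perp
    contracted = rel + np.outer((k - 1.0) * along, d)
    return user_pos + contracted
```

With this shape, a target 10 m ahead at high speed appears only 5 m away (with `k_min=0.5`), so the user covers it with half the physical walking, while lateral geometry is untouched.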

2.
IEEE Trans Vis Comput Graph ; 30(5): 2693-2702, 2024 May.
Article in English | MEDLINE | ID: mdl-38437103

ABSTRACT

Redirected walking (RDW) facilitates user navigation within expansive virtual spaces despite the constraints of limited physical spaces. It employs discrepancies between human visual-proprioceptive sensations, known as gains, to enable the remapping of virtual and physical environments. In this paper, we explore how to apply rotation gain while the user is walking. We propose applying a rotation gain so that the user rotates by a different angle when reciprocating a previous head rotation, thereby steering the user toward a desired direction. To apply the gains imperceptibly based on such a Bidirectional Rotation gain Difference (BiRD), we conduct both measurement and verification experiments on the detection thresholds of the rotation gain for reciprocating head rotations during walking. Unlike previous rotation gains, which are measured while users turn in place (standing or sitting), BiRD is measured during users' walking. Our study offers a critical assessment of the acceptable range of rotational mapping differences for different rotational orientations across the user's walking experience, contributing an effective tool for redirecting users in virtual environments.


Subject(s)
Computer Graphics , Walking , Humans , Animals , Orientation , Environment , Birds
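A minimal sketch of the idea behind a bidirectional rotation gain difference, assuming the abstract's description rather than the authors' code (the function name and degree-based units are illustrative):

```python
def bird_gain(physical_delta, prev_direction, gain):
    """Map a physical head-rotation increment (degrees) to a virtual one.
    The gain is applied only when the rotation reverses direction relative
    to the previous swing (the reciprocating phase); forward swings are
    mapped 1:1. Returns (virtual_delta, current_direction)."""
    direction = 1 if physical_delta >= 0 else -1
    if prev_direction is not None and direction != prev_direction:
        # Reciprocating swing: amplify (gain > 1) or attenuate (gain < 1).
        return physical_delta * gain, direction
    return physical_delta, direction
```

For example, a +30° physical swing followed by a -30° return with `gain = 1.2` yields a -36° virtual return, leaving a net 6° virtual steering offset while the user's head is back at its physical starting orientation.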
3.
IEEE Trans Vis Comput Graph ; 30(4): 1916-1926, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37028008

ABSTRACT

With the recent rise of the Metaverse, online multiplayer VR applications are becoming increasingly prevalent worldwide. However, as multiple users are located in different physical environments (PEs), differing reset frequencies and timings can lead to serious fairness issues for online collaborative/competitive VR applications. For the fairness of online VR apps/games, an ideal online RDW strategy must equalize the locomotion opportunities of different users, regardless of their PE layouts. Existing RDW methods lack a scheme to coordinate multiple users in different PEs, and thus trigger too many resets for all users under the locomotion fairness constraint. We propose a novel multi-user RDW method that significantly reduces the overall number of resets and gives users a better immersive experience by providing fair exploration. Our key idea is to first find the "bottleneck" user who may cause all users to be reset and estimate the time until that reset given the users' next targets, and then redirect all users to favorable poses during that maximized bottleneck time so that subsequent resets can be postponed as much as possible. More particularly, we develop methods to estimate the time of possibly encountering obstacles and the reachable area of a specific pose, enabling prediction of the next reset caused by any user. Our experiments and user study show that our method outperforms existing RDW methods in online VR applications.
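The bottleneck-user idea can be sketched in a toy form: assuming each user walks straight in a square room centered at the origin (a simplification of the paper's obstacle-time estimation; all names and the room model are assumptions), the bottleneck is simply the user whose time-to-wall is smallest:

```python
import math

def time_to_boundary(pos, heading, speed, half_extent):
    """Time until a straight-walking user hits a wall of a square room
    centered at the origin with side length 2 * half_extent.
    pos and heading are 2D tuples; heading is a unit vector."""
    times = []
    for p, h in zip(pos, heading):
        if abs(h) > 1e-9:
            wall = half_extent if h > 0 else -half_extent
            t = (wall - p) / (h * speed)   # time to reach that wall plane
            if t > 0:
                times.append(t)
    return min(times) if times else math.inf

def bottleneck_user(users, half_extent=2.0):
    """Index of the user whose next reset comes soonest; redirection
    effort is best spent maximizing this user's remaining time."""
    return min(range(len(users)),
               key=lambda i: time_to_boundary(users[i]["pos"],
                                              users[i]["heading"],
                                              users[i]["speed"],
                                              half_extent))
```

A real controller would replace the square-room clearance with the paper's obstacle-encounter-time and reachable-area estimates, but the scheduling principle (steer everyone to favorable poses before the bottleneck's deadline) is the same.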

4.
IEEE Trans Vis Comput Graph ; 28(11): 3778-3787, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36074875

ABSTRACT

Rapidly developing Redirected Walking (RDW) technologies have enabled VR applications to immerse users in large virtual environments (VEs) while they actually walk in relatively small physical environments (PEs). When an unavoidable collision emerges in the PE, the RDW controller suspends the user's immersive experience and resets the user to a new direction in the PE. Existing RDW methods mainly aim to reduce the number of resets. However, from the perspective of user experience, reset interruptions are more disruptive when users are about to reach a point of interest (POI) in the VE. In this paper, we propose a new RDW method that aims to keep resets at a longer distance from the virtual target, as well as to reduce the number of resets. Simulation experiments and real user studies demonstrate that our method outperforms state-of-the-art RDW methods in the number of resets and dramatically increases the distance between reset locations and virtual targets.


Subject(s)
Computer Graphics , User-Computer Interface , Walking , Computer Simulation , Environment
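A POI-aware reset can be sketched as a scoring problem over candidate reset directions; the linear score, the weights, and the tuple layout below are illustrative assumptions, not the paper's formulation:

```python
def score_reset_direction(clearance, dist_to_poi, w_clear=1.0, w_poi=1.0):
    """Score a candidate reset direction: long physical clearance delays
    the next reset (fewer resets overall), while a large virtual distance
    to the point of interest keeps interruptions away from the user's goal."""
    return w_clear * clearance + w_poi * dist_to_poi

def choose_reset_direction(candidates):
    """candidates: list of (angle_deg, clearance_m, dist_to_poi_m) tuples.
    Returns the angle of the best-scoring candidate."""
    best = max(candidates, key=lambda c: score_reset_direction(c[1], c[2]))
    return best[0]
```

Tuning `w_poi` upward trades a few extra resets for resets that land farther from the virtual target, which is the trade-off the abstract argues matters for user experience.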
5.
IEEE Trans Image Process ; 29: 214-224, 2020.
Article in English | MEDLINE | ID: mdl-31331884

ABSTRACT

Compositing is one of the most important editing operations for images and videos. The process of improving the realism of composite results is often called harmonization. Previous approaches to harmonization mainly focus on images. In this paper, we take a step further and attack the problem of video harmonization. Specifically, we train a convolutional neural network in an adversarial manner, exploiting a pixel-wise disharmony discriminator to achieve more realistic harmonized results and introducing a temporal loss to increase temporal consistency between consecutive harmonized frames. Thanks to the pixel-wise disharmony discriminator, we are also able to relieve the need for input foreground masks. Since existing video datasets with ground-truth foreground masks and optical flows are not sufficiently large, we propose a simple yet efficient method to build a synthetic dataset supporting supervised training of the proposed adversarial network. Experiments show that training on our synthetic dataset generalizes well to a real-world composite dataset. In addition, our method successfully incorporates temporal consistency during training and achieves more harmonious visual results than previous methods.
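A temporal consistency loss of the kind described can be sketched as the mean-squared difference between the current harmonized frame and the previous one warped by optical flow; the nearest-neighbor warp and the function name here are simplifying assumptions (the paper's training would use a differentiable warp inside the network framework):

```python
import numpy as np

def temporal_consistency_loss(frame_t, frame_prev, flow=None):
    """MSE between the current harmonized frame and the previous
    harmonized frame warped by optical flow (H x W arrays; flow is
    H x W x 2 in pixels). With flow=None the warp degenerates to the
    identity, i.e. a static-scene assumption."""
    if flow is not None:
        h, w = frame_prev.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        # Backward warp with nearest-neighbor sampling (sketch only).
        src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
        src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
        frame_prev = frame_prev[src_y, src_x]
    return float(np.mean((frame_t - frame_prev) ** 2))
```

Added to the adversarial objective with a small weight, a term like this penalizes flicker: consecutive harmonized frames that disagree where the flow says they should agree raise the loss.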
