Results 1 - 9 of 9
1.
IEEE Trans Vis Comput Graph ; 29(12): 5511-5522, 2023 Dec.
Article in English | MEDLINE | ID: mdl-36279345

ABSTRACT

Image-warping, a per-pixel deformation of one image into another, is an essential component in immersive visual experiences such as virtual reality or augmented reality. The primary issue with image warping is disocclusions, where occluded (and hence unknown) parts of the input image would be required to compose the output image. We introduce a new image warping method, Metameric image inpainting - an approach for hole-filling in real-time with foundations in human visual perception. Our method estimates image feature statistics of disoccluded regions from their neighbours. These statistics are inpainted and used to synthesise visuals in real-time that are less noticeable to study participants, particularly in peripheral vision. Our method offers speed improvements over the standard structured image inpainting methods while improving realism over colour-based inpainting such as push-pull. Hence, our work paves the way towards future applications such as depth image-based rendering, 6-DoF 360 rendering, and remote render-streaming.
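The push-pull colour inpainting the abstract uses as a baseline can be sketched briefly. The following is an illustrative NumPy version (assuming a square, power-of-two image and a binary validity mask), not the authors' implementation: valid pixels are averaged into coarser levels ("push"), then coarse colours are copied back down into the holes ("pull").

```python
import numpy as np

def push_pull_fill(image, mask):
    """Fill hole pixels (mask == 0) with a push-pull pyramid:
    'push' averages valid pixels into coarser levels, 'pull'
    copies coarse colours back down into the holes."""
    img = image * mask[..., None]  # weighted colour sums
    wgt = mask.astype(float)       # per-pixel validity weights
    pyramid = [(img.copy(), wgt.copy())]
    # Push: halve resolution, summing colours and weights.
    while pyramid[-1][1].shape[0] > 1:
        i, w = pyramid[-1]
        i2 = i[0::2, 0::2] + i[0::2, 1::2] + i[1::2, 0::2] + i[1::2, 1::2]
        w2 = w[0::2, 0::2] + w[0::2, 1::2] + w[1::2, 0::2] + w[1::2, 1::2]
        pyramid.append((i2, w2))
    # Pull: from coarse to fine, fill holes from the parent level.
    for lvl in range(len(pyramid) - 2, -1, -1):
        i, w = pyramid[lvl]
        pi, pw = pyramid[lvl + 1]
        pc = pi / np.maximum(pw, 1e-9)[..., None]   # parent colours
        up = np.repeat(np.repeat(pc, 2, axis=0), 2, axis=1)
        hole = w == 0
        i[hole] = up[hole]
        w[hole] = 1.0
    return pyramid[0][0]

# A flat grey image with one missing pixel is restored to grey.
img = np.full((4, 4, 3), 0.5)
mask = np.ones((4, 4))
mask[0, 0] = 0
filled = push_pull_fill(img, mask)
```

Because it only propagates colour averages, push-pull is fast but blurs texture into the hole, which is the realism gap the metameric approach addresses by inpainting feature statistics instead.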

2.
IEEE Comput Graph Appl ; 42(6): 116-122, 2022.
Article in English | MEDLINE | ID: mdl-37015713

ABSTRACT

We share our experiences of teaching virtual reality with Ubiq, an open-source system for building social virtual reality (VR). VR as a subject touches on many areas, including perception, human-computer interaction, and psychology. In our virtual environments (VE) module, we consider all aspects of VR. In recent years, networked VR, and in particular social VR, has become increasingly relevant, at the same time as demand for online and hybrid teaching has increased. Commercial social virtual reality systems have proliferated, but for a number of reasons this has not resulted in systems any more suitable for research and teaching. As a result, we created Ubiq, a system for building social VR applications designed first for research and teaching. In this article, we describe how Ubiq came to be, and our experiences of using it in our virtual environments module over the last two years.

3.
IEEE Trans Vis Comput Graph ; 28(9): 3138-3153, 2022 Sep.
Article in English | MEDLINE | ID: mdl-33465027

ABSTRACT

Distributed virtual environments (DVEs) are challenging to create as the goals of consistency and responsiveness become contradictory under increasing latency. DVEs have been considered as both distributed transactional databases and force-reflection systems. Both are good approaches, but they do have drawbacks. Transactional systems do not support Level 3 (L3) collaboration: manipulating the same degree-of-freedom at the same time. Force-reflection requires a client-server architecture and stabilisation techniques. With Consensus Based Networking (CBN), we suggest DVEs be considered as a distributed data-fusion problem. Many simulations run in parallel and exchange their states, with remote states integrated with continuous authority. Over time the exchanges average out local differences, performing a distributed average of a consistent, shared state. CBN aims to build simulations that are highly responsive, but consistent enough for use cases such as the piano-movers problem. CBN's support for heterogeneous nodes can transparently couple different input methods, avoid the requirement of determinism, and provide more options for personal control over the shared experience. Our work is early; however, we demonstrate many successes, including L3 collaboration in room-scale VR, thousands of interacting objects, complex configurations such as stacking, and transparent coupling of haptic devices. These have been shown before, but each with a different technique; CBN supports them all within a single, unified system.
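The distributed-averaging idea can be illustrated with a toy sketch. The update rule and constants below are assumptions for illustration, not the paper's actual integration scheme: each node runs its own simulation, and each exchange round nudges every node's state partway toward the fused remote estimate, so local differences average out over time while each node stays responsive.

```python
import numpy as np

def consensus_step(states, alpha=0.2):
    """One exchange round: every node moves a fraction alpha of the
    way toward the mean of all exchanged states. The partial step
    keeps local simulations responsive; repetition yields consensus."""
    mean = states.mean(axis=0)              # fused remote estimate
    return states + alpha * (mean - states)

# Three nodes simulating the same object have drifted apart.
states = np.array([[0.0], [1.0], [4.0]])
for _ in range(50):
    states = consensus_step(states)
# The exchanges converge on the distributed average of the initial states.
```

Because every node applies the same symmetric rule, no node needs authority over the shared degree-of-freedom, which is what permits L3 collaboration in this framing.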

4.
IEEE Trans Vis Comput Graph ; 27(5): 2691-2701, 2021 May.
Article in English | MEDLINE | ID: mdl-33750697

ABSTRACT

Mobile HMDs must sacrifice compute performance to achieve ergonomic and power requirements for extended use. Consequently, applications must either reduce rendering and simulation complexity - along with the richness of the experience - or offload complexity to a server. Within the context of edge computing, a popular way to do this is through render streaming. Render streaming has been demonstrated for desktops and consoles. It has also been explored for HMDs. However, the latency requirements of head tracking make this application much more challenging. While mobile GPUs are not yet as capable as their desktop counterparts, we note that they are becoming more powerful and efficient. With the hard requirements of VR, it is worth continuing to investigate what schemes could optimally balance load, latency and quality. We propose an alternative we call edge-physics: streaming at the scene-graph level from a simulation running on edge-resources, analogous to cluster rendering. Scene streaming is not only straightforward, but also compute- and bandwidth-efficient. The most demanding loops run locally. Jobs that hit the power-wall of mobile CPUs are off-loaded, while improving GPUs are leveraged, maximising compute utilisation. In this paper, we create a prototypical implementation and evaluate its potential in terms of fidelity, bandwidth and performance. We show that an effective system which maintains high consistency on typical edge-links can be easily built, but that some traditional concepts are not applicable, and a better understanding of the perception of motion is required to evaluate such a system comprehensively.
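Scene-graph-level streaming can be sketched minimally. The packet format and names below are hypothetical, not the paper's protocol: the edge server runs the simulation and streams per-node transforms, while the client keeps a local scene graph, applies whatever arrives, and renders locally so the latency-critical tracking-to-display loop never leaves the headset.

```python
import json

def encode_update(node_id, position, orientation):
    """Serialize one scene-graph node's pose (illustrative format)."""
    return json.dumps({"id": node_id, "p": position, "q": orientation})

class ClientScene:
    """Client-side scene graph: applies streamed poses; rendering
    against these poses happens locally, at the display rate."""
    def __init__(self):
        self.nodes = {}

    def apply(self, packet):
        u = json.loads(packet)
        self.nodes[u["id"]] = (u["p"], u["q"])

scene = ClientScene()
scene.apply(encode_update("ball", [0.0, 1.0, 2.0], [0, 0, 0, 1]))
```

Unlike pixel streaming, the bandwidth here scales with the number of moving nodes rather than the display resolution, which is one reason the abstract calls the approach bandwidth-efficient.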

5.
IEEE Comput Graph Appl ; 40(3): 94-104, 2020.
Article in English | MEDLINE | ID: mdl-32356731

ABSTRACT

Recent years have seen a resurgence of virtual reality (VR), sparked by the repurposing of low-cost COTS components. VR aims to generate stimuli that appear to come from a source other than the interface through which they are delivered. The synthetic stimuli replace real-world stimuli, and transport the user to another, perhaps imaginary, "place." To do this, we must overcome many challenges, often related to matching the synthetic stimuli to the expectations and behavior of the real world. One way in which the stimuli can fail is latency - the time between a user's action and the computer's response. We constructed a novel VR renderer that optimized latency above all else. Our prototype allowed us to explore how latency affects human-computer interaction. We had to completely reconsider the interaction between time, space, and synchronization on displays and in the traditional graphics pipeline. Using a specialized architecture - dataflow computing - we combined consumer, industrial, and prototype components to create an integrated 1:1 room-scale VR system with a latency of under 3 ms. While this was prototype hardware, the considerations in achieving this performance inform the design of future VR pipelines, and our human factors studies have provided new and sometimes surprising contributions to the body of knowledge on latency in HCI.

6.
IEEE Trans Vis Comput Graph ; 25(8): 2611-2622, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30047884

ABSTRACT

Many techniques facilitate real-time collision detection against complex models. These typically work by pre-computing information about the spatial distribution of geometry into a form that can be quickly queried. When models deform, though, expensive pre-computations are impractical. We present radial fields: a variant of distance fields parameterised in cylindrical space, rather than Cartesian space. This 2D parameterisation significantly reduces the memory and computation requirements of the field, while introducing minimal overhead in collision detection tests. The interior of the mesh is defined implicitly for the entire domain. Importantly, it maps well to the hardware rasteriser of the GPU. Radial fields are much more application-specific than traditional distance fields. For these applications - such as collision detection with articulated characters - however, the benefits are substantial.
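The core query of a radial field can be sketched as follows. The resolution, bounds, and constant-radius field are illustrative assumptions (a real field would be rasterised from the deforming mesh each frame): the field stores, for each (angle, height) bin around the shape's axis, the surface radius, and a point is inside if its radial distance is within the stored radius.

```python
import numpy as np

# 2D radial field for a cylinder-like shape (e.g. a limb segment):
# field[theta_bin, z_bin] holds the surface radius at that direction/height.
N_THETA, N_Z = 32, 16
Z_MIN, Z_MAX = 0.0, 1.0
field = np.full((N_THETA, N_Z), 0.3)  # constant radius 0.3 for this sketch

def inside(p):
    """Collision test: map the point to cylindrical coordinates and
    compare its radial distance with the stored surface radius."""
    x, y, z = p
    if not (Z_MIN <= z <= Z_MAX):
        return False                       # outside the field's domain
    r = np.hypot(x, y)
    theta = np.arctan2(y, x) % (2 * np.pi)
    ti = int(theta / (2 * np.pi) * N_THETA) % N_THETA
    zi = min(int((z - Z_MIN) / (Z_MAX - Z_MIN) * N_Z), N_Z - 1)
    return bool(r <= field[ti, zi])
```

Storing one radius per (angle, height) bin is what reduces a 3D distance field to a 2D table, and it is also why the representation only suits shapes that are star-convex about their axis, such as articulated-character limbs.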

7.
IEEE Trans Vis Comput Graph ; 22(5): 1605-15, 2016 May.
Article in English | MEDLINE | ID: mdl-27045915

ABSTRACT

Latency is detrimental to interactive systems, especially pseudo-physical systems that emulate real-world behaviour. It prevents users from making quick corrections to their movement, and causes their experience to deviate from their expectations. Latency is a result of the processing and transport delays inherent in current computer systems. As such, while a number of studies have hypothesized that any latency will have a degrading effect, few have been able to test this for latencies less than ∼ 50 ms. In this study we investigate the effects of latency on pointing and steering tasks. We design an apparatus with a latency lower than typical interactive systems, using it to perform interaction tasks based on Fitts's law and the Steering law. We find evidence that latency begins to affect performance at ∼ 16 ms, and that the effect is non-linear. Further, we find latency does not affect the various components of an aiming motion equally. We propose a three stage characterisation of pointing movements with each stage affected independently by latency. We suggest that understanding how users execute movement is essential for studying latency at low levels, as high level metrics such as total movement time may be misleading.
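The pointing tasks are based on Fitts's law, which predicts movement time from an index of difficulty combining target distance and width. A sketch using the standard Shannon formulation, with hypothetical observations (the data below are illustrative, not the study's measurements):

```python
import numpy as np

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts's index of difficulty, in bits."""
    return np.log2(distance / width + 1)

# Hypothetical (distance, width, movement-time) observations: mm and seconds.
D = np.array([64.0, 128.0, 256.0, 512.0])
W = np.array([16.0, 16.0, 16.0, 16.0])
MT = np.array([0.45, 0.55, 0.65, 0.75])

ID = index_of_difficulty(D, W)
# Fit the linear model MT = a + b * ID by least squares.
b, a = np.polyfit(ID, MT, 1)
```

The study's point is precisely that summary fits like this can mislead at low latencies: total movement time aggregates ballistic and corrective phases that latency affects differently.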

8.
IEEE Trans Vis Comput Graph ; 22(4): 1377-86, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26780798

ABSTRACT

Latency - the delay between a user's action and the response to this action - is known to be detrimental to virtual reality. Latency is typically considered to be a discrete value characterising a delay, constant in time and space - but this characterisation is incomplete. Latency changes across the display during scan-out, and how it does so is dependent on the rendering approach used. In this study, we present an ultra-low latency real-time ray-casting renderer for virtual reality, implemented on an FPGA. Our renderer has a latency of ~1 ms from 'tracker to pixel'. Its frameless nature means that the region of the display with the lowest latency immediately follows the scan-beam. This is in contrast to frame-based systems such as those using typical GPUs, for which the latency increases as scan-out proceeds. Using a series of high and low speed videos of our system in use, we confirm its latency of ~1 ms. We examine how the renderer performs when driving a traditional sequential scan-out display on a readily available HMD, the Oculus Rift DK2. We contrast this with an equivalent apparatus built using a GPU. Using captured human head motion and a set of image quality measures, we assess the ability of these systems to faithfully recreate the stimuli of an ideal virtual reality system - one with a zero latency tracker, renderer and display running at 1 kHz. Finally, we examine the results of these quality measures, and how each rendering approach is affected by velocity of movement and display persistence. We find that our system, with a lower average latency, can more faithfully draw what the ideal virtual reality system would. Further, we find that with low display persistence, the sensitivity to velocity of both systems is lowered, but that it is much lower for ours.
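The way latency varies across the display during scan-out can be illustrated with a toy model. All timings below are illustrative assumptions, not the paper's measurements: a frame-based renderer samples the tracker once per frame, so pixels scanned out later carry progressively staler poses, while a frameless renderer re-samples just ahead of the beam.

```python
# Toy per-row latency model: frame-based vs frameless (beam-following).
ROWS = 1080
REFRESH_HZ = 75.0
frame_time_ms = 1000.0 / REFRESH_HZ
line_time_ms = frame_time_ms / ROWS
RENDER_MS = 5.0  # frame-based: whole frame rendered before scan-out begins
RACE_MS = 1.0    # frameless: pixels produced just ahead of the scan-beam

def frame_based_latency(row):
    # One tracker sample per frame: staleness grows as scan-out proceeds.
    return RENDER_MS + row * line_time_ms

def frameless_latency(row):
    # Tracker re-sampled for the region the beam is about to reach.
    return RACE_MS
```

In this model the bottom rows of the frame-based display lag by nearly a full refresh period on top of the render time, matching the abstract's observation that a single latency number undersells the difference between the two approaches.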

9.
IEEE Trans Vis Comput Graph ; 20(4): 616-25, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24650989

ABSTRACT

Latency of interactive computer systems is a product of the processing, transport and synchronisation delays inherent to the components that create them. In a virtual environment (VE) system, latency is known to be detrimental to a user's sense of immersion, physical performance and comfort level. Accurately measuring the latency of a VE system for study or optimisation is not straightforward. A number of authors have developed techniques for characterising latency, which have become progressively more accessible and easier to use. In this paper, we characterise these techniques. We describe a simple mechanical simulator designed to simulate a VE with various amounts of latency that can be finely controlled (to within 3 ms). We develop a new latency measurement technique called Automated Frame Counting to assist in assessing latency using high speed video (to within 1 ms). We use the mechanical simulator to measure the accuracy of Steed's and Di Luca's measurement techniques, proposing improvements where they may be made. We use the methods to measure the latency of a number of interactive systems that may be of interest to the VE engineer, with a significant level of confidence. All techniques were found to be highly capable; however, Steed's method is both accurate and easy to use without requiring specialised hardware.
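Frame-counting measurements work by counting high-speed-video frames between a physical event and its on-screen response. A sketch of automating this, using cross-correlation of per-frame signals to find the best-aligning lag (the alignment method and camera rate here are assumptions for illustration, not necessarily the paper's procedure):

```python
import numpy as np

def latency_frames(stimulus, response):
    """Find the frame lag that best aligns the on-screen response with
    the physical stimulus by cross-correlating the two per-frame
    signals (e.g. brightness at a tracked marker and on the display)."""
    s = stimulus - stimulus.mean()
    r = response - response.mean()
    corr = np.correlate(r, s, mode="full")
    return np.argmax(corr) - (len(s) - 1)  # positive lag: response later

FPS = 240  # illustrative high-speed camera frame rate
stim = np.zeros(100)
stim[10:20] = 1.0          # physical event visible in frames 10-19
resp = np.zeros(100)
resp[16:26] = 1.0          # on-screen response, delayed by 6 frames
lag = latency_frames(stim, resp)
latency_ms = lag / FPS * 1000.0
```

The resolution of any frame-counting method is bounded by the camera's frame period (here about 4.2 ms per frame), which is why a sufficiently fast camera is needed to resolve latencies to within 1 ms.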


Subjects
Computer Graphics/instrumentation; Photic Stimulation/instrumentation; Signal Processing, Computer-Assisted/instrumentation; User-Computer Interface; Video Recording/instrumentation; Equipment Design; Equipment Failure Analysis; Information Storage and Retrieval/methods