1.
Article in English | MEDLINE | ID: mdl-38170655

ABSTRACT

Alphanumeric and special characters are essential during text entry. Text entry in virtual reality (VR) is usually performed on a virtual Qwerty keyboard to minimize the need to learn new layouts. As such, entering capitals, symbols, and numbers in VR is often a direct migration from a physical/touchscreen Qwerty keyboard; that is, mode-switching keys are used to switch between different types of characters and symbols. However, there are inherent differences between a keyboard in VR and a physical/touchscreen keyboard, so a direct adaptation of mode switching via switch keys may not be suitable for VR. The high flexibility afforded by VR opens up more possibilities for entering alphanumeric and special characters using the Qwerty layout. In this work, we designed two controller-based raycasting text entry methods for alphanumeric and special character input (Layer-ButtonSwitch and Key-ButtonSwitch) and compared them with two other methods (Standard Qwerty Keyboard and Layer-PointSwitch) derived from physical and soft Qwerty keyboards. We explored the performance and user preference of these four methods in two user studies (one short-term and one prolonged-use), in which participants input text containing alphanumeric and special characters. Our results show that Layer-ButtonSwitch led to the highest performance, with statistically significant differences, followed by Key-ButtonSwitch and Standard Qwerty Keyboard, while Layer-PointSwitch was slowest. With continuous practice, participants' performance with Key-ButtonSwitch reached that of Layer-ButtonSwitch. Further, the results show that the key-level layout used in Key-ButtonSwitch let users perform mode-switching and character-input operations in parallel, because this layout shows all characters on one layer. We distill three recommendations from these results that can help guide the design of text entry techniques for alphanumeric and special characters in VR.
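
To make the contrast concrete, here is a minimal, illustrative sketch of layer-based mode switching of the kind the study varies: a dedicated switch action flips the active layer, and each key press emits the character from that layer. The layer contents and key names are assumptions for illustration, not the study's actual layouts.

# A minimal sketch of Layer-ButtonSwitch-style mode switching: a controller
# button cycles the active layer; raycast key presses emit characters from
# that layer. Layer contents below are illustrative assumptions.
LAYERS = [
    {"k1": "a", "k2": "b", "k3": "1"},   # lowercase layer (illustrative)
    {"k1": "A", "k2": "B", "k3": "!"},   # uppercase/symbol layer
    {"k1": "1", "k2": "2", "k3": "#"},   # number/symbol layer
]

class LayerKeyboard:
    def __init__(self):
        self.layer = 0
    def switch(self):
        """Controller button press: cycle to the next layer."""
        self.layer = (self.layer + 1) % len(LAYERS)
    def press(self, key):
        """Raycast key press: emit the character on the active layer."""
        return LAYERS[self.layer][key]

kb = LayerKeyboard()
out = [kb.press("k1")]      # 'a'
kb.switch()                 # one mode switch
out.append(kb.press("k1"))  # 'A'
print("".join(out))         # "aA"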

2.
IEEE Trans Vis Comput Graph ; 29(11): 4567-4577, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37792648

ABSTRACT

Integrated hand-tracking on modern virtual reality (VR) headsets can be readily exploited to deliver mid-air virtual input surfaces for text entry. These virtual input surfaces can closely replicate the experience of typing on a Qwerty keyboard on a physical touchscreen, thereby allowing users to leverage their pre-existing typing skills. However, the lack of passive haptic feedback, unconstrained user motion, and potential tracking inaccuracies or observability issues encountered in this interaction setting typically degrade the accuracy of user articulations. We present a comprehensive exploration of error-tolerant probabilistic hand-based input methods to support effective text input on a mid-air virtual Qwerty keyboard. Over three user studies we examine the performance potential of hand-based text input under both gesture and touch typing paradigms. We demonstrate typical entry rates in the range of 20 to 30 wpm and average peak entry rates of 40 to 45 wpm.
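
As an illustration of the general idea (not the paper's actual decoder), the sketch below ranks candidate words with a noisy-channel model: each touch point is scored against Qwerty key centres under an isotropic Gaussian noise model, and candidates are ranked by language-model prior plus touch likelihood. The key grid, noise parameter SIGMA, and the tiny LEXICON are illustrative assumptions.

# A minimal sketch of error-tolerant probabilistic touch decoding.
import math

# Illustrative key centres on a unit-spaced Qwerty grid.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEY_POS = {}
for r, row in enumerate(ROWS):
    for c, ch in enumerate(row):
        KEY_POS[ch] = (c + 0.5 * r, float(r))   # stagger rows like a keyboard

SIGMA = 0.6  # assumed touch-noise standard deviation, in key widths

def log_touch_likelihood(touch, key):
    """Log-likelihood of a 2D touch under a Gaussian centred on a key."""
    kx, ky = KEY_POS[key]
    d2 = (touch[0] - kx) ** 2 + (touch[1] - ky) ** 2
    return -d2 / (2 * SIGMA ** 2)

def decode(touches, lexicon):
    """Rank same-length lexicon words by log-prior plus touch likelihood."""
    scored = []
    for word, log_prior in lexicon.items():
        if len(word) == len(touches):
            score = log_prior + sum(log_touch_likelihood(t, ch)
                                    for t, ch in zip(touches, word))
            scored.append((score, word))
    return [w for _, w in sorted(scored, reverse=True)]

# Touch sequence that drifted onto the 'j' key while aiming for "hello":
LEXICON = {"hello": -1.0, "jello": -3.0}  # illustrative log-priors
touches = [KEY_POS[c] for c in "jello"]
print(decode(touches, LEXICON))  # the language prior rescues "hello"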

3.
IEEE Trans Vis Comput Graph ; 29(11): 4600-4610, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37782601

ABSTRACT

Conventional desktop applications provide users with hotkeys as shortcuts for triggering different functionality. In this paper we consider what constitutes an effective parallel to hotkeys in a 3D interaction space where the input modality is no longer limited to a keyboard. We propose HotGestures: a gesture-based interaction system for rapid tool selection and usage. Hand gestures are frequently used during human communication to convey information, and they provide natural associations with meaning. HotGestures provide shortcuts that let users seamlessly activate and use virtual tools by performing hand gestures, naturally complementing conventional menu interactions. We evaluate the potential of HotGestures in two user studies and observe that our gesture-based technique provides fast and effective shortcuts for tool selection and usage. Participants found HotGestures to be distinctive, fast, and easy to use while also complementing conventional menu-based interaction.
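
The core mechanic can be pictured as a gesture-to-tool registry: while a recognised gesture is held, the matching tool is active, with no menu round-trip. The sketch below is a minimal illustration under assumed gesture and tool names, not the HotGestures implementation.

# A minimal sketch of gesture-based tool shortcuts. Gesture and tool
# names are illustrative assumptions.
GESTURE_TOOLS = {"pinch": "pen", "flat_palm": "eraser", "scissors": "cutter"}

class HotGestureController:
    """While a recognised gesture is held, its tool is active; releasing
    the gesture (gesture=None) deactivates it, with no menu round-trip."""
    def __init__(self):
        self.active_tool = None
    def update(self, gesture):
        self.active_tool = GESTURE_TOOLS.get(gesture)
        return self.active_tool

ctrl = HotGestureController()
for g in ("pinch", "pinch", None, "scissors"):
    print(g, "->", ctrl.update(g))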

4.
IEEE Trans Vis Comput Graph ; 29(11): 4622-4632, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37782613

ABSTRACT

We present a fast mid-air gesture keyboard for head-mounted optical see-through augmented reality (OST AR) that lets users articulate word patterns by simply moving their physical index finger relative to a virtual keyboard plane, without needing to indirectly control a visual 2D cursor on that plane. To realize this, we introduce a novel decoding method that directly translates users' three-dimensional fingertip gestural trajectories into their intended text. We evaluate the efficacy of the system in three studies that investigate various design aspects, such as immediate efficacy, accelerated learning, and whether performance can be maintained without visual feedback. We find that the new 3D trajectory decoding design results in significant improvements in entry rates while maintaining low error rates. In addition, we demonstrate that users can maintain their performance even without fingertip and gesture-trace visualization.
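
One simple way to picture decoding 3D fingertip trajectories directly into text is template matching with dynamic time warping (DTW); this is a stand-in for illustration, not the paper's decoder. Each candidate word defines a 3D polyline through its key centres, and the observed fingertip trace is ranked against these templates. The key layout, spacing, and vocabulary below are illustrative assumptions.

# A minimal sketch of 3D gesture-keyboard decoding via DTW templates.
import numpy as np

# Assumed key centres (x, y, 0) on a virtual keyboard plane, in metres.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEYS = {}
for r, row in enumerate(ROWS):
    for c, ch in enumerate(row):
        KEYS[ch] = np.array([0.03 * (c + 0.5 * r), -0.03 * r, 0.0])

def word_template(word, samples_per_seg=8):
    """Densely resampled 3D polyline through a word's key centres."""
    pts = [KEYS[c] for c in word]
    path = []
    for a, b in zip(pts, pts[1:]):
        for t in np.linspace(0.0, 1.0, samples_per_seg, endpoint=False):
            path.append((1 - t) * a + t * b)
    path.append(pts[-1])
    return np.array(path)

def dtw(a, b):
    """Plain O(n*m) dynamic-time-warping distance between point sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def decode(trajectory, vocabulary):
    """Rank candidate words by DTW distance to the observed 3D trace."""
    return sorted(vocabulary, key=lambda w: dtw(trajectory, word_template(w)))

rng = np.random.default_rng(0)
clean = word_template("cat")
trace = clean + rng.normal(0.0, 0.002, clean.shape)  # noisy fingertip trace
print(decode(trace, ["cat", "fat", "vat"])[0])       # expect "cat"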

5.
Article in English | MEDLINE | ID: mdl-37639421

ABSTRACT

Optical see-through head-mounted displays (OST HMDs) are a popular output medium for mobile Augmented Reality (AR) applications. To date, they lack efficient text entry techniques. Smartphones are a major text entry medium in mobile contexts but attentional demands can contribute to accidents while typing on the go. Mobile multi-display ecologies, such as combined OST HMD-smartphone systems, promise performance and situation awareness benefits over single-device use. We study the joint performance of text entry on mobile phones with text output on optical see-through head-mounted displays. A series of five experiments with a total of 86 participants indicate that, as of today, the challenges in such a joint interactive system outweigh the potential benefits.

6.
IEEE Trans Vis Comput Graph ; 28(11): 3618-3628, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36048982

ABSTRACT

In this paper we examine the task of key gesture spotting: accurate and timely online recognition of hand gestures. We specifically seek to address two key challenges faced by developers when integrating key gesture spotting functionality into their applications. These are: i) achieving high accuracy and zero or negative activation lag with single-time activation; and ii) avoiding the requirement for deep domain expertise in machine learning. We address the first challenge by proposing a key gesture spotting architecture consisting of a novel gesture classifier model and a novel single-time activation algorithm. This key gesture spotting architecture was evaluated on four separate hand skeleton gesture datasets, and achieved high recognition accuracy with early detection. We address the second challenge by encapsulating different data processing and augmentation strategies, as well as the proposed key gesture spotting architecture, into a graphical user interface and an application programming interface. Two user studies demonstrate that developers are able to efficiently construct custom recognizers using both the graphical user interface and the application programming interface.
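
To illustrate the single-time activation idea (a minimal sketch, not the paper's architecture), the snippet below wraps a per-frame gesture classifier with exponential smoothing and hysteresis so each gesture fires exactly once: the activator fires when the smoothed probability crosses a high threshold and re-arms only after it falls below a lower one. The thresholds, smoothing factor, and stubbed probability stream are assumptions, and this simple scheme does not by itself demonstrate the early (zero or negative lag) detection the paper targets.

# A minimal sketch of single-time activation with hysteresis.
FIRE, RELEASE = 0.85, 0.30  # assumed activation/release thresholds

class SingleTimeActivator:
    def __init__(self, smoothing=0.5):
        self.smoothing = smoothing  # exponential smoothing factor
        self.prob = 0.0             # smoothed gesture probability
        self.armed = True           # may we fire on the next crossing?

    def step(self, frame_prob):
        """Feed one classifier output; return True exactly once per gesture."""
        self.prob = self.smoothing * self.prob + (1 - self.smoothing) * frame_prob
        if self.armed and self.prob >= FIRE:
            self.armed = False
            return True   # single-time activation
        if not self.armed and self.prob <= RELEASE:
            self.armed = True  # re-arm once the gesture has ended
        return False

# Stubbed per-frame probabilities for one gesture followed by rest.
stream = [0.1, 0.4, 0.9, 0.95, 0.97, 0.9, 0.5, 0.2, 0.1]
act = SingleTimeActivator()
print([act.step(p) for p in stream])  # True appears exactly once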


Subject(s)
Augmented Reality , Gestures , Pattern Recognition, Automated , Computer Graphics , Algorithms , Hand
7.
IEEE Trans Vis Comput Graph ; 28(11): 3810-3820, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36044497

ABSTRACT

Virtual Reality (VR) provides new possibilities for modern knowledge work. However, the potential advantages of virtual work environments can only be realized if it is feasible to work in them for extended periods. To date, there have been few studies of the long-term effects of working in VR. This paper addresses the need for understanding such long-term effects. Specifically, we report on a comparative study in which participants worked in VR for an entire week (five days, eight hours each day) as well as in a baseline physical desktop environment. This study aims to quantify the effects of exchanging a desktop-based work environment for a VR-based one. Hence, during this study, we did not present participants with the best possible VR system but rather a setup delivering an experience comparable to working in the physical desktop environment. The study reveals that, as expected, VR results in significantly worse ratings across most measures. Among other results, we found concerning levels of simulator sickness and below-average usability ratings, and two participants dropped out on the first day of VR use due to migraine, nausea, and anxiety. Nevertheless, there is some indication that participants gradually overcame negative first impressions and initial discomfort. Overall, this study helps lay the groundwork for subsequent research by clearly highlighting current shortcomings and identifying opportunities for improving the experience of working in VR.


Subject(s)
Computer Graphics , Virtual Reality , Humans , User-Computer Interface
8.
IEEE Trans Vis Comput Graph ; 28(5): 2069-2079, 2022 May.
Article in English | MEDLINE | ID: mdl-35167458

ABSTRACT

Virtual Reality (VR) has the potential to support mobile knowledge workers by complementing traditional input devices with a large three-dimensional output space and spatial input. Previous research on supporting VR knowledge work explored domains such as text entry using physical keyboards and spreadsheet interaction using combined pen and touch input. Inspired by such work, this paper probes the VR design space for authoring presentations in mobile settings. We propose PoVRPoint, a set of tools coupling pen- and touch-based editing of presentations on mobile devices, such as tablets, with the interaction capabilities afforded by VR. We study the utility of the extended display space to, for example, assist users in identifying target slides, manipulating objects spatially on a slide, creating animations, and arranging multiple, possibly occluded, shapes or objects. Among other things, our results indicate that 1) the wide field of view afforded by VR results in significantly faster target-slide identification than a tablet-only interface for visually salient targets; and 2) the three-dimensional view in VR enables significantly faster object reordering in the presence of occlusion than two baseline interfaces. A user study further confirmed that the interaction techniques were usable and enjoyable.


Subject(s)
User-Computer Interface , Virtual Reality , Computer Graphics , Humans , Touch
9.
IEEE Comput Graph Appl ; 41(6): 143-151, 2021.
Article in English | MEDLINE | ID: mdl-34890314

ABSTRACT

Recent advancements in virtual reality (VR) may help unlock the full potential of 3-D photorealistic models generated using state-of-the-art photogrammetric methods. Using VR to carry out analyses on photogrammetric models has the potential to assist the user in performing basic offline engineering inspection of digital twins: digitized representations of real-world objects and structures. However, for such benefits to materialize, it is necessary to create suitable interactive systems for working with photogrammetric models in VR. To this end, this article presents PhotoTwinVR, an immersive gesture-controlled system for manipulating and inspecting 3-D photogrammetric models of physical objects in VR. An observational study with three domain experts validates the feasibility of the system design for practical use cases involving offline inspections of pipelines and other 3-D structures.


Subject(s)
Virtual Reality , Photogrammetry
10.
IEEE Trans Vis Comput Graph ; 27(11): 4140-4149, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34449380

ABSTRACT

Accurately modelling user behaviour has the potential to significantly improve the quality of human-computer interaction. Traditionally, these models are carefully hand-crafted to approximate specific aspects of well-documented user behaviour. This limits their availability in virtual and augmented reality where user behaviour is often not yet well understood. Recent efforts have demonstrated that reinforcement learning can approximate human behaviour during simple goal-oriented reaching tasks. We build on these efforts and demonstrate that reinforcement learning can also approximate user behaviour in a complex mid-air interaction task: typing on a virtual keyboard. We present the first reinforcement learning-based user model for mid-air and surface-aligned typing on a virtual keyboard. Our model is shown to replicate high-level human typing behaviour. We demonstrate that this approach may be used to augment or replace human testing during the validation and development of virtual keyboards.
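
A minimal sketch of the underlying idea, replacing a hand-crafted user model with a learned policy, is shown below: a tabular Q-learning agent learns to steer a fingertip along a one-row toy keyboard and press the target key. The state space, reward scheme, and hyperparameters are illustrative assumptions, far simpler than the paper's mid-air setting.

# A minimal sketch of a reinforcement-learning typist on a toy keyboard.
import random

KEYS = 10                  # positions on a one-row toy keyboard
ACTIONS = [-1, +1, 0]      # move left, move right, press
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2  # assumed learning hyperparameters

Q = {}  # maps ((finger, target), action_index) -> value

def q(s, a):
    return Q.get((s, a), 0.0)

def step(finger, target, a):
    """Toy environment: move the fingertip or press the current key."""
    if ACTIONS[a] == 0:                            # press
        return finger, (1.0 if finger == target else -1.0), True
    finger = min(max(finger + ACTIONS[a], 0), KEYS - 1)
    return finger, -0.05, False                    # small cost per movement

for _ in range(5000):
    finger, target = random.randrange(KEYS), random.randrange(KEYS)
    done = False
    while not done:
        s = (finger, target)
        a = (random.randrange(3) if random.random() < EPS
             else max(range(3), key=lambda x: q(s, x)))
        finger, r, done = step(finger, target, a)
        best_next = 0.0 if done else max(q((finger, target), x) for x in range(3))
        Q[(s, a)] = q(s, a) + ALPHA * (r + GAMMA * best_next - q(s, a))

# The learned policy should move right from key 0 toward target key 7.
print(max(range(3), key=lambda x: q((0, 7), x)))   # expect 1 (move right)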


Subject(s)
Computer Graphics , User-Computer Interface , Equipment Design , Humans , Learning , Motivation
11.
Article in English | MEDLINE | ID: mdl-33017290

ABSTRACT

Virtual Reality (VR) has the potential to transform knowledge work. One advantage of VR knowledge work is that it allows extending 2D displays into the third dimension, enabling new operations such as selecting overlapping objects or displaying additional layers of information. On the other hand, mobile knowledge workers often work on established mobile devices, such as tablets, limiting interaction to a small input space. This challenge of a constrained input space is intensified when VR knowledge work takes place in cramped environments, such as airplanes and touchdown spaces. In this paper, we investigate the feasibility of joint interaction between an immersive VR head-mounted display and a tablet in the context of knowledge work. Specifically, we 1) design, implement and study how to interact with information that reaches beyond a single physical touchscreen in VR; 2) design and evaluate a set of interaction concepts; and 3) build example applications and gather user feedback on those applications.

12.
IEEE Trans Vis Comput Graph ; 25(11): 3190-3201, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31403423

ABSTRACT

Physical keyboards are common peripherals for personal computers and are efficient standard text entry devices. Recent research has investigated how physical keyboards can be used in immersive head-mounted-display-based Virtual Reality (VR). So far, the physical layout of keyboards has typically been transplanted into VR to replicate the typing experience of a standard desktop environment. In this paper, we explore how to fully leverage the immersiveness of VR to change the input and output characteristics of physical keyboard interaction within a VR environment. This allows individual physical keys to be reconfigured to the same or different actions, and visual output to be distributed in various ways across the VR representation of the keyboard. We explore a set of input and output mappings for reconfiguring the virtual presentation of physical keyboards and probe the resulting design space by designing, implementing and evaluating nine VR-relevant applications: emojis, languages and special characters, application shortcuts, virtual text-processing macros, a window manager, a photo browser, a whack-a-mole game, secure password entry and a virtual touch bar. We investigate the feasibility of the applications in a user study with 20 participants and find that, among other things, they are usable in VR. We discuss the limitations and possibilities of remapping the input and output characteristics of physical keyboards in VR based on our empirical findings and analysis, and suggest future research directions in this area.
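
The input/output remapping concept can be sketched with two per-application tables: one translating raw keycodes into app-specific actions, the other deciding what the VR model of the keyboard should draw on each key. Application names, actions, and labels below are illustrative assumptions.

# A minimal sketch of per-application key remapping for a VR keyboard.
# Per-application remaps: raw keycode -> (emitted action, VR key label).
REMAPS = {
    "emoji":     {"q": ("emoji:smile", "SMILE"), "w": ("emoji:heart", "HEART")},
    "shortcuts": {"q": ("edit.copy", "COPY"),    "w": ("edit.paste", "PASTE")},
}

def handle_key(app, keycode):
    """Translate a physical key press into the app-specific action; keys
    without a remap keep their ordinary character behaviour."""
    action, _label = REMAPS.get(app, {}).get(keycode, (keycode, keycode.upper()))
    return action

def key_label(app, keycode):
    """What the VR representation of the keyboard should draw on this key."""
    return REMAPS.get(app, {}).get(keycode, (keycode, keycode.upper()))[1]

print(handle_key("emoji", "q"), key_label("shortcuts", "w"))  # emoji:smile PASTE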

13.
IEEE Trans Pattern Anal Mach Intell ; 41(11): 2756-2769, 2019 Nov.
Article in English | MEDLINE | ID: mdl-30130177

ABSTRACT

Ticker is a probabilistic stereophonic single-switch text entry method for visually-impaired users with motor disabilities who rely on single-switch scanning systems to communicate. Such scanning systems are sensitive to a variety of noise sources, which are inevitably introduced in practical use of single-switch systems. Ticker uses a novel interaction model based on stereophonic sound coupled with statistical models for robust inference of the user's intended text in the presence of noise. As a consequence of its design, Ticker is resilient to noise and therefore a practical solution for single-switch scanning systems. Ticker's performance is validated using a combination of simulations and empirical user studies.
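
As a toy illustration of this kind of noise-robust inference (an assumed model, not Ticker's published one), the sketch below maintains a posterior over intended letters and updates it from noisy click times: each click's likelihood is a Gaussian around a letter's announcement time plus a uniform floor that absorbs spurious clicks. Alphabet, timing, and noise rates are illustrative.

# A minimal sketch of Bayesian single-switch scanning under timing noise.
import math

LETTERS = "abcde"      # toy alphabet scanned in a fixed order
SCAN_PERIOD = 1.0      # assumed seconds between letter announcements
SIGMA = 0.3            # assumed click-timing jitter (seconds)
P_SPURIOUS = 0.05      # assumed probability mass for spurious clicks

def click_likelihood(click_t, letter_index):
    """Gaussian around the letter's announcement time, with a uniform
    floor so that false or mistimed clicks never zero out a hypothesis."""
    mu = letter_index * SCAN_PERIOD
    gauss = math.exp(-(click_t - mu) ** 2 / (2 * SIGMA ** 2))
    return (1 - P_SPURIOUS) * gauss + P_SPURIOUS

def update(posterior, click_t):
    """One Bayesian update of the letter posterior from one click time."""
    post = [p * click_likelihood(click_t, i) for i, p in enumerate(posterior)]
    z = sum(post)
    return [p / z for p in post]

posterior = [1.0 / len(LETTERS)] * len(LETTERS)
for t in (2.1, 1.9):                   # two noisy clicks near letter 'c'
    posterior = update(posterior, t)
print(LETTERS[posterior.index(max(posterior))])  # expect 'c'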


Subject(s)
Communication Aids for Disabled , Software , Visually Impaired Persons , Acoustic Stimulation , Algorithms , Bayes Theorem , Computer Simulation , Humans , Motor Disorders , User-Computer Interface
14.
IEEE Comput Graph Appl ; 38(6): 125-133, 2018.
Article in English | MEDLINE | ID: mdl-30668459

ABSTRACT

Virtual reality has the potential to change the way we work. We envision future office workers being able to work productively anywhere using only portable standard input devices and immersive head-mounted displays. Virtual reality can enable this by allowing users to create working environments of their choice and by freeing them from physical-world limitations such as constrained space or noisy environments. In this paper, we investigate opportunities and challenges for realizing this vision and discuss implications of recent findings on text entry in virtual reality as a core office task.

15.
IEEE Trans Vis Comput Graph ; 15(4): 696-702, 2009.
Article in English | MEDLINE | ID: mdl-19423892

ABSTRACT

The space-time cube is an information visualization technique in which spatiotemporal data points are mapped into a cube. Information visualization researchers have previously argued that the space-time cube representation is beneficial in revealing complex spatiotemporal patterns in a data set to users. The argument is based on the fact that both temporal and spatial information are displayed simultaneously, an effect difficult to achieve with other representations. However, to our knowledge the actual usefulness of the space-time cube in conveying complex spatiotemporal patterns to users has not been empirically validated. To fill this gap, we report on a between-subjects experiment comparing novice users' error rates and response times when answering a set of questions using either a space-time cube or a baseline 2D representation. For some simple questions, the error rates were lower with the baseline representation. For complex questions, where participants needed an overall understanding of the spatiotemporal structure of the data set, the space-time cube yielded response times that were on average twice as fast, with no difference in error rates compared to the baseline. These results provide an empirical foundation for the hypothesis that the space-time cube representation benefits users analyzing complex spatiotemporal patterns.
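
The representation itself is straightforward to state in code: spatial coordinates keep two axes of a unit cube and time is mapped to the third, so a moving object's track becomes a 3D polyline. The sketch below is a minimal illustration with an assumed sample trajectory.

# A minimal sketch of the space-time cube mapping.
def space_time_cube(points):
    """Map (x, y, t) records into the unit cube with time on the z-axis."""
    xs, ys, ts = zip(*points)
    def norm(vals):
        lo, hi = min(vals), max(vals)
        return [(v - lo) / ((hi - lo) or 1.0) for v in vals]
    return list(zip(norm(xs), norm(ys), norm(ts)))

# A point moving east while time advances becomes a rising diagonal.
track = [(0.0, 0.0, 0.0), (10.0, 0.0, 60.0), (20.0, 0.0, 120.0)]
print(space_time_cube(track))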
