Results 1 - 12 of 12

1.
Article in English | MEDLINE | ID: mdl-37200130

ABSTRACT

Physical walking is often considered the gold standard for VR travel whenever feasible. However, limited free-space walking areas in the real world do not allow larger-scale virtual environments to be explored by actual walking. Users therefore often require handheld controllers for navigation, which can reduce believability, interfere with simultaneous interaction tasks, and exacerbate adverse effects such as motion sickness and disorientation. To investigate alternative locomotion options, we compared a handheld controller (thumbstick-based) and physical walking against two leaning-based locomotion interfaces, one seated (HeadJoystick) and one standing/stepping (NaviBoard), in which seated or standing users travel by moving their head toward the target direction. Rotations were always performed physically. To compare these interfaces, we designed a novel simultaneous locomotion and object-interaction task in which users had to keep touching the center of upward-moving target balloons with their virtual lightsaber while simultaneously staying inside a horizontally moving enclosure. Walking yielded the best locomotion, interaction, and combined performance, while the controller performed worst. The leaning-based interfaces improved user experience and performance compared to the controller, especially when standing/stepping with NaviBoard, but did not reach walking-level performance. That is, the leaning-based interfaces HeadJoystick (sitting) and NaviBoard (standing), which provided additional physical self-motion cues compared to the controller, improved enjoyment, preference, spatial presence, and vection intensity, reduced motion sickness, and improved performance for locomotion, object interaction, and the combined task. Our results also showed that less embodied interfaces (in particular the controller) caused a more pronounced performance deterioration as locomotion speed increased. Moreover, the observed differences between the interfaces were not affected by repeated interface usage.
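
Leaning-based interfaces of this kind map the user's head offset from a calibrated neutral pose to a virtual travel velocity. As a rough sketch of such a transfer function (the deadzone, saturation offset, gain, and power curve below are our own illustrative assumptions, not parameters from the paper):

```python
import math

def lean_to_velocity(head_offset_m, deadzone_m=0.02, max_offset_m=0.25,
                     max_speed_ms=5.0, exponent=1.5):
    """Map a 2D head offset from the calibrated neutral pose (metres)
    to a horizontal travel velocity (m/s). All parameters are
    illustrative assumptions, not values from the paper."""
    x, z = head_offset_m
    magnitude = math.hypot(x, z)
    if magnitude < deadzone_m:   # ignore ordinary postural sway
        return (0.0, 0.0)
    # normalise the offset beyond the deadzone into [0, 1],
    # then shape it with a power curve for fine control at low speeds
    t = min((magnitude - deadzone_m) / (max_offset_m - deadzone_m), 1.0)
    speed = max_speed_ms * t ** exponent
    return (speed * x / magnitude, speed * z / magnitude)

# leaning 10 cm forward and 5 cm to the side
print(lean_to_velocity((0.10, 0.05)))
```

A deadzone of this kind is a common way to keep ordinary postural sway from producing unwanted virtual motion.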

2.
IEEE Trans Vis Comput Graph; 29(3): 1748-1768, 2023 Mar.
Article in English | MEDLINE | ID: mdl-34847032

ABSTRACT

Using standard handheld interfaces for VR locomotion may not provide a believable self-motion experience and can contribute to unwanted side effects such as motion sickness, disorientation, or increased cognitive load. This paper demonstrates how using a seated leaning-based locomotion interface (HeadJoystick) for VR ground-based navigation affects user experience, usability, and performance. In three within-subject studies, we compared a controller (touchpad/thumbstick) with a more embodied interface (HeadJoystick) in which users moved their head and/or leaned in the direction of desired locomotion. In both conditions, users sat on a regular office chair and rotated it physically to control virtual rotations. In the first study, 24 participants used HeadJoystick versus the controller in three complementary tasks: reach-the-target, follow-the-path, and racing (dynamic obstacle avoidance). In the second study, 18 participants repeatedly used HeadJoystick versus the controller (eight one-minute trials each) in a reach-the-target task. To evaluate the potential benefits of different brake mechanisms, in the third study 18 participants were asked to stop within each target area for one second. All three studies consistently showed advantages of HeadJoystick over the controller: we observed improved performance in all tasks, as well as higher user ratings for enjoyment, spatial presence, immersion, vection intensity, usability, ease of learning, ease of use, and rated potential for daily and long-term use, together with reduced motion sickness and task load. Overall, our results suggest that leaning-based interfaces such as HeadJoystick provide an interesting and more embodied alternative to handheld interfaces in driving, reach-the-target, and follow-the-path tasks, and potentially in a wider range of scenarios.


Subject(s)
Motion Sickness, Virtual Reality, Humans, Computer Graphics, Locomotion, Motion Sickness/prevention & control, User-Computer Interface
3.
IEEE Trans Vis Comput Graph; 29(12): 5265-5281, 2023 Dec.
Article in English | MEDLINE | ID: mdl-36112551

ABSTRACT

Continuous locomotion in VR provides uninterrupted optical flow, which mimics real-world locomotion and supports path integration. However, optical flow limits the maximum speed and acceleration that can be used effectively without inducing cybersickness. In contrast, teleportation provides neither optical flow nor acceleration cues, and users can jump any distance without increasing cybersickness. However, teleportation cannot support continuous spatial updating and can increase disorientation. We therefore designed HyperJump in an attempt to merge the benefits of continuous locomotion and teleportation. HyperJump adds iterative jumps every half second on top of continuous movement and was hypothesized to facilitate faster travel without compromising spatial awareness/orientation. In a user study, participants travelled around a naturalistic virtual city with and without HyperJump (at equivalent maximum speed). They followed waypoints to new landmarks, stopped near them, and pointed back to all previously visited landmarks in random order. HyperJump was added to two continuous locomotion interfaces (controller- and leaning-based). Participants had better spatial awareness/orientation with the leaning-based interfaces than with the controller-based ones (assessed via rapid pointing). With HyperJump, participants travelled significantly faster while staying on the desired course and without impairing their spatial knowledge. This provides evidence that optical flow can be effectively limited such that it facilitates faster travel without compromising spatial orientation. In future design iterations, we plan to use audio-visual effects to support jumping metaphors that help users better anticipate and interpret jumps, and to use much larger virtual environments requiring faster speeds, where cybersickness becomes increasingly prevalent and teleporting thus more important.
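
Conceptually, HyperJump integrates the continuously controlled velocity but releases the accumulated displacement in discrete jumps, here every half second, so the average speed matches the continuous condition. A minimal sketch under that interpretation (the timing scheme and integration details are our assumptions):

```python
def hyperjump_positions(velocities, dt=0.01, jump_interval=0.5):
    """Yield 1D avatar positions where continuously integrated
    displacement is applied in discrete jumps every `jump_interval`
    seconds (interval from the paper; the rest is our sketch)."""
    position = 0.0
    pending = 0.0    # displacement accumulated since the last jump
    elapsed = 0.0
    for v in velocities:
        pending += v * dt          # integrate the continuous control
        elapsed += dt
        if elapsed + 1e-9 >= jump_interval:   # time for the next jump
            position += pending               # apply it all at once
            pending = 0.0
            elapsed = 0.0
        yield position

# constant 5 m/s for one second: two 2.5 m jumps instead of smooth motion
trace = list(hyperjump_positions([5.0] * 100))
print(trace[49], trace[99])   # 2.5 at t = 0.5 s, 5.0 at t = 1.0 s
```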

4.
IEEE Trans Vis Comput Graph; 28(4): 1792-1809, 2022 Apr.
Article in English | MEDLINE | ID: mdl-32946395

ABSTRACT

Flying in virtual reality (VR) using standard handheld controllers can be cumbersome and contribute to unwanted side effects such as motion sickness and disorientation. This article investigates a novel hands-free flying interface, HeadJoystick, in which the user moves their head like a joystick handle toward the target direction to control virtual translation velocity. The user sits on a regular office swivel chair and rotates it physically to control virtual rotation with a 1:1 mapping. We evaluated short-term (Study 1) and extended usage effects through repeated usage (Study 2) of HeadJoystick versus handheld interfaces in two within-subject studies, where participants flew through a sequence of increasingly difficult tunnels in the sky. Using HeadJoystick instead of handheld interfaces improved user experience and performance in both studies, including accuracy, precision, ease of learning, ease of use, usability, rated potential for long-term use, presence, immersion, sensation of self-motion, and enjoyment, while reducing workload. These findings demonstrate the benefits of leaning-based interfaces for VR flying and potentially for similar telepresence applications such as remote flight with quadcopter drones. From a theoretical perspective, we also show how leaning-based motion cueing interacts with full physical rotation to improve user experience and performance compared to the gamepad.


Subject(s)
Motion Sickness, Virtual Reality, Computer Graphics, Hand, Humans, Motion Sickness/prevention & control, User-Computer Interface
5.
IEEE Trans Vis Comput Graph; 28(2): 1342-1362, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34591771

ABSTRACT

Augmented reality applications allow users to enrich their real surroundings with additional digital content. However, due to the limited field of view of augmented reality devices, it can be difficult to become aware of newly emerging information inside or outside the field of view. Typical visual conflicts such as clutter and occlusion of augmentations occur and can be further aggravated in dense information spaces. In this article, we evaluate how multisensory cue combinations can improve awareness of moving out-of-view objects in narrow field of view augmented reality displays. We distinguish between proximity and transition cues, each delivered in a visual, auditory, or tactile manner. Proximity cues are intended to enhance spatial awareness of approaching out-of-view objects, while transition cues inform the user that an object has just entered the field of view. In Study 1, user preference was determined for six different cue combinations via forced-choice decisions. In Study 2, the three most preferred modes were then evaluated with respect to performance and awareness measures in a divided-attention reaction task. Both studies were conducted under varying noise levels. We show that on average the visual-tactile combination leads to 63% faster and the audio-tactile combination to 65% faster reactions to incoming out-of-view augmentations than their visual-audio counterpart, indicating the high usefulness of tactile transition cues. We further show a detrimental effect of visual and audio noise on performance when feedback included visual proximity cues. Based on these results, we make recommendations for determining which cue combination is appropriate for which application.
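
A proximity cue ramps up as an out-of-view object approaches the field-of-view boundary, and a transition cue fires once when the object enters the view. A small illustrative update function (the thresholds and linear intensity ramp are assumptions; the output could drive a visual, auditory, or tactile channel):

```python
def cue_update(angle_deg, half_fov_deg, ramp_deg, was_in_view):
    """Return (proximity_intensity, fire_transition, in_view) for an
    object at `angle_deg` from the view centre. Intensity rises
    linearly over the last `ramp_deg` degrees outside the FOV edge."""
    in_view = angle_deg <= half_fov_deg
    fire_transition = in_view and not was_in_view   # edge-triggered
    if in_view:
        proximity = 0.0          # object visible: no proximity cue
    else:
        outside = angle_deg - half_fov_deg
        proximity = max(0.0, 1.0 - outside / ramp_deg)
    return proximity, fire_transition, in_view

# object approaching a display with a 45 degree half-FOV, 30 degree ramp
in_view = False
for angle in (90, 70, 50, 40):
    intensity, fired, in_view = cue_update(angle, 45, 30, in_view)
    print(angle, round(intensity, 2), fired)
```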


Subject(s)
Augmented Reality, Cues, Computer Graphics, Touch, Visual Perception
6.
IEEE Trans Vis Comput Graph; 27(1): 165-177, 2021 Jan.
Article in English | MEDLINE | ID: mdl-31443029

ABSTRACT

Walking has long been considered the gold standard for navigation in virtual reality research. Although full rotation is no longer a technical challenge, physical translation is still restricted by limited tracked areas. While rotational information has been shown to be important, the benefit of the translational component remains unclear, with mixed results in previous work. To address this gap, we conducted a mixed-method experiment comparing four levels of translational cues and control: none (using the trackpad of the HTC Vive controller to translate), upper-body leaning (sitting on a "NaviChair" and leaning the upper body to locomote), whole-body leaning/stepping (standing on a platform called NaviBoard and leaning the whole body or stepping one foot off the center to navigate), and full translation (physically walking). Results showed that translational cues and control had significant effects on various measures, including task performance, task load, and simulator sickness. While participants performed significantly worse when they used a controller with no embodied translational cues, there was no significant difference between the NaviChair, the NaviBoard, and actual walking. These results suggest that translational body-based motion cues and control from a low-cost leaning/stepping interface might provide enough sensory information to support spatial updating, spatial awareness, and efficient locomotion in VR, although future work will need to investigate how these results generalize to other tasks and scenarios.

7.
PLoS One; 15(11): e0242078, 2020.
Article in English | MEDLINE | ID: mdl-33211736

ABSTRACT

Telepresence robots allow users to be spatially and socially present in remote environments. Yet it can be challenging to operate telepresence robots remotely, especially in dense environments such as academic conferences or workplaces. In this paper, we focus primarily on how a speed control method that automatically slows the telepresence robot down as it approaches obstacles affects user behavior. In our first user study, participants drove the robot through a static obstacle course with narrow sections. Results indicate that the automatic speed control method significantly decreases the number of collisions. For the second study, we designed a more naturalistic, conference-like experimental environment with tasks requiring social interaction, and collected subjective responses from participants as they navigated through the environment. While about half of the participants preferred automatic speed control because it allowed for smoother and safer navigation, others did not want to be influenced by an automatic mechanism. Overall, the results suggest that automatic speed control simplifies the user interface of telepresence robots in static dense environments, but should be offered as an option, especially in situations involving social interaction.
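
The core of such a speed control method is capping the commanded velocity as a function of the distance to the nearest obstacle. A minimal sketch, assuming a linear ramp between a stop distance and a free distance (all threshold values are hypothetical):

```python
def speed_limit(nearest_obstacle_m, stop_dist_m=0.3, free_dist_m=2.0,
                max_speed_ms=1.2):
    """Allowed speed as a linear function of the distance to the
    nearest obstacle. All thresholds are hypothetical values."""
    if nearest_obstacle_m <= stop_dist_m:
        return 0.0                      # too close: stop entirely
    if nearest_obstacle_m >= free_dist_m:
        return max_speed_ms             # clear space: full speed
    t = (nearest_obstacle_m - stop_dist_m) / (free_dist_m - stop_dist_m)
    return max_speed_ms * t

commanded_speed = 1.0   # what the remote operator requests
for d in (0.2, 0.8, 3.0):
    print(d, min(commanded_speed, speed_limit(d)))
```

Taking the minimum of the operator's command and the computed limit keeps the operator in control everywhere except near obstacles, which matches the "optionally available" recommendation above.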


Subject(s)
Robotics/instrumentation, Spatial Navigation, Algorithms, Cybernetics, Humans, User-Computer Interface
8.
IEEE Trans Vis Comput Graph; 26(12): 3389-3401, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32941150

ABSTRACT

Current augmented reality displays still have a very limited field of view compared to human vision. To localize out-of-view objects, researchers have predominantly explored visual guidance approaches that visualize information in the limited (in-view) screen space. Unfortunately, visual conflicts such as clutter or occlusion of information often arise, which can lead to poorer search performance and decreased awareness of the physical environment. In this paper, we compare a non-visual guidance approach based on audio-tactile cues with the state-of-the-art visual guidance technique EyeSee360 for localizing out-of-view objects in augmented reality displays with a limited field of view. In our user study, we evaluate both guidance methods in terms of search performance and situation awareness. We show that although audio-tactile guidance is generally slower than the well-performing EyeSee360 in terms of search times, it is on a par regarding hit rate. Moreover, the audio-tactile method provides a significant improvement in situation awareness compared to the visual approach.
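
One plausible audio-tactile encoding, purely for illustration and not necessarily the one used in the paper, pans a sound toward the target's azimuth and scales vibration intensity with its angular distance from the view center:

```python
import math

def audio_tactile_cue(azimuth_deg, elevation_deg):
    """Map an out-of-view object's direction to a stereo pan in
    [-1, 1] and a vibration intensity in [0, 1]. This encoding is an
    assumption for illustration, not the paper's interface."""
    pan = math.sin(math.radians(azimuth_deg))   # left/right panning
    # vibrate more strongly the farther the target is from straight ahead
    angular_distance = math.hypot(azimuth_deg, elevation_deg)
    vibration = min(angular_distance / 180.0, 1.0)
    return pan, vibration

print(audio_tactile_cue(90.0, 0.0))   # hard right, moderate vibration
```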


Subject(s)
Augmented Reality, Cues, Virtual Reality, Adult, Awareness/physiology, Computer Graphics, Equipment Design, Female, Humans, Male, Middle Aged, Task Performance and Analysis, User-Computer Interface, Young Adult
9.
IEEE Trans Haptics; 12(4): 483-496, 2019.
Article in English | MEDLINE | ID: mdl-30990440

ABSTRACT

Touchscreen interaction suffers from occlusion problems: fingers can cover small targets, which makes interacting with such targets challenging. To improve touchscreen interaction accuracy, and consequently the selection of small or hidden objects, we introduce a back-of-device force feedback system for smartphones that combines force feedback on the back with touch input on the front screen. The interface comprises three actuated pins at the back of a smartphone. The pins are driven by microservos and can be actuated at frequencies up to 50 Hz with a maximum amplitude of 5 mm. In a first psychophysical user study, we explored the limits of the system. Thereafter, we demonstrate through a performance study that the proposed interface can enhance touchscreen interaction precision compared to state-of-the-art methods. In particular, the selection of small targets performed remarkably well with force feedback. The study additionally shows that users subjectively felt significantly more accurate with force feedback. Based on the results, we discuss back-to-front feedback design issues and demonstrate potential applications through several prototypical concepts illustrating where back-of-device force feedback could be beneficial.
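
Given the reported actuation limits (up to 50 Hz and 5 mm), a driver for such pins might, for example, pulse harder and faster as the finger nears a hidden target. The proximity mapping below is our own illustrative assumption:

```python
MAX_AMPLITUDE_MM = 5.0   # mechanical limit reported in the abstract
MAX_RATE_HZ = 50.0       # maximum actuation frequency of the pins

def pin_waveform(distance_to_target_px, max_distance_px=100.0):
    """Return (amplitude_mm, frequency_hz) for the back-of-device pins:
    pulse harder and faster as the finger nears the hidden target.
    The proximity mapping itself is an illustrative assumption."""
    closeness = 1.0 - min(distance_to_target_px / max_distance_px, 1.0)
    return MAX_AMPLITUDE_MM * closeness, MAX_RATE_HZ * closeness

print(pin_waveform(25.0))   # 75% closeness -> (3.75 mm, 37.5 Hz)
```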


Subject(s)
Equipment Design, Feedback, Sensory/physiology, Smartphone, Touch Perception/physiology, User-Computer Interface, Adult, Female, Humans, Male, Psychophysics
10.
Front Robot AI; 6: 128, 2019.
Article in English | MEDLINE | ID: mdl-33501143

ABSTRACT

Large display environments are highly suitable for immersive analytics. They provide enough space for effective co-located collaboration and allow users to immerse themselves in the data. To provide the best setting, in terms of visualization and interaction, for the collaborative analysis of a real-world task, we have to understand group dynamics during work on large displays, including what effects different task conditions have on user behavior. In this paper, we investigated the effects of task conditions on group behavior regarding collaborative coupling and territoriality during co-located collaboration on a wall-sized display. To that end, we designed two tasks: one resembling the information-foraging loop and one resembling the connecting-facts activity. Both tasks represent essential sub-processes of the sensemaking process in visual analytics and cause distinct space/display usage conditions. The information-foraging activity requires users to work with individual data elements to look into details, so they predominantly occupy only a small portion of the display. In contrast, the connecting-facts activity requires users to work with the entire information space and therefore to overview the entire display. We observed 12 groups for an average of 2 h each and gathered qualitative and quantitative data in the form of surveys, field notes, video recordings, tracking data, and system logs. During data analysis, we focused specifically on participants' collaborative coupling (in particular, collaboration tightness, coupling styles, user roles, and task subdivision strategies) and territorial behavior. Our results both confirm and extend findings from previous tabletop and wall-sized display studies. We found that participants tended to subdivide the task in order to approach it, in their opinion, more effectively in parallel, and we describe the subdivision strategies for both task conditions. We also identified multiple user roles, as well as a new coupling style that fits neither the loose nor the tight category. Moreover, we observed a territory type not previously described in the literature, which, in our view, can negatively affect the collaboration process of groups with more than two collaborators. Finally, we investigated critical display regions in terms of ergonomics and found that users perceived some regions as less comfortable for extended work. These findings can inform groupware interface design and the development of group behavior models for analytical reasoning and decision making.

11.
IEEE Trans Vis Comput Graph; 25(9): 2821-2837, 2019 Sep.
Article in English | MEDLINE | ID: mdl-30004877

ABSTRACT

In augmented reality (AR), search performance for outdoor tasks is an important metric for evaluating the success of a large number of AR applications. Users must be able to find content quickly, labels and indicators must be clearly noticeable without being invasive, and the user interface should maximize search performance in a variety of conditions. To address these issues, we set up a series of experiments to test the influence of virtual content characteristics such as color, size, and leader lines on search-task performance and noticeability in both real and simulated environments. We evaluate two primary areas: 1) the effects of peripheral field of view (FOV) limitations and labeling techniques on target acquisition during outdoor mobile search, and 2) the influence of local characteristics such as color, size, and motion on text labels over dynamic backgrounds. The first experiment showed that a limited FOV severely limits search performance, but that appropriate placement of labels and leaders within the periphery can alleviate this problem without interfering with walking or decreasing user comfort. In the second experiment, we found that which types of motion are most noticeable differs between optical and video see-through displays, but that blue coloration is the most noticeable in both. These results can aid in designing more effective view management techniques, especially for wider field of view displays.
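
Keeping labels and leader anchors within the visible periphery amounts to clamping an off-screen target's indicator to just inside the FOV boundary. A minimal sketch, assuming a simplified circular FOV model (the angles and margin are hypothetical):

```python
import math

def clamp_to_periphery(target_yaw_deg, target_pitch_deg,
                       half_fov_deg=30.0, margin_deg=3.0):
    """Place an indicator for a (possibly off-screen) target: keep it
    on the target when in view, otherwise clamp it just inside the
    FOV edge along the target's direction. Circular FOV is assumed."""
    radius = math.hypot(target_yaw_deg, target_pitch_deg)
    limit = half_fov_deg - margin_deg
    if radius <= limit:
        return target_yaw_deg, target_pitch_deg   # label sits on target
    scale = limit / radius                        # project onto the rim
    return target_yaw_deg * scale, target_pitch_deg * scale

print(clamp_to_periphery(80.0, 10.0))   # clamped near the FOV edge
```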

12.
IEEE Trans Vis Comput Graph; 18(4): 565-572, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22402683

ABSTRACT

In this paper, we explore techniques that aim to improve site understanding in outdoor augmented reality (AR) applications. While the first-person perspective in AR is a direct way of filtering and zooming in on a portion of the data set, it severely narrows the overview of the situation, particularly over large areas. We present two interactive techniques to overcome this problem: multi-view AR and the variable perspective view. We describe in detail the conceptual, visualization, and interaction aspects of these techniques and their evaluation through a comparative user study. The results we obtained strengthen the validity of our approach and the applicability of our methods to a wide range of application domains.


Subject(s)
User-Computer Interface, Adult, Computer Graphics, Environment, Female, Humans, Male