1.
IEEE Trans Vis Comput Graph ; 30(1): 1336-1346, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37878456

ABSTRACT

Situated visualizations are a type of visualization where data is presented next to its physical referent (i.e., the physical object, space, or person it refers to), often using augmented-reality displays. While situated visualizations can be beneficial in various contexts and have received research attention, they are typically designed with the assumption that the physical referent is visible. However, in practice, a physical referent may be obscured by another object, such as a wall, or may be outside the user's visual field. In this paper, we propose a conceptual framework and a design space to help researchers and user interface designers handle non-visible referents in situated visualizations. We first provide an overview of techniques proposed in the past for dealing with non-visible objects in the areas of 3D user interfaces, 3D visualization, and mixed reality. From this overview, we derive a design space that applies to situated visualizations and employ it to examine various trade-offs, challenges, and opportunities for future research in this area.

2.
PLoS One ; 18(2): e0282281, 2023.
Article in English | MEDLINE | ID: mdl-36821640

ABSTRACT

[This corrects the article DOI: 10.1371/journal.pone.0143962.].

3.
IEEE Trans Vis Comput Graph ; 28(12): 5071-5090, 2022 12.
Article in English | MEDLINE | ID: mdl-34310309

ABSTRACT

Virtual self-avatars have been increasingly used in Augmented Reality (AR), where one can see virtual content embedded into physical space. However, little is known about the perception of self-avatars in such a context. The possibility that their embodiment could be achieved in a similar way to Virtual Reality opens the door to numerous applications in education, communication, entertainment, and the medical field. This article aims to review the literature covering the embodiment of virtual self-avatars in AR. Our goal is (i) to guide readers through the different options and challenges linked to the implementation of AR embodiment systems, (ii) to provide a better understanding of AR embodiment perception by classifying the existing knowledge, and (iii) to offer insight on future research topics and trends for AR and avatar research. To do so, we introduce a taxonomy of virtual embodiment experiences by defining a "body avatarization" continuum. The presented knowledge suggests that the sense of embodiment evolves in the same way in AR as in other settings, but this possibility has yet to be fully investigated. We suggest that, whilst it is yet to be well understood, the embodiment of avatars has a promising future in AR, and conclude by discussing possible directions for research.


Subject(s)
Augmented Reality , Virtual Reality , User-Computer Interface , Computer Graphics , Surveys and Questionnaires
4.
Comput Intell Neurosci ; 2016: 2758103, 2016.
Article in English | MEDLINE | ID: mdl-26819580

ABSTRACT

With stereoscopic displays, a sensation of depth that is too strong can impair visual comfort and may result in fatigue or pain. We used electroencephalography (EEG) to develop a novel brain-computer interface that monitors users' states in order to reduce visual strain. We present the first system that discriminates comfortable conditions from uncomfortable ones during stereoscopic vision using EEG. In particular, we show that either changes in event-related potentials' (ERPs) amplitudes or changes in EEG oscillation power following the presentation of stereoscopic objects can be used to estimate visual comfort. Our system reacts within 1 s to depth variations, achieving 63% accuracy on average (up to 76%) for a single variation and 74% on average when 7 consecutive variations are measured (up to 93%). Performance remains stable (≈62.5%) when a simplified signal-processing pipeline is used to simulate online analyses or when the number of EEG channels is reduced. This study could lead to adaptive systems that automatically adjust stereoscopic displays to users and viewing conditions. For example, it could be possible to match the stereoscopic effect to users' state by modifying the overlap of the left and right images according to the classifier output.


Subject(s)
Brain Mapping , Brain Waves/physiology , Brain/physiology , Depth Perception/physiology , Electroencephalography , Brain-Computer Interfaces , Computer Simulation , Female , Humans , Male , Monte Carlo Method , Photic Stimulation , Statistics, Nonparametric , Surveys and Questionnaires , User-Computer Interface , Young Adult
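The accuracy gain reported in this abstract, from 63% on a single depth variation to 74% when 7 consecutive variations are pooled, is consistent with simple majority voting over independent single-trial decisions. A minimal sketch (the independence assumption and the function name are ours, not from the paper):

```python
from math import comb

def majority_vote_accuracy(p: float, n: int) -> float:
    """Probability that a majority of n independent decisions,
    each correct with probability p, yields the right answer
    (n odd, so no ties)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

single = 0.63            # reported single-variation accuracy
pooled = majority_vote_accuracy(single, 7)
print(f"{pooled:.2f}")   # ~0.77, in line with the reported 74%
```

The real EEG trials are unlikely to be fully independent, which would explain why the measured pooled accuracy (74%) sits slightly below this idealized estimate.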
5.
IEEE Comput Graph Appl ; 36(5): 82-87, 2016.
Article in English | MEDLINE | ID: mdl-28113150

ABSTRACT

The power of interactive 3D graphics, immersive displays, and spatial interfaces is still under-explored in domains whose main target is to enhance creativity and emotional experiences. This article presents a set of works that attempts to extend the frontiers of music creation as well as the experience of audiences attending digital performances. The goal is to connect sounds to interactive 3D graphics that musicians can interact with and the audience can observe.


Subject(s)
Emotions , Creativity , Humans , Imaging, Three-Dimensional , Music
6.
PLoS One ; 10(12): e0143962, 2015.
Article in English | MEDLINE | ID: mdl-26625261

ABSTRACT

Mental-Imagery based Brain-Computer Interfaces (MI-BCIs) allow their users to send commands to a computer using their brain activity alone (typically measured by electroencephalography, EEG), which is processed while they perform specific mental tasks. While very promising, MI-BCIs remain barely used outside laboratories because of the difficulty users encounter in controlling them. Indeed, although some users obtain good control performance after training, a substantial proportion remains unable to reliably control an MI-BCI. This large variability in user performance led the community to look for predictors of MI-BCI control ability. However, these predictors have only been explored for motor-imagery-based BCIs, and mostly for a single training session per subject. In this study, 18 participants were instructed to learn to control an EEG-based MI-BCI by performing 3 MI tasks, 2 of which were non-motor tasks, across 6 training sessions on 6 different days. Relationships between the participants' BCI control performance and their personality, cognitive profile, and neurophysiological markers were explored. While no relevant relationships with neurophysiological markers were found, strong correlations between MI-BCI performance and mental-rotation scores (reflecting spatial abilities) were revealed. A predictive model of MI-BCI performance based on psychometric questionnaire scores is also proposed. A leave-one-subject-out cross-validation process revealed the stability and reliability of this model: it predicted participants' performance with a mean error of less than 3 points. This study determined how users' profiles impact their MI-BCI control ability and thus paves the way for designing novel MI-BCI training protocols adapted to the profile of each user.


Subject(s)
Brain-Computer Interfaces/psychology , Cognition/physiology , Electroencephalography/psychology , Personality/physiology , Adult , Female , Humans , Imagery, Psychotherapy/methods , Learning/physiology , Male , Neurophysiology/methods , Personality Disorders/physiopathology , Reproducibility of Results , Young Adult
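The leave-one-subject-out validation described in this abstract can be sketched as follows. Everything here is a hypothetical stand-in: the single-predictor linear model, the synthetic data, and the function names are ours, not the paper's psychometric model.

```python
import random

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def loso_mae(scores, perfs):
    """Leave-one-subject-out: fit on all but one subject,
    predict the held-out subject, average the absolute errors."""
    errors = []
    for i in range(len(scores)):
        xs = scores[:i] + scores[i + 1:]
        ys = perfs[:i] + perfs[i + 1:]
        a, b = fit_line(xs, ys)
        errors.append(abs(perfs[i] - (a * scores[i] + b)))
    return sum(errors) / len(errors)

# Synthetic data standing in for 18 subjects' mental-rotation
# scores and BCI performances (both hypothetical).
random.seed(0)
scores = [random.uniform(20, 40) for _ in range(18)]
perfs = [0.8 * s + 40 + random.gauss(0, 2) for s in scores]
print(round(loso_mae(scores, perfs), 2))
```

Because each fold holds out a different subject entirely, the resulting mean error estimates how the model would generalize to a new user, which is what a "mean error of less than 3 points" claim refers to.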
7.
IEEE Comput Graph Appl ; 33(2): 80-5, 2013.
Article in English | MEDLINE | ID: mdl-24807943

ABSTRACT

A museum exhibition on the Lascaux caves provides the opportunity to experiment with touch-based interfaces manipulating 3D virtual objects. The researchers targeted three tasks: observing rare objects, reassembling object fragments, and reproducing artwork.


Subject(s)
Art , Public Sector , Touch , User-Computer Interface , Computer Graphics
8.
IEEE Comput Graph Appl ; 28(4): 58-62, 2008.
Article in English | MEDLINE | ID: mdl-18663815

ABSTRACT

To increase the input space of mobile devices, the authors developed a proof-of-concept 3D elastic controller that easily adapts to mobile devices. This embedded device improves the completion of high-level interaction tasks such as visualization of large documents and navigation in 3D environments. It also opens new directions for tomorrow's mobile applications.


Subject(s)
Cell Phone/instrumentation , Computer Graphics/instrumentation , Computer Peripherals , Imaging, Three-Dimensional/instrumentation , Imaging, Three-Dimensional/methods , Internet , Software , User-Computer Interface , Equipment Design , Equipment Failure Analysis , Travel