1.
IEEE Trans Vis Comput Graph ; 30(5): 2400-2410, 2024 May.
Article in English | MEDLINE | ID: mdl-38437088

ABSTRACT

A prerequisite to improving the presence of a user in mixed reality (MR) is the ability to measure and quantify presence. Traditionally, subjective questionnaires have been used to assess the level of presence. However, recent studies have shown that presence is correlated with objective and systemic human performance measures such as reaction time. These studies analyze the correlation between presence and reaction time when technical factors such as object realism and plausibility of the object's behavior change. However, additional psychological and physiological human factors can also impact presence. It is unclear if presence can be mapped to and correlated with reaction time when human factors such as conditioning are involved. To answer this question, we conducted an exploratory study (N=60) where the relationship between presence and reaction time was assessed under three different conditioning scenarios: control, positive, and negative. We demonstrated that human factors impact presence. We found that presence scores and reaction times are significantly correlated (correlation coefficient of -0.64), suggesting that the impact of human factors on reaction time correlates with its effect on presence. In demonstrating that, our study takes another important step toward using objective and systemic measures like reaction time as a presence measure.
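
A correlation of this kind can be computed as a Pearson correlation between per-participant presence scores and mean reaction times; the abstract does not specify the exact analysis, so the sketch below is illustrative only, with placeholder data rather than study data.

    # Illustrative sketch only: Pearson correlation between presence scores and
    # mean reaction times. The numbers below are placeholders, not study data.
    import numpy as np
    from scipy import stats

    presence_scores = np.array([5.1, 4.2, 6.0, 3.8, 5.5])      # questionnaire score per participant
    reaction_times = np.array([0.42, 0.55, 0.38, 0.61, 0.45])  # mean reaction time (s) per participant

    r, p_value = stats.pearsonr(presence_scores, reaction_times)
    print(f"r = {r:.2f}, p = {p_value:.3f}")  # a negative r means faster responses at higher presence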


Subjects
Augmented Reality, Humans, Reaction Time, Computer Graphics, Surveys and Questionnaires
2.
Article in English | MEDLINE | ID: mdl-37751337

ABSTRACT

Measuring presence is critical to improving user involvement and performance in Mixed Reality (MR). Presence, a crucial aspect of MR, is traditionally gauged using subjective questionnaires, leading to a lack of time-varying responses and susceptibility to user bias. Inspired by the existing literature on the relationship between presence and human performance, the proposed methodology systematically measures a user's reaction time to a visual stimulus as they interact within a manipulated MR environment. We explore the user's reaction time as a quantity that can be easily measured using the systemic tools available in modern MR devices. We conducted an exploratory study (N=40) with two experiments designed to alter the users' sense of presence by manipulating place illusion and plausibility illusion. We found a significant correlation between presence scores and reaction times, with a correlation coefficient of -0.65, suggesting that users with a higher sense of presence responded more swiftly to stimuli. We developed a model that estimates a user's presence level from reaction time values with an accuracy of up to 80%. While our study suggests that reaction time can be used as a measure of presence, further investigation is needed to improve the accuracy of the model.
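
The abstract does not specify which model maps reaction times to presence levels, so the sketch below only illustrates the general idea with a simple logistic-regression classifier and placeholder data; it is not the authors' model.

    # Illustrative sketch only: estimate a binary presence level (high vs. low)
    # from reaction-time features. Model choice and data are placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rt_features = np.array([[0.38], [0.41], [0.55], [0.60], [0.44], [0.58]])  # mean RT (s) per session
    presence_labels = np.array([1, 1, 0, 0, 1, 0])  # 1 = high presence, 0 = low (from questionnaires)

    scores = cross_val_score(LogisticRegression(), rt_features, presence_labels,
                             cv=3, scoring="accuracy")
    print("cross-validated accuracy:", scores.mean())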

3.
IEEE Trans Vis Comput Graph ; 27(5): 2608-2617, 2021 May.
Article in English | MEDLINE | ID: mdl-33750710

ABSTRACT

Current avatar representations used in immersive VR applications lack features that may be important for supporting natural behaviors and effective communication among individuals. This study investigates the impact of the visual and nonverbal cues afforded by three different types of avatar representations in the context of several cooperative tasks. The avatar types we compared are No_Avatar (HMD and controllers only), Scanned_Avatar (wearing an HMD), and Real_Avatar (video see-through). The subjective and objective measures we used to assess the quality of interpersonal communication include surveys of social presence, interpersonal trust, communication satisfaction, and attention to behavioral cues, plus two behavioral measures: duration of mutual gaze and number of unique words spoken. We found that participants reported higher levels of trustworthiness in the Real_Avatar condition compared to the Scanned_Avatar and No_Avatar conditions. They also reported a greater level of attentional focus on facial expressions compared to the No_Avatar condition and, for some tasks, spent more time attempting to engage in mutual gaze behavior compared to the Scanned_Avatar and No_Avatar conditions. In both the Real_Avatar and Scanned_Avatar conditions, participants reported higher levels of co-presence compared with the No_Avatar condition. In the Scanned_Avatar condition, compared with the Real_Avatar and No_Avatar conditions, participants reported higher levels of attention to body posture. Overall, our exit survey revealed that a majority of participants (66.67%) reported a preference for the Real_Avatar, compared with 25.00% for the Scanned_Avatar and 8.33% for the No_Avatar. These findings provide novel insight into how a user's experience in a social VR scenario is affected by the type of avatar representation provided.


Subjects
Computer Graphics, Interpersonal Relations, Social Environment, Virtual Reality, Adolescent, Adult, Communication, Cues, Facial Expression, Female, Humans, Male, Task Performance and Analysis, Trust/psychology, Young Adult
4.
Cogn Res Princ Implic ; 6(1): 21, 2021 Mar 24.
Article in English | MEDLINE | ID: mdl-33761042

ABSTRACT

When a visual search target frequently appears in one target-rich region of space, participants learn to search there first, resulting in faster reaction time when the target appears there than when it appears elsewhere. Most research on this location probability learning (LPL) effect uses 2-dimensional (2D) search environments that are distinct from real-world search contexts, and the few studies on LPL in 3-dimensional (3D) contexts include complex visual cues or foraging tasks and therefore may not tap into the same habit-like learning mechanism as 2D LPL. The present study aimed to establish a baseline evaluation of LPL in controlled 3D search environments using virtual reality. The use of a virtual 3D search environment allowed us to compare LPL for information within a participant's initial field of view to LPL for information behind participants, outside of the initial field of view. Participants searched for a letter T on the ground among letter Ls in a large virtual space that was devoid of complex visual cues or landmarks. The T appeared in one target-rich quadrant of the floor space on half of the trials during the training phase. The target-rich quadrant appeared in front of half of the participants and behind the other half. LPL was considerably greater in the former condition than in the latter. This reveals an important constraint on LPL in real-world environments and indicates that consistent search patterns and consistent egocentric spatial coding are essential for this form of visual statistical learning in 3D environments.
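
As a concrete illustration of the probability manipulation described above (not the authors' experiment code), a training schedule in which the target appears in one rich quadrant on half of the trials can be generated as follows; quadrant names and trial counts are placeholders.

    # Illustrative sketch: biased target-location schedule in which the target
    # appears in one "rich" quadrant on 50% of trials and in each remaining
    # quadrant on roughly 16.7% of trials.
    import random

    QUADRANTS = ["front-left", "front-right", "back-left", "back-right"]

    def make_training_schedule(n_trials, rich_quadrant, rich_prob=0.5, seed=0):
        rng = random.Random(seed)
        others = [q for q in QUADRANTS if q != rich_quadrant]
        schedule = []
        for _ in range(n_trials):
            if rng.random() < rich_prob:
                schedule.append(rich_quadrant)
            else:
                schedule.append(rng.choice(others))
        return schedule

    print(make_training_schedule(12, rich_quadrant="back-left"))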


Subjects
Probability Learning, Virtual Reality, Cues, Humans, Reaction Time, Spatial Learning
5.
Front Robot AI ; 6: 44, 2019.
Article in English | MEDLINE | ID: mdl-33501060

ABSTRACT

Architectural design drawings commonly include entourage elements: accessory objects, such as people, plants, furniture, etc., that can help to provide a sense of the scale of the depicted structure and "bring the drawings to life" by illustrating typical usage scenarios. In this paper, we describe two experiments that explore the extent to which adding a photo-realistic, three-dimensional model of a familiar person as an entourage element in a virtual architectural model might help to address the classical problem of distance underestimation in these environments. In our first experiment, we found no significant differences in participants' distance perception accuracy in a semi-realistic virtual hallway model in the presence of a static or animated figure of a familiar virtual human, compared to their perception of distances in a hallway model in which no virtual human appeared. In our second experiment, we found no significant differences in distance estimation accuracy when a moderately larger-than-life or smaller-than-life virtual human entourage model was present, compared to when a right-sized virtual human model was used. The results of these two experiments suggest that virtual human entourage has limited potential to influence people's sense of the scale of an indoor space, and that simply adding entourage, even including an exact-scale model of a familiar person, will not, on its own, directly evoke more accurate egocentric distance judgments in VR.

7.
IEEE Trans Vis Comput Graph ; 18(4): 538-45, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22402680

ABSTRACT

Walking is the most natural form of locomotion for humans, and real walking interfaces have demonstrated their benefits for several navigation tasks. With recently proposed redirection techniques, it becomes possible to overcome the space limitations imposed by tracking sensors or laboratory setups, and, theoretically, it is now possible to walk through arbitrarily large virtual environments. However, walking as the sole locomotion technique has drawbacks, in particular for long distances, such that even in the real world we tend to support walking with passive or active transportation for longer-distance travel. In this article, we show that concepts from the field of redirected walking can be applied to movements with transportation devices. We conducted psychophysical experiments to determine perceptual detection thresholds for redirected driving, and set these in relation to results from redirected walking. We show that redirected walking-and-driving approaches can easily be realized in immersive virtual reality laboratories, e.g., with electric wheelchairs, and that such systems can combine the advantages of real walking in confined spaces with the benefits of vehicle-based self-motion for longer-distance travel.
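
As background on redirection techniques in general (not the specific gains or detection thresholds studied in this article), the core idea is to scale the user's tracked motion by a gain before applying it to the virtual viewpoint, so that physical and virtual movement diverge by an ideally imperceptible amount. A minimal sketch with an illustrative rotation gain:

    # Illustrative sketch: apply a rotation gain to tracked yaw changes so the
    # virtual turn differs slightly from the physical one. The gain value is
    # arbitrary, not a threshold reported in the article.
    def apply_rotation_gain(virtual_yaw_deg, tracked_yaw_delta_deg, rotation_gain=1.1):
        """Map a physical yaw change to a virtual yaw change."""
        return virtual_yaw_deg + rotation_gain * tracked_yaw_delta_deg

    virtual_yaw = 0.0
    for physical_turn in [10.0, 10.0, -5.0]:  # degrees turned by the user (or wheelchair)
        virtual_yaw = apply_rotation_gain(virtual_yaw, physical_turn)
        print(round(virtual_yaw, 1))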


Subjects
Automobile Driving, User-Computer Interface, Walking, Computer Graphics, Humans, Psychophysics, Space Perception, Wheelchairs
8.
IEEE Trans Vis Comput Graph ; 13(6): 1270-7, 2007.
Article in English | MEDLINE | ID: mdl-17968074

ABSTRACT

In many applications, it is important to understand the individual values of, and relationships between, multiple related scalar variables defined across a common domain. Several approaches have been proposed for representing data in these situations. In this paper we focus on strategies for the visualization of multivariate data that rely on color mixing. In particular, through a series of controlled observer experiments, we seek to establish a fundamental understanding of the information-carrying capacities of two alternative methods for encoding multivariate information using color: color blending and color weaving. We begin with a baseline experiment in which we assess participants' abilities to accurately read numerical data encoded in six different basic color scales defined in the L*a*b* color space. We then assess participants' abilities to read combinations of 2, 3, 4, and 6 different data values represented in a common region of the domain, encoded using either color blending or color weaving. In color blending, a single mixed color is formed via a linear combination of the individual values in L*a*b* space; in color weaving, the original individual colors are displayed side-by-side in a high-frequency texture that fills the region. A third experiment was conducted to clarify some of the trends observed in the second experiment regarding color contrast and its effect on the magnitude of error. The results indicate that when the component colors are represented side-by-side in a high-frequency texture, most participants' abilities to infer the values of individual components are significantly improved, relative to when the colors are blended. Participants' performance was significantly better with color weaving, particularly when more than 2 colors were used, and even when the individual colors subtended only 3 minutes of visual angle in the texture. However, the information-carrying capacity of the color weaving approach has its limits. We found that participants' abilities to accurately interpret each of the individual components in a high-frequency color texture typically fall off as the number of components increases from 4 to 6. We found no significant advantages, in either color blending or color weaving, to using color scales based on component hues that are more widely separated in the L*a*b* color space. Furthermore, we found some indications that extra difficulties may arise when opponent hues are employed.
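
The color-blending encoding described above forms a single color as a linear combination of the component colors in L*a*b* space. A minimal sketch of that operation, using illustrative placeholder colors rather than the study's stimuli:

    # Illustrative sketch: blend component colors (given as L*, a*, b* triples)
    # via a weighted linear combination; an unweighted mean by default.
    import numpy as np

    def blend_lab(component_colors_lab, weights=None):
        """Return a single blended L*a*b* color for a region."""
        colors = np.asarray(component_colors_lab, dtype=float)
        if weights is None:
            weights = np.full(len(colors), 1.0 / len(colors))
        return np.average(colors, axis=0, weights=weights)

    components = [(60.0, 55.0, 30.0), (70.0, -40.0, 45.0), (50.0, 10.0, -60.0)]
    print(blend_lab(components))  # one mixed color standing in for all components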


Subjects
Color, Computer Graphics, Database Management Systems, Factual Databases, Information Storage and Retrieval/methods, Theoretical Models, User-Computer Interface, Computer Simulation, Multivariate Analysis
11.
IEEE Trans Vis Comput Graph ; 10(4): 471-83, 2004.
Article in English | MEDLINE | ID: mdl-18579974

ABSTRACT

In this paper, we describe the results of two comprehensive controlled observer experiments intended to yield insight into the following question: If we could design the ideal texture pattern to apply to an arbitrary smoothly curving surface in order to enable its 3D shape to be most accurately and effectively perceived, what would the characteristics of that texture pattern be? We begin by reviewing the results of our initial study in this series, which were presented at the 2003 IEEE Symposium on Information Visualization, and offer an expanded analysis of those findings. We continue by presenting the results of a follow-on study in which we sought to more specifically investigate the separate and combined influences on shape perception of particular texture components, with the goal of obtaining a clearer view of their potential information carrying capacities. In each study, we investigated the observers' ability to identify the intrinsic shape category of a surface patch (elliptical, hyperbolic, cylindrical, or flat) and its extrinsic surface orientation (convex, concave, both, or neither). In our first study, we compared performance under eight different texture type conditions, plus two projection conditions (perspective or orthographic) and two viewing conditions (head-on or oblique). In this study, we found that: 1) Shape perception was better facilitated, in general, by the bidirectional "principal direction grid" pattern than by any of the seven other patterns tested; 2) shape type classification accuracy remained high under the orthographic projection condition for some texture types when the viewpoint was oblique; 3) perspective projection was required for accurate surface orientation classification; and 4) shape classification accuracy was higher when the surface patches were oriented at a (generic) oblique angle to the line of sight than when they were oriented (in a nongeneric pose) to face the viewpoint straight on. In our second study, we compared performance under eight new texture type conditions, redesigned to facilitate gathering insight into the cumulative effects of specific individual directional components in a wider variety of multidirectional texture patterns. In this follow-on study, we found that shape classification accuracy was equivalently good under a variety of test patterns that included components following either the first or first and second principal directions, in addition to other directions, suggesting that a principal direction grid texture is not the only possible "best option" for enhancing shape representation.
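
As background on the shape categories used above: they correspond to the signs of a patch's two principal curvatures (same sign: elliptical; opposite signs: hyperbolic; exactly one near zero: cylindrical; both near zero: flat). A small illustrative sketch, not code from the paper:

    # Illustrative sketch: classify a surface patch's intrinsic shape category
    # from its two principal curvatures k1 and k2.
    def classify_patch(k1, k2, eps=1e-6):
        near_zero = [abs(k) < eps for k in (k1, k2)]
        if all(near_zero):
            return "flat"
        if any(near_zero):
            return "cylindrical"
        return "elliptical" if k1 * k2 > 0 else "hyperbolic"

    for k1, k2 in [(0.4, 0.3), (0.4, -0.3), (0.4, 0.0), (0.0, 0.0)]:
        print(classify_patch(k1, k2))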


Subjects
Decision Making/physiology, Form Perception/physiology, Visual Pattern Recognition/physiology, Task Performance and Analysis, Humans