Results 1 - 4 of 4

1.
Article in English | MEDLINE | ID: mdl-38546995

ABSTRACT

This research investigated how the similarity of the rendering parameters of background and foreground objects affected egocentric depth perception in indoor virtual and augmented environments. We refer to the similarity of the rendering parameters as visual 'congruence'. Study participants manipulated the depth of a sphere to match the depth of a designated target peg. In the first experiment, the sphere and peg were both virtual, while in the second experiment, the sphere was virtual and the peg was real. In both experiments, depth perception accuracy was found to depend on the levels of realism and congruence between the sphere, pegs, and background. In Experiment 1, realistic backgrounds led to overestimation of depth, whereas depth was underestimated when the background was virtual and depth cues were applied to the sphere and target peg. In Experiment 2, the background and target pegs were real but were matched with the virtual sphere; compared with Experiment 1, realistically rendered targets prompted underestimation and more accurate placement of the manipulated object. These findings suggest that congruence can affect distance estimation and that the underestimation effect in the AR environment resulted from the increased graphical fidelity of the foreground target and background.
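
To make the over- and underestimation pattern above concrete, the sketch below computes a signed depth-matching error: the difference between the matched and true target depth as a percentage of the true depth, so positive values indicate overestimation and negative values underestimation. This is a minimal illustration in Python; the function name, variable names, and example distances are assumptions, not taken from the study.

# Hypothetical sketch: signed depth-matching error for a depth-matching task
# (illustrative only; not the authors' analysis code).
def signed_depth_error(matched_depth_m: float, true_depth_m: float) -> float:
    """Percentage error: positive = overestimation, negative = underestimation."""
    return 100.0 * (matched_depth_m - true_depth_m) / true_depth_m

# Example: a sphere placed at 3.30 m when the target peg is at 3.00 m.
print(round(signed_depth_error(3.30, 3.00), 1))  # 10.0 (overestimation)
print(round(signed_depth_error(2.70, 3.00), 1))  # -10.0 (underestimation)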

2.
IEEE Trans Vis Comput Graph; 24(3): 1331-1344, 2018 Mar.
Article in English | MEDLINE | ID: mdl-28212088

ABSTRACT

Multi-scale virtual environments contain geometric details ranging over several orders of magnitude and typically employ out-of-core rendering techniques. When displayed in virtual reality systems, this entails using a 7-degree-of-freedom (DOF) view model in which view scale is a separate seventh DOF in addition to the 6DOF view pose. Dynamic adjustment of this and other view parameters becomes very important to usability. In this paper, we evaluate how two adjustment techniques interact with uni- and bi-manual 7DOF navigation in DesktopVR and a CAVE. The travel task has two stages: an initial targeted zoom and a detailed geometric inspection. The results show benefits of the auto-adjustments for completion time and stereo fusion issues, but only in certain circumstances. Peculiar view configuration examples show the difficulty of creating robust adjustment rules.
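
For readers unfamiliar with the 7DOF view model mentioned above, the sketch below shows one plausible representation: a 6DOF view pose (position plus orientation) extended with a uniform view scale as the seventh degree of freedom, so that zooming in a multi-scale environment changes only the scale term. The class and method names are illustrative assumptions, not taken from the paper or from any particular VR toolkit.

# Illustrative 7DOF view state: 6DOF pose (position + orientation quaternion)
# plus a uniform view scale as the 7th degree of freedom (assumed layout).
from dataclasses import dataclass, field
import numpy as np

@dataclass
class View7DOF:
    position: np.ndarray = field(default_factory=lambda: np.zeros(3))  # x, y, z
    rotation: np.ndarray = field(
        default_factory=lambda: np.array([1.0, 0.0, 0.0, 0.0]))  # world-to-view rotation, unit quaternion (w, x, y, z)
    scale: float = 1.0  # 7th DOF: view scale

    def world_to_view(self, p_world: np.ndarray) -> np.ndarray:
        """Map a world-space point into view space: translate, rotate, then scale."""
        w, x, y, z = self.rotation
        # Rotation matrix of the unit quaternion (w, x, y, z).
        R = np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])
        return self.scale * (R @ (p_world - self.position))

# Zooming during multi-scale navigation changes only the 7th DOF (scale):
view = View7DOF(position=np.array([10.0, 0.0, 0.0]), scale=0.01)
print(view.world_to_view(np.array([12.0, 0.0, 0.0])))  # ~[0.02, 0, 0]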

3.
Faraday Discuss; 169: 195-207, 2014.
Article in English | MEDLINE | ID: mdl-25340471

ABSTRACT

RiboVision is a visualization and analysis tool for the simultaneous display of multiple layers of diverse information on primary (1D), secondary (2D), and three-dimensional (3D) structures of ribosomes. The ribosome is a macromolecular complex containing ribosomal RNA and ribosomal proteins and is a key component of life, responsible for the synthesis of proteins in all living organisms. RiboVision is intended for rapid retrieval, analysis, filtering, and display of a variety of ribosomal data. Preloaded information includes 1D, 2D, and 3D structures augmented by base-pairing, base-stacking, and other molecular interactions. RiboVision is preloaded with rRNA secondary structures, rRNA domains and helical structures, phylogeny, crystallographic thermal factors, and more. RiboVision contains structures of ribosomal proteins and a database of their molecular interactions with rRNA. It contains preloaded structures and data for two bacterial ribosomes (Thermus thermophilus and Escherichia coli), one archaeal ribosome (Haloarcula marismortui), and three eukaryotic ribosomes (Saccharomyces cerevisiae, Drosophila melanogaster, and Homo sapiens). RiboVision revealed several major discrepancies between the 2D and 3D structures of the rRNAs of the small and large subunits (SSU and LSU). Revised structures mapped with a variety of data are available in RiboVision as well as in a public gallery. RiboVision is designed to allow users to distill complex data quickly and to easily generate publication-quality images of data mapped onto secondary structures. Users can readily import and analyze their own data in the context of other work. The package allows users to import and map data from CSV files directly onto the 1D, 2D, and 3D levels of structure. RiboVision has features roughly analogous to web-based map services, capable of seamlessly switching the type of data displayed and the resolution or magnification of the display. RiboVision is available online.


Subjects
RNA, Ribosomal/chemistry; Ribosomal Proteins/chemistry; Ribosomes/chemistry; Nucleic Acid Conformation; Software
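
The abstract above notes that users can import and map their own data from CSV files onto the 1D, 2D, and 3D structures. The snippet below sketches what such a per-residue input and a simple value-to-color mapping might look like before values are painted onto a structure; the two-column CSV layout, function names, and blue-to-red ramp are assumptions for illustration, not RiboVision's actual input format or API.

# Hypothetical sketch of per-residue CSV data of the kind a tool like RiboVision
# maps onto rRNA structures (assumed layout: residue index + numeric value).
import csv
import io

csv_text = """residue,value
5,0.12
6,0.87
7,0.45
"""

def load_residue_values(handle):
    """Return {residue_index: value} from a two-column CSV."""
    return {int(row["residue"]): float(row["value"]) for row in csv.DictReader(handle)}

def value_to_color(v, vmin=0.0, vmax=1.0):
    """Map a value onto a simple blue-to-red ramp (hex RGB) for structure coloring."""
    t = min(max((v - vmin) / (vmax - vmin), 0.0), 1.0)
    return "#{:02x}00{:02x}".format(int(255 * t), int(255 * (1 - t)))

data = load_residue_values(io.StringIO(csv_text))
colors = {res: value_to_color(v) for res, v in data.items()}
print(colors[6])  # '#dd0021' for value 0.87
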
4.
IEEE Trans Vis Comput Graph; 14(6): 1165-1172, 2008.
Article in English | MEDLINE | ID: mdl-18988960

ABSTRACT

Traditional geospatial information visualizations often present views that restrict the user to a single perspective. When zoomed out, local trends and anomalies become suppressed and lost; when zoomed in for local inspection, spatial awareness and comparison between regions become limited. In our model, coordinated visualizations are integrated within individual probe interfaces, which depict the local data in user-defined regions of interest. Our probe concept can be incorporated into a variety of geospatial visualizations to empower users with the ability to observe, coordinate, and compare data across multiple local regions. It is especially useful when dealing with complex simulations or analyses where behavior in one locality differs from that in other localities and from the system as a whole. We illustrate the effectiveness of our technique over traditional interfaces by incorporating it within three existing geospatial visualization systems: an agent-based social simulation, a census data exploration tool, and a 3D GIS environment for analyzing urban change over time. In each case, the probe-based interaction enhances spatial awareness, improves inspection and comparison capabilities, expands the range of scopes, and facilitates collaboration among multiple users.
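
The probe concept described above amounts to attaching a local, coordinated view to a user-defined region of interest. The sketch below shows one minimal way such a probe might select and summarize point data within its region so that several probes can be compared side by side; the class name, rectangular-region choice, and mean-value summary are illustrative assumptions, not the authors' implementation.

# Minimal sketch of a geospatial "probe": a user-defined region of interest
# that extracts and summarizes local data for its own coordinated view.
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float, float]  # (longitude, latitude, value)

@dataclass
class Probe:
    lon_min: float
    lon_max: float
    lat_min: float
    lat_max: float

    def select(self, points: List[Point]) -> List[Point]:
        """Keep only the points that fall inside this probe's region of interest."""
        return [p for p in points
                if self.lon_min <= p[0] <= self.lon_max
                and self.lat_min <= p[1] <= self.lat_max]

    def summary(self, points: List[Point]) -> float:
        """Mean of the local data values, shown in the probe's own view."""
        local = self.select(points)
        return sum(p[2] for p in local) / len(local) if local else float("nan")

# Two probes over different localities can be compared side by side:
data = [(-84.39, 33.75, 12.0), (-84.40, 33.76, 18.0), (-74.00, 40.71, 30.0)]
probe_a = Probe(-84.5, -84.3, 33.6, 33.9)
probe_b = Probe(-74.1, -73.9, 40.6, 40.8)
print(probe_a.summary(data), probe_b.summary(data))  # 15.0 30.0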
