Results 1 - 7 of 7
1.
IEEE Trans Vis Comput Graph ; 29(12): 4858-4873, 2023 Dec.
Article in English | MEDLINE | ID: mdl-35857736

ABSTRACT

Immersive visualization in virtual reality (VR) allows us to exploit visual cues for perception in 3D space, yet few existing studies have measured the effects of these cues. Across a desktop monitor and a head-mounted display (HMD), we assessed scatterplot designs that vary their use of visual cues (motion, shading, perspective via graphical projection, and dimensionality) on two datasets. We conducted a user study with a summary task in which 32 participants estimated the classification accuracy of an artificial neural network from the scatterplots. With Bayesian multilevel modeling, we capture the intricate visual effects and find that no single cue explains all the variance in estimation error. Visual motion cues generally reduce participants' estimation error; other cues, by contrast, may increase it. Using an HMD, adding visual motion cues, providing a third data dimension, or showing a more complicated dataset leads to longer response times. We speculate that most visual cues may not strongly affect perception in immersive analytics unless they change people's mental model of the data. In summary, by studying participants as they interpret the output of a complicated machine learning model, we advance our understanding of how to use visual cues in immersive analytics.
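
The abstract does not give the model specification; as a rough illustration of the kind of Bayesian multilevel model described, the sketch below regresses per-trial estimation error on binary cue indicators with varying intercepts per participant. The data file, column names, and priors are assumptions, not the authors' model.

```python
# Hedged sketch of a multilevel regression of estimation error on visual-cue
# conditions with per-participant varying intercepts, using PyMC.
import numpy as np
import pandas as pd
import pymc as pm

df = pd.read_csv("trials.csv")  # hypothetical file: one row per trial
p_idx = df["participant"].astype("category").cat.codes.values
X = df[["motion", "shading", "hmd", "three_d"]].values  # assumed 0/1 cue flags
y = df["abs_error"].values

with pm.Model() as model:
    # population-level effect of each cue
    beta = pm.Normal("beta", mu=0.0, sigma=1.0, shape=X.shape[1])
    # varying intercept per participant
    sigma_p = pm.HalfNormal("sigma_p", sigma=1.0)
    alpha_p = pm.Normal("alpha_p", mu=0.0, sigma=sigma_p, shape=p_idx.max() + 1)
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    mu = alpha_p[p_idx] + pm.math.dot(X, beta)
    pm.Normal("obs", mu=mu, sigma=sigma, observed=y)
    trace = pm.sample(1000, tune=1000)
```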

2.
IEEE Trans Vis Comput Graph ; 28(9): 3219-3234, 2022 Sep.
Article in English | MEDLINE | ID: mdl-33587700

ABSTRACT

The dominant markup language for Web visualizations, Scalable Vector Graphics (SVG), is comparatively easy to learn, and is open, accessible, customizable via CSS, and searchable via the DOM, with easy interaction handling and debugging. Because these attributes allow visualization creators to focus on design rather than implementation details, tools built on top of SVG, such as D3.js, are essential to the visualization community. However, slow SVG rendering can limit designs by effectively capping the number of on-screen data points, which can force visualization creators to switch to Canvas or WebGL. These are less flexible (e.g., no search or styling via CSS) and harder to learn. We introduce Scalable Scalable Vector Graphics (SSVG) to reduce these limitations and allow complex and smooth visualizations to be created with SVG. SSVG automatically translates interactive SVG visualizations into a dynamic virtual DOM (VDOM) by intercepting JavaScript function calls, bypassing the browser's slow 'to specification' rendering. Decoupling the SVG visualization specification from SVG rendering, and obtaining a dynamic VDOM, creates flexibility and opportunity for visualization system research. SSVG uses this flexibility to free up the main thread for more interactivity and renders the visualization with Canvas or WebGL on a web worker. Together, these concepts create a drop-in JavaScript library that can improve rendering performance by 3-9× with only one line of code added. To demonstrate applicability, we describe the use of SSVG on multiple example visualizations, including published visualization research. A free copy of this article, the collected data, and source code are available as open science at osf.io/ge8wp.
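
SSVG itself is a JavaScript library; the toy sketch below (in Python, purely as a language-neutral illustration) only shows the core idea the abstract describes: intercept attribute-setting calls into a virtual DOM, then flatten that virtual DOM into a draw list that a Canvas/WebGL-style renderer could consume on another thread. All names are hypothetical.

```python
# Conceptual sketch: a virtual DOM that records attribute updates instead of
# mutating a retained scene graph, plus a flattening pass for a fast renderer.
class VNode:
    def __init__(self, tag):
        self.tag = tag
        self.attrs = {}
        self.children = []

    def set_attribute(self, name, value):
        # Intercepted call: update the virtual node, not a real DOM element.
        self.attrs[name] = value

    def append_child(self, child):
        self.children.append(child)
        return child

def flatten(node, draw_list=None):
    """Walk the virtual DOM and emit primitives a Canvas/WebGL-style
    renderer could draw elsewhere."""
    if draw_list is None:
        draw_list = []
    if node.tag == "circle":
        draw_list.append(("circle",
                          float(node.attrs.get("cx", 0)),
                          float(node.attrs.get("cy", 0)),
                          float(node.attrs.get("r", 1))))
    for child in node.children:
        flatten(child, draw_list)
    return draw_list

svg = VNode("svg")
for i in range(3):
    dot = svg.append_child(VNode("circle"))
    dot.set_attribute("cx", i * 10)
    dot.set_attribute("r", 2)
print(flatten(svg))
```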

3.
IEEE Trans Vis Comput Graph ; 27(2): 347-357, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33048696

ABSTRACT

Tools and interfaces are increasingly expected to be synchronous and distributed to accommodate remote collaboration. Yet adoption of these techniques for data visualization is low, partly because development is difficult: existing collaboration software either does not support simultaneous interaction or requires expensive redevelopment of existing visualizations. We contribute VisConnect: a web-based, synchronous, distributed collaborative visualization system that supports most web-based SVG data visualizations, balances system safety with responsiveness, and supports simultaneous interaction from many collaborators. VisConnect works with existing visualization implementations with little-to-no code changes by synchronizing low-level JavaScript events across clients, so that visualization updates proceed transparently on every client. This is accomplished via a peer-to-peer system that establishes consensus among clients on the per-element sequence of events and uses a lock service to grant clients access to elements. We contribute collaborative extensions of traditional visualization interaction techniques, such as drag, brush, and lasso, and discuss different strategies for collaborative visualization interactions. To demonstrate the utility of VisConnect, we present novel examples of collaborative visualizations in the healthcare domain and of remote collaboration with annotation, and show in an e-learning case study with 22 participants that students found the ability to remotely collaborate on class activities helpful and enjoyable for understanding concepts. A free copy of this paper and source code are available on OSF at osf.io/ut7e6 and at visconnect.us.
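
VisConnect is likewise a JavaScript system; the following toy (again in Python, only to illustrate the coordination idea) sketches a lock service that grants an element to one client at a time and a per-element sequence number so every replica can apply events in the same order. Names and behavior are assumptions, not VisConnect's actual protocol.

```python
# Conceptual sketch: per-element locks plus per-element event ordering, so
# simultaneous low-level events from many clients are serialized consistently.
from collections import defaultdict

class LockService:
    def __init__(self):
        self.owner = {}                 # element id -> client id

    def acquire(self, element, client):
        holder = self.owner.setdefault(element, client)
        return holder == client        # granted only if free or already held

    def release(self, element, client):
        if self.owner.get(element) == client:
            del self.owner[element]

class EventLog:
    def __init__(self, locks):
        self.locks = locks
        self.seq = defaultdict(int)     # element id -> next sequence number
        self.log = defaultdict(list)    # element id -> ordered events

    def submit(self, element, client, event):
        if not self.locks.acquire(element, client):
            return None                 # another collaborator holds the element
        n = self.seq[element]
        self.seq[element] += 1
        self.log[element].append((n, client, event))
        return n                        # would be broadcast to peers

locks = LockService()
log = EventLog(locks)
print(log.submit("#brush", "alice", {"type": "mousedown", "x": 40, "y": 12}))
print(log.submit("#brush", "bob", {"type": "mousedown", "x": 10, "y": 80}))  # denied
```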

4.
Article in English | MEDLINE | ID: mdl-33283211

ABSTRACT

Interest is growing rapidly in using deep learning to classify biomedical images, and interpreting these deep-learned models is necessary for life-critical decisions and scientific discovery. Effective interpretation techniques accelerate biomarker discovery and provide new insights into the etiology, diagnosis, and treatment of disease. Most interpretation techniques aim to discover spatially-salient regions within images, but few techniques consider imagery with multiple channels of information. For instance, highly multiplexed tumor and tissue images have 30-100 channels and require interpretation methods that work across many channels to provide deep molecular insights. We propose a novel channel embedding method that extracts features from each channel. We then use these features to train a classifier for prediction. Using this channel embedding, we apply an interpretation method to rank the most discriminative channels. To validate our approach, we conduct an ablation study on a synthetic dataset. Moreover, we demonstrate that our method aligns with biological findings on highly multiplexed images of breast cancer cells while outperforming baseline pipelines. Code is available at https://sabdelmagid.github.io/miccai2020-project/.
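
As a hedged stand-in for the pipeline the abstract outlines (per-channel embedding, classifier, channel ranking), the sketch below uses per-channel summary statistics as the embedding, logistic regression as the classifier, and permutation importance as the interpretation step; none of these are the paper's actual components, and the synthetic data shapes are arbitrary.

```python
# Toy per-channel feature extraction, classification, and channel ranking.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

def channel_features(images):
    """images: (n_samples, n_channels, H, W) -> (n_samples, n_channels * 3)"""
    flat = images.reshape(images.shape[0], images.shape[1], -1)
    feats = np.stack([flat.mean(-1), flat.std(-1),
                      np.quantile(flat, 0.9, axis=-1)], axis=-1)
    return feats.reshape(images.shape[0], -1)

rng = np.random.default_rng(0)
X_img = rng.normal(size=(200, 8, 32, 32))   # synthetic 8-channel images
y = rng.integers(0, 2, size=200)
X_img[y == 1, 3] += 1.0                     # make channel 3 informative

X = channel_features(X_img)
clf = LogisticRegression(max_iter=1000).fit(X, y)
imp = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
per_channel = imp.importances_mean.reshape(8, 3).sum(axis=1)
print("channel ranking (most discriminative first):", np.argsort(per_channel)[::-1])
```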

5.
Article in English | MEDLINE | ID: mdl-30136985

ABSTRACT

Convolutional neural networks (CNNs) can successfully perform many computer vision tasks on images. For visualization, how do CNNs perform when applied to graphical perception tasks? We investigate this question by reproducing Cleveland and McGill's seminal 1984 experiments, which measured human perception efficiency for different visual encodings and defined elementary perceptual tasks for visualization. We measure the graphical perception capabilities of four network architectures on five different visualization tasks and compare them to existing and new human performance baselines. While CNNs can meet or outperform human task performance under limited circumstances, we find that CNNs are not currently a good model of human graphical perception. We present the results of these experiments to foster understanding of how CNNs succeed and fail when applied to data visualizations.
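
The sketch below is not the paper's experiment, only an illustration of the kind of setup it describes: synthetic two-bar charts, a ratio-estimation target in the spirit of Cleveland and McGill's elementary perceptual tasks, and a small CNN regressor. Image size, architecture, and training schedule are arbitrary choices (PyTorch).

```python
# Toy graphical-perception task: estimate the ratio of two bar heights.
import numpy as np
import torch
import torch.nn as nn

def make_chart(rng, size=64):
    img = np.zeros((1, size, size), dtype=np.float32)
    h1, h2 = rng.integers(10, size - 4, size=2)
    img[0, size - h1:, 10:20] = 1.0            # first bar
    img[0, size - h2:, 40:50] = 1.0            # second bar
    return img, min(h1, h2) / max(h1, h2)      # target: shorter / taller

rng = np.random.default_rng(0)
data = [make_chart(rng) for _ in range(512)]
X = torch.tensor(np.stack([d[0] for d in data]))
y = torch.tensor([d[1] for d in data], dtype=torch.float32).unsqueeze(1)

model = nn.Sequential(
    nn.Conv2d(1, 8, 5, stride=2), nn.ReLU(),
    nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 13 * 13, 64), nn.ReLU(), nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(20):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print("final training MSE:", float(loss))
```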

6.
IEEE Trans Vis Comput Graph ; 23(1): 571-580, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27875172

ABSTRACT

Information hierarchies are difficult to express when real-world space or time constraints force traversing the hierarchy in linear presentations, such as in educational books and classroom courses. We present booc.io, which allows linear and non-linear presentation and navigation of educational concepts and material. To support a breadth of material for each concept, booc.io is Web-based, which allows adding material such as lecture slides, book chapters, videos, and LTIs. A visual interface assists in creating the needed hierarchical structures. The goals of our system were formed in expert interviews, and we explain how our design meets these goals. We adapt a real-world course into booc.io and perform an introductory qualitative evaluation with students.
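
A minimal sketch, assuming a simple tree data model rather than booc.io's actual implementation: each concept node carries attached materials and child concepts, and a depth-first walk produces one possible linear presentation order for the same hierarchy.

```python
# Toy concept hierarchy with materials and a linearization for linear reading.
from dataclasses import dataclass, field

@dataclass
class Concept:
    title: str
    materials: list = field(default_factory=list)   # e.g. slide/video URLs
    children: list = field(default_factory=list)

def linearize(concept, order=None):
    """Depth-first traversal: one linear (book/course-style) reading order."""
    if order is None:
        order = []
    order.append(concept.title)
    for child in concept.children:
        linearize(child, order)
    return order

course = Concept("Algorithms", children=[
    Concept("Sorting", materials=["slides_week1.pdf"],
            children=[Concept("Quicksort"), Concept("Merge sort")]),
    Concept("Graphs", materials=["lecture2.mp4"]),
])
print(linearize(course))
```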

7.
IEEE Trans Pattern Anal Mach Intell ; 37(9): 1792-1805, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26353127

ABSTRACT

Improving the quality of degraded images is a key problem in image processing, but the breadth of the problem leads to domain-specific approaches for tasks such as super-resolution and compression artifact removal. Recent approaches have shown that a general approach is possible by learning application-specific models from examples; however, learning models sophisticated enough to generate high-quality images is computationally expensive, and so specific per-application or per-dataset models are impractical. To solve this problem, we present an efficient semi-local approximation scheme to large-scale Gaussian processes. This allows efficient learning of task-specific image enhancements from example images without reducing quality. As such, our algorithm can be easily customized to specific applications and datasets, and we show the efficiency and effectiveness of our approach across five domains: single-image super-resolution for scene, human face, and text images, and artifact removal in JPEG- and JPEG 2000-encoded images.
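
The paper's semi-local approximation scheme is not specified in the abstract; the toy below only illustrates the general local-GP idea of fitting a Gaussian process to the nearest training patches of each test patch rather than forming one global kernel matrix. The kernel, patch size, and synthetic data are assumptions.

```python
# Toy local Gaussian-process regression over image patches.
import numpy as np

def rbf(A, B, ls=5.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

def local_gp_predict(x, X_train, y_train, k=50, noise=1e-2):
    idx = np.argsort(((X_train - x) ** 2).sum(1))[:k]   # k nearest patches
    Xl, yl = X_train[idx], y_train[idx]
    K = rbf(Xl, Xl) + noise * np.eye(k)
    alpha = np.linalg.solve(K, yl)
    return rbf(x[None, :], Xl) @ alpha                  # GP posterior mean

rng = np.random.default_rng(0)
X_train = rng.normal(size=(2000, 25))      # degraded 5x5 patches (flattened)
y_train = X_train.mean(1) + 0.1 * rng.normal(size=2000)  # stand-in "clean" value
x_test = rng.normal(size=25)
print(local_gp_predict(x_test, X_train, y_train))
```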
