Results 1 - 7 of 7
1.
IEEE Trans Vis Comput Graph ; 23(1): 301-310, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27875146

ABSTRACT

The analysis of eye tracking data often requires the annotation of areas of interest (AOIs) to derive semantic interpretations of human viewing behavior during experiments. This annotation is typically the most time-consuming step of the analysis process. Especially for data from wearable eye tracking glasses, every independently recorded video has to be annotated individually and corresponding AOIs between videos have to be identified. We provide a novel visual analytics approach to ease this annotation process by image-based, automatic clustering of eye tracking data integrated into an interactive labeling and analysis system. The annotation and analysis are tightly coupled by multiple linked views that allow for a direct interpretation of the labeled data in the context of the recorded video stimuli. The components of our analytics environment were developed with a user-centered design approach in close cooperation with an eye tracking expert. We demonstrate our approach with eye tracking data from a real experiment and compare it to an analysis of the data by manual annotation of dynamic AOIs. Furthermore, we conducted an expert user study with six external eye tracking researchers to collect feedback and identify analysis strategies they used while working with our application.
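The abstract does not specify the clustering algorithm, so the image-based grouping of gaze data can only be sketched; plain k-means over flattened gaze-point image patches is a hypothetical stand-in:

```python
import numpy as np

def cluster_gaze_patches(patches, k=3, iters=20, seed=0):
    """Toy k-means over flattened gaze-point image patches.
    Assumption: the paper's image-based clustering is unspecified;
    k-means on raw pixels is a plausible, simple stand-in."""
    X = np.stack([p.ravel().astype(float) for p in patches])
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each patch to its nearest center (squared distance)
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # move each center to the mean of its members
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels
```

Patches with similar image content around the gaze point end up in the same cluster, which is the precondition for labeling an AOI once instead of per video frame.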


Subject(s)
Computer Graphics; Eye Movements/physiology; Image Processing, Computer-Assisted/methods; Video Recording; Algorithms; Humans
2.
IEEE Trans Vis Comput Graph ; 23(1): 421-430, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27875158

ABSTRACT

Visual search can be time-consuming, especially if the scene contains a large number of possibly relevant objects. An instance of this problem is present when using geographic or schematic maps with many different elements representing cities, streets, sights, and the like. Unless the map is well-known to the reader, the full map or at least large parts of it must be scanned to find the elements of interest. In this paper, we present a controlled eye-tracking study (30 participants) to compare four variants of map annotation with labels: within-image annotations, grid reference annotation, directional annotation, and miniature annotation. Within-image annotation places labels directly within the map without any further search support. Grid reference annotation corresponds to the traditional approach known from atlases. Directional annotation utilizes a label in combination with an arrow pointing in the direction of the label within the map. Miniature annotation shows a miniature grid to guide the reader to the area of the map in which the label is located. The study results show that within-image annotation is outperformed by all other annotation approaches. Best task completion times are achieved with miniature annotation. The analysis of eye-movement data reveals that participants applied significantly different visual task solution strategies for the different visual annotations.

3.
IEEE Trans Vis Comput Graph ; 22(1): 1005-1014, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26529744

ABSTRACT

We present a new visualization approach for displaying eye tracking data from multiple participants. We aim to show the spatio-temporal data of the gaze points in the context of the underlying image or video stimulus without occlusion. Our technique, denoted as gaze stripes, does not require the explicit definition of areas of interest but directly uses the image data around the gaze points, similar to thumbnails for images. A gaze stripe consists of a sequence of such gaze point images, oriented along a horizontal timeline. By displaying multiple aligned gaze stripes, it is possible to analyze and compare the viewing behavior of the participants over time. Since the analysis is carried out directly on the image data, no expensive post-processing or manual annotation is required. Therefore, not only patterns and outliers in the participants' scanpaths can be detected, but the context of the stimulus is available as well. Furthermore, our approach is especially well suited for dynamic stimuli due to the non-aggregated temporal mapping. Complementary views, i.e., markers, notes, screenshots, histograms, and results from automatic clustering, can be added to the visualization to display analysis results. We illustrate the usefulness of our technique on static and dynamic stimuli. Furthermore, we discuss the limitations and scalability of our approach in comparison to established visualization techniques.
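The thumbnail extraction behind a gaze stripe can be sketched as follows (assumptions: frames as NumPy arrays, one gaze point per frame, and crops clamped to the frame border; the paper's exact cropping rules are not given):

```python
import numpy as np

def gaze_stripe(frames, gaze_points, size=32):
    """Crop a size x size thumbnail around each gaze point and
    concatenate the thumbnails along a horizontal timeline."""
    half = size // 2
    thumbs = []
    for frame, (gx, gy) in zip(frames, gaze_points):
        h, w = frame.shape[:2]
        # clamp the crop center so the window stays inside the frame
        x = min(max(int(gx), half), w - half)
        y = min(max(int(gy), half), h - half)
        thumbs.append(frame[y - half:y + half, x - half:x + half])
    return np.concatenate(thumbs, axis=1)
```

Stacking one such stripe per participant vertically gives the aligned comparison view the abstract describes, with no AOI definitions involved.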


Subject(s)
Computer Graphics; Eye Movement Measurements; Image Processing, Computer-Assisted/methods; Eye Movements/physiology; Humans; Spatio-Temporal Analysis
4.
IEEE Trans Vis Comput Graph ; 20(11): 1590-1603, 2014 Nov.
Article in English | MEDLINE | ID: mdl-26355337

ABSTRACT

We present a visual representation for dynamic, weighted graphs based on the concept of adjacency lists. Two orthogonal axes are used: one for all nodes of the displayed graph, the other for the corresponding links. Colors and labels are employed to identify the nodes. The usage of color allows us to scale the visualization to single pixel level for large graphs. In contrast to other techniques, we employ an asymmetric mapping that results in an aligned and compact representation of links. Our approach is independent of the specific properties of the graph to be visualized, but certain graphs and tasks benefit from the asymmetry. As we show in our results, the strength of our technique is the visualization of dynamic graphs. In particular, sparse graphs benefit from the compact representation. Furthermore, our approach uses visual encoding by size to represent weights and therefore allows easy quantification and comparison. We evaluate our approach in a quantitative user study that confirms the suitability for dynamic and weighted graphs. Finally, we demonstrate our approach for two examples of dynamic graphs.
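The row data structure behind the aligned, compact link representation can be sketched like this; the sorting and left-packing of cells are assumptions about the layout, not the paper's exact rules:

```python
def adjacency_list_rows(edges):
    """Pack each node's outgoing links into one left-aligned row of
    (neighbor, weight) cells. In the visualization, the row index gives
    the position on the node axis, the cell position the link axis, and
    the weight maps to cell size for easy quantification."""
    rows = {}
    for u, v, w in edges:
        rows.setdefault(u, []).append((v, w))
    for cells in rows.values():
        cells.sort()  # deterministic cell order within a row (assumption)
    return rows
```

Because each row is packed independently of the target nodes' positions, the mapping is asymmetric, which is what keeps sparse rows compact.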

5.
IEEE Trans Vis Comput Graph ; 17(12): 1949-1958, 2011 Dec.
Article in English | MEDLINE | ID: mdl-22034312

ABSTRACT

A new type of glyph is introduced to visualize unsteady flow with static images, allowing easier analysis of time-dependent phenomena compared to animated visualization. Adopting the visual metaphor of radar displays, this glyph represents flow directions by angles and time by radius in spherical coordinates. Dense seeding of flow radar glyphs on the flow domain naturally lends itself to multi-scale visualization: zoomed-out views show aggregated overviews, zooming-in enables detailed analysis of spatial and temporal characteristics. Uncertainty visualization is supported by extending the glyph to display possible ranges of flow directions. The paper focuses on 2D flow, but includes a discussion of 3D flow as well. Examples from CFD and the field of stochastic hydrogeology show that it is easy to discriminate regions of different spatiotemporal flow behavior and regions of different uncertainty variations in space and time. The examples also demonstrate that parameter studies can be analyzed because the glyph design facilitates comparative visualization. Finally, different variants of interactive GPU-accelerated implementations are discussed.
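The angle-for-direction, radius-for-time mapping can be sketched for the 2D case; the inner and outer radii are hypothetical parameters, not values from the paper:

```python
import math

def flow_radar_glyph(directions, r_min=0.2, r_max=1.0):
    """Map a time series of 2D flow directions (radians) to glyph
    polyline points: the angle encodes the flow direction at that
    time step, the radius encodes time, growing from r_min outward
    to r_max as in a radar display."""
    n = len(directions)
    points = []
    for i, theta in enumerate(directions):
        r = r_min + (r_max - r_min) * i / max(n - 1, 1)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```

Connecting the returned points yields one static glyph per seed position; a spiral-shaped polyline then reads as a steadily rotating flow direction over time.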

6.
IEEE Trans Vis Comput Graph ; 17(8): 1148-1163, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21659680

ABSTRACT

This paper presents an acceleration scheme for the numerical computation of sets of trajectories in vector fields or iterated solutions in maps, possibly with simultaneous evaluation of quantities along the curves such as integrals or extrema. It addresses cases with a dense evaluation on the domain, where straightforward approaches are subject to redundant calculations. These are avoided by first calculating short solutions for the whole domain. From these, longer solutions are then constructed in a hierarchical manner until the designated length is achieved. While the computational complexity of the straightforward approach depends linearly on the length of the solutions, the computational cost with the proposed scheme grows only logarithmically with increasing length. Due to independence of subtasks and memory locality, our algorithm is suitable for parallel execution on many-core architectures like GPUs. The trade-offs of the method (lower accuracy and increased memory consumption) are analyzed, including error order as well as numerical error for discrete computation grids. The usefulness and flexibility of the scheme are demonstrated with two example applications: line integral convolution and the computation of the finite-time Lyapunov exponent. Finally, results and performance measurements of our GPU implementation are presented for both synthetic and simulated vector fields from computational fluid dynamics.
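The hierarchical construction can be sketched in 1D: compute a short flow map once for every grid point, then repeatedly compose the map with itself via interpolation, doubling the covered integration length per pass. The velocity field and the single Euler step are toy assumptions for illustration:

```python
import numpy as np

N = 256
x = np.linspace(0.0, 1.0, N)

def v(pos):
    return 0.1 * np.sin(2.0 * np.pi * pos)  # toy velocity field (assumption)

h = 0.01
phi = x + h * v(x)  # short flow map: one Euler step for every grid point

def compose(phi_map, grid):
    # phi_{2T}(x) = phi_T(phi_T(x)): evaluate the map at its own end
    # points by linear interpolation on the grid -- this interpolation
    # is the source of the method's accuracy/memory trade-off
    return np.interp(phi_map, grid, phi_map)

# k self-compositions yield a map over 2**k short steps, so the cost
# grows only logarithmically with the trajectory length
phi_long = phi
for _ in range(5):
    phi_long = compose(phi_long, x)
```

After five compositions, `phi_long` approximates 32 Euler steps for all 256 grid points while each pass touches every point only once.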

7.
IEEE Trans Vis Comput Graph ; 17(6): 781-794, 2011 Jun.
Article in English | MEDLINE | ID: mdl-20733229

ABSTRACT

This paper generalizes the concept of Lagrangian coherent structures, which is known for its potential to visualize coherent regions in vector fields and to distinguish them from each other. In particular, we extend the concept of the flow map to generic mappings of coordinates. As the major application of this generalization, we present a semiglobal method for visualizing coherent structures in symmetric second order tensor fields. We demonstrate its usefulness with examples from DT-MRI, uncovering anatomical structures in linearly anisotropic regions not amenable to local feature criteria. To further exemplify the suitability of our concept, we also present its application to stress tensor fields. Finally, an accelerated implementation utilizing GPUs is presented.
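The generalized flow-map idea can be sketched as follows: given any mapping of coordinates, an FTLE-style stretching exponent falls out of the Cauchy-Green tensor of its finite-difference Jacobian (2D case; the paper's semiglobal tensor-field construction is more involved than this sketch):

```python
import numpy as np

def stretch_exponent(phi, x, eps=1e-4):
    """FTLE-style coherence measure for a generic 2D mapping phi:
    0.5 * log of the largest eigenvalue of the Cauchy-Green tensor,
    with the Jacobian estimated by central differences."""
    J = np.empty((2, 2))
    for j in range(2):
        d = np.zeros(2)
        d[j] = eps
        J[:, j] = (phi(x + d) - phi(x - d)) / (2.0 * eps)
    C = J.T @ J  # Cauchy-Green deformation tensor of the mapping
    return 0.5 * np.log(np.linalg.eigvalsh(C)[-1])
```

Ridges of this quantity over the domain separate regions that the mapping keeps coherent from regions it stretches apart, which is what makes the construction applicable beyond classical flow maps.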
