1.
IEEE Trans Vis Comput Graph ; 29(9): 3788-3798, 2023 Sep.
Article in English | MEDLINE | ID: mdl-35486551

ABSTRACT

The visualization of results while the simulation is running is increasingly common in extreme scale computing environments. We present a novel approach for in situ generation of image databases to achieve cost savings on supercomputers. Our approach, a hybrid between traditional inline and in transit techniques, dynamically distributes visualization tasks between simulation nodes and visualization nodes, using probing as a basis to estimate rendering cost. Our hybrid design differs from previous works in that it creates opportunities to minimize idle time from four fundamental types of inefficiency: variability, limited scalability, overhead, and rightsizing. We demonstrate our results by comparing our method against both inline and in transit methods for a variety of configurations, including two simulation codes and a scaling study that goes above 19,000 cores. Our findings show that our approach is superior in many configurations. As in situ visualization becomes increasingly ubiquitous, we believe our technique could lead to significant amounts of reclaimed cycles on supercomputers.
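
Below is a minimal sketch, not taken from the paper, of how probing could drive the inline/in-transit split: a few probe images are rendered to estimate the cost per image, and the simulation nodes keep only as many renders as fit within an assumed idle-time budget, sending the rest to visualization nodes. The renderer, budget, and cost model here are illustrative assumptions.

```python
"""Illustrative sketch of probing-based task assignment for hybrid in situ
visualization. Not the paper's implementation; the renderer, the budget,
and the cost heuristic are assumptions made for illustration."""

import random
import time


def probe_render_cost(render_one, cameras, n_probe=3):
    """Render a few probe images and return the mean wall-clock cost per image."""
    probes = random.sample(cameras, min(n_probe, len(cameras)))
    start = time.perf_counter()
    for cam in probes:
        render_one(cam)
    return (time.perf_counter() - start) / len(probes)


def split_work(cameras, cost_per_image, sim_budget):
    """Keep as many renders inline as fit in the simulation nodes' idle-time
    budget; route the remainder in transit to dedicated visualization nodes."""
    n_inline = min(len(cameras), int(sim_budget // cost_per_image))
    return cameras[:n_inline], cameras[n_inline:]


if __name__ == "__main__":
    cameras = list(range(100))                                 # stand-in camera database
    render_one = lambda cam: sum(i * i for i in range(20000))  # fake renderer
    cost = probe_render_cost(render_one, cameras)
    inline, in_transit = split_work(cameras, cost, sim_budget=0.05)
    print(f"~{cost:.5f}s per image: {len(inline)} inline, {len(in_transit)} in transit")
```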

2.
IEEE Comput Graph Appl ; 39(6): 76-85, 2019.
Article in English | MEDLINE | ID: mdl-31714213

ABSTRACT

In situ visualization is an increasingly important approach for computational science, as it can address limitations on leading-edge high-performance computers and can also provide increased spatio-temporal resolution. However, there are many open research issues with effective in situ processing. This article describes the challenges identified by a recent Dagstuhl Seminar on the topic.

3.
IEEE Trans Vis Comput Graph ; 25(7): 2349-2361, 2019 Jul.
Article in English | MEDLINE | ID: mdl-29994004

ABSTRACT

We present an algorithm for parallel volume rendering that is a hybrid between classical object order and image order techniques. The algorithm operates on unstructured grids (and structured ones), and thus can deal with block boundaries that interleave in complex ways. It also deals effectively with cases that are prone to load imbalance, i.e., cases where cell sizes differ dramatically, either because of the nature of the input data or because of the effects of the camera transformation. The algorithm divides work over resources such that each phase of its processing is bounded in the amount of computation it can perform. We demonstrate its efficacy through a series of studies, varying camera position, data set size, transfer function, image size, and processor count. At their largest, our experiments scaled to 8,192 processors and operated on data sets with more than one billion cells. Overall, we find that our hybrid algorithm performs well in all cases. This is because our algorithm naturally adapts its computation based on workload, and can operate like either an object order technique or an image order technique in scenarios where those techniques are efficient.
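
The sketch below, which is not the paper's algorithm, illustrates the underlying idea of keeping each phase bounded: cells with small screen-space footprints are handled object order by their owning rank, while oversized cells are redistributed for image order processing. The per-cell sample estimate and the threshold are assumptions.

```python
"""Illustrative sketch of a hybrid object-order / image-order work split for
parallel volume rendering. Not the paper's algorithm; the per-cell sample
estimate and the threshold are assumptions made for illustration."""


def classify_cells(cells, samples_per_cell, threshold):
    """Cells with small screen-space sample counts stay with their owning rank
    (object order); oversized cells go to the ranks owning the image tiles they
    project onto (image order)."""
    small, large = [], []
    for cell in cells:
        (large if samples_per_cell(cell) > threshold else small).append(cell)
    return small, large


def work_per_phase(object_order, image_order, samples_per_cell):
    """Total sampling work in each phase: the quantity the hybrid keeps bounded."""
    return (sum(samples_per_cell(c) for c in object_order),
            sum(samples_per_cell(c) for c in image_order))


if __name__ == "__main__":
    cells = list(range(100))                               # toy cell ids
    samples = lambda c: 100_000 if c % 25 == 0 else 50     # a few huge footprints
    object_order, image_order = classify_cells(cells, samples, threshold=10_000)
    print(len(object_order), "cells object order,", len(image_order), "cells image order")
    print("samples per phase:", work_per_phase(object_order, image_order, samples))
```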

4.
IEEE Comput Graph Appl ; 36(3): 48-58, 2016.
Article in English | MEDLINE | ID: mdl-28113158

ABSTRACT

One of the most critical challenges for high-performance computing (HPC) scientific visualization is execution on massively threaded processors. Of the many fundamental changes we are seeing in HPC systems, one of the most profound is a reliance on new processor types optimized for execution bandwidth over latency hiding. Our current production scientific visualization software is not designed for these new types of architectures. To address this issue, the VTK-m framework serves as a container for algorithms, provides flexible data representation, and simplifies the design of visualization algorithms on new and future computer architectures.
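
As a rough illustration of the execution-bandwidth-oriented style such frameworks target, the sketch below expresses a per-element kernel as a side-effect-free function over an array, which a data-parallel scheduler can map across thousands of threads. This is plain Python/NumPy chosen for brevity, not VTK-m's actual C++ API, and the names are illustrative only.

```python
"""Conceptual sketch of the fine-grained data-parallel pattern that massively
threaded visualization frameworks target. Plain Python/NumPy, not VTK-m's API."""

import numpy as np


def per_point_magnitude(vectors):
    """A per-element kernel: each output depends on exactly one input element,
    so every element can be processed by an independent thread."""
    return np.sqrt(np.sum(vectors * vectors, axis=1))


if __name__ == "__main__":
    velocity = np.random.default_rng(0).normal(size=(1_000_000, 3))  # toy field
    speed = per_point_magnitude(velocity)      # one independent task per point
    print(speed.shape, float(speed.max()))
```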

5.
IEEE Trans Vis Comput Graph ; 20(6): 893-906, 2014 Jun.
Article in English | MEDLINE | ID: mdl-26357306

ABSTRACT

This paper extends and evaluates a family of dynamic ray scheduling algorithms that can be performed in situ on large distributed memory parallel computers. The key idea is to consider both ray state and data accesses when scheduling ray computations. We compare three instances of this family of algorithms against two traditional statically scheduled schemes. We show that our dynamic scheduling approach can render data sets that are larger than aggregate system memory and that cannot be rendered by existing statically scheduled ray tracers. For smaller problems that fit in aggregate memory but are larger than typical shared memory, our dynamic approach is competitive with the best static scheduling algorithm.
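
A minimal sketch of the general idea, not the paper's scheduler: rays queue on the data block they need next, and the most-demanded block is processed first, so each (potentially out-of-core) data load is amortized over many queued rays. The ray and block representations are assumptions.

```python
"""Illustrative sketch of dynamic ray scheduling driven by both ray state and
data accesses. Not the paper's scheduler; the toy rays, blocks, and advance
function are assumptions made for illustration."""

from collections import defaultdict


def schedule_rays(rays, next_block_of, advance):
    """Process rays block by block, always loading the most-demanded block next."""
    queues = defaultdict(list)          # block id -> rays waiting on that block
    for ray in rays:
        queues[next_block_of(ray)].append(ray)

    while queues:
        block = max(queues, key=lambda b: len(queues[b]))   # highest demand
        for ray in queues.pop(block):                       # "load" block once
            new_ray, done = advance(ray, block)
            if not done:
                queues[next_block_of(new_ray)].append(new_ray)


if __name__ == "__main__":
    # Toy rays: (id, block index); each ray terminates after crossing block 3.
    rays = [(i, 0) for i in range(8)]
    next_block = lambda ray: ray[1]
    advance = lambda ray, block: ((ray[0], ray[1] + 1), ray[1] + 1 > 3)
    schedule_rays(rays, next_block, advance)
    print("all rays traced")
```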

6.
IEEE Trans Vis Comput Graph ; 19(12): 2703-12, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24051837

ABSTRACT

Numerical ensemble forecasting is a powerful tool that drives many risk analysis efforts and decision making tasks. These ensembles are composed of individual simulations that each uniquely model a possible outcome for a common event of interest: e.g., the direction and force of a hurricane, or the path of travel and mortality rate of a pandemic. This paper presents a new visual strategy to help quantify and characterize a numerical ensemble's predictive uncertainty: i.e., the ability for ensemble constituents to accurately and consistently predict an event of interest based on ground truth observations. Our strategy employs a Bayesian framework to first construct a statistical aggregate from the ensemble. We extend the information obtained from the aggregate with a visualization strategy that characterizes predictive uncertainty at two levels: at a global level, which assesses the ensemble as a whole, as well as a local level, which examines each of the ensemble's constituents. Through this approach, modelers are able to better assess the predictive strengths and weaknesses of the ensemble as a whole, as well as individual models. We apply our method to two datasets to demonstrate its broad applicability.
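
The sketch below is a simplified stand-in for the paper's Bayesian framework: it builds a pointwise mean/spread aggregate from the ensemble and reports one global score for the ensemble as a whole plus per-member local errors against ground truth, mirroring the two levels of assessment described above.

```python
"""Illustrative sketch of summarizing an ensemble's predictive uncertainty
against ground-truth observations: a simple mean/spread aggregate with global
and per-member errors, not the paper's Bayesian framework."""

import numpy as np


def aggregate(ensemble):
    """Statistical aggregate of the ensemble: pointwise mean and spread."""
    return ensemble.mean(axis=0), ensemble.std(axis=0)


def predictive_summary(ensemble, observations):
    """Global score for the whole ensemble plus a local, per-member error field."""
    mean, spread = aggregate(ensemble)
    global_rmse = float(np.sqrt(np.mean((mean - observations) ** 2)))
    member_errors = np.abs(ensemble - observations)     # shape: (members, points)
    return global_rmse, spread, member_errors


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    truth = np.sin(np.linspace(0.0, 2.0 * np.pi, 50))         # "observations"
    members = truth + rng.normal(scale=0.2, size=(10, 50))    # 10-member ensemble
    rmse, spread, errors = predictive_summary(members, truth)
    print(f"global RMSE {rmse:.3f}, least consistent member {errors.mean(axis=1).argmax()}")
```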


Subjects
Algorithms; Bayes Theorem; Computer Graphics; Data Interpretation, Statistical; Models, Statistical; Pattern Recognition, Automated/methods; User-Computer Interface; Computer Simulation; Reproducibility of Results; Sensitivity and Specificity
7.
IEEE Comput Graph Appl ; 32(4): 34-45, 2012.
Article in English | MEDLINE | ID: mdl-24806631

ABSTRACT

Visualization and data analysis are crucial in analyzing and understanding a turbulent-flow simulation with 4,096³ cells per time slice (68 billion cells) and 17 time slices (one trillion total cells). The visualization techniques used help scientists investigate the dynamics of intense events individually and as these events form clusters.

8.
IEEE Trans Vis Comput Graph ; 17(11): 1702-13, 2011 Nov.
Article in English | MEDLINE | ID: mdl-21885895

ABSTRACT

Streamline computation in a very large vector field data set represents a significant challenge due to the nonlocal and data-dependent nature of streamline integration. In this paper, we conduct a study of the performance characteristics of hybrid parallel programming and execution as applied to streamline integration on a large, multicore platform. With multicore processors now prevalent in clusters and supercomputers, there is a need to understand the impact of these hybrid systems in order to make the best implementation choice. We use two MPI-based distribution approaches based on established parallelization paradigms, parallelize over seeds and parallelize over blocks, and present a novel MPI-hybrid algorithm for each approach to compute streamlines. Our findings indicate that the work sharing between cores in the proposed MPI-hybrid parallel implementation results in much improved performance and consumes less communication and I/O bandwidth than a traditional, nonhybrid distributed implementation.
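
For orientation, here is a toy sketch of the two distribution paradigms named above; the ranks are simulated in-process, the vector field is a synthetic rotation, and the on-node, thread-level work sharing of the MPI-hybrid algorithms is omitted.

```python
"""Illustrative sketch of the two distribution paradigms for parallel streamline
computation. Ranks are simulated in-process over a toy rotational field; this is
not the paper's MPI-hybrid implementation."""

import numpy as np


def advect(seed, steps=200, h=0.05):
    """Integrate one streamline through a toy rotational field (forward Euler)."""
    p = np.asarray(seed, dtype=float)
    path = [p.copy()]
    for _ in range(steps):
        p = p + h * np.array([-p[1], p[0]])   # velocity sampled at p
        path.append(p.copy())
    return np.array(path)


def partition_over_seeds(seeds, n_ranks):
    """Parallelize over seeds: rank r advects seeds[r::n_ranks] end to end and
    loads whatever data blocks the integration touches."""
    return {r: seeds[r::n_ranks] for r in range(n_ranks)}


def partition_over_blocks(blocks, n_ranks):
    """Parallelize over blocks: rank r owns blocks[r::n_ranks], advects any
    streamline while it stays inside them, and hands it off at block boundaries."""
    return {r: blocks[r::n_ranks] for r in range(n_ranks)}


if __name__ == "__main__":
    seeds = [(np.cos(t), np.sin(t)) for t in np.linspace(0.0, 3.0, 16)]
    work = partition_over_seeds(seeds, n_ranks=4)
    lines = {r: [advect(s) for s in assigned] for r, assigned in work.items()}
    print({r: len(v) for r, v in lines.items()})
```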

9.
IEEE Comput Graph Appl ; 30(3): 32-44, 2010.
Article in English | MEDLINE | ID: mdl-20650716

ABSTRACT

A hybrid parallel and out-of-core algorithm pads blocks from a structured grid with layers of ghost data from adjacent blocks. This enables end-to-end streaming computations on very large data sets that gracefully adapt to available computing resources, from a single-processor machine to parallel visualization clusters.
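
A minimal sketch of the ghost-padding step, with the global array held in memory for clarity; the out-of-core streaming and the parallel distribution of blocks are omitted, and the 2D block layout is an assumption.

```python
"""Illustrative sketch of padding structured-grid blocks with a layer of ghost
data from neighboring blocks. The whole grid is in memory here for clarity; the
out-of-core and parallel aspects of the actual algorithm are omitted."""

import numpy as np


def pad_block_with_ghosts(grid, i0, i1, j0, j1, ghost=1):
    """Return block [i0:i1, j0:j1] extended by up to `ghost` layers of data from
    adjacent blocks, clamped at the domain boundary."""
    gi0, gi1 = max(i0 - ghost, 0), min(i1 + ghost, grid.shape[0])
    gj0, gj1 = max(j0 - ghost, 0), min(j1 + ghost, grid.shape[1])
    return grid[gi0:gi1, gj0:gj1]


if __name__ == "__main__":
    grid = np.arange(64, dtype=float).reshape(8, 8)   # toy structured grid
    block = pad_block_with_ghosts(grid, 2, 4, 2, 4)   # interior 2x2 block
    print(block.shape)                                # (4, 4): one ghost layer per side
```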

10.
IEEE Comput Graph Appl ; 30(3): 22-31, 2010.
Article in English | MEDLINE | ID: mdl-20650715

ABSTRACT

This article presents the results of experiments studying how the pure-parallelism paradigm scales to massive data sets, with runs using 16,000 or more cores on trillion-cell meshes, the largest data sets published to date in the visualization literature. The findings on scaling characteristics and bottlenecks contribute to understanding how pure parallelism will perform in the future.

11.
Procedia Comput Sci ; 1(1): 1757-1764, 2010 May.
Article in English | MEDLINE | ID: mdl-23762211

ABSTRACT

Knowledge discovery from large and complex scientific data is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for effective data analysis and data exploration methods and tools. The combination and close integration of methods from scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management, supports knowledge discovery from multi-dimensional scientific data. This paper surveys two distinct applications in developmental biology and accelerator physics, illustrating the effectiveness of the described approach.
