1.
Proc Natl Acad Sci U S A ; 119(15): e2113561119, 2022 Apr 12.
Article in English | MEDLINE | ID: mdl-35394862

ABSTRACT

Short-term probabilistic forecasts of the trajectory of the COVID-19 pandemic in the United States have served as a visible and important communication channel between the scientific modeling community and both the general public and decision-makers. Forecasting models provide specific, quantitative, and evaluable predictions that inform short-term decisions such as healthcare staffing needs, school closures, and allocation of medical supplies. Starting in April 2020, the US COVID-19 Forecast Hub (https://covid19forecasthub.org/) collected, disseminated, and synthesized tens of millions of specific predictions from more than 90 different academic, industry, and independent research groups. A multimodel ensemble forecast that combined predictions from dozens of groups every week provided the most consistently accurate probabilistic forecasts of incident deaths due to COVID-19 at the state and national level from April 2020 through October 2021. The performance of 27 individual models that submitted complete forecasts of COVID-19 deaths consistently throughout this year showed high variability in forecast skill across time, geospatial units, and forecast horizons. Two-thirds of the models evaluated showed better accuracy than a naïve baseline model. Forecast accuracy degraded as models made predictions further into the future, with probabilistic error at a 20-wk horizon three to five times larger than when predicting at a 1-wk horizon. This project underscores the role that collaboration and active coordination between governmental public-health agencies, academic modeling teams, and industry partners can play in developing modern modeling capabilities to support local, state, and federal response to outbreaks.


Subjects
COVID-19, COVID-19/mortality, Data Accuracy, Forecasting, Humans, Pandemics, Probability, Public Health/trends, United States/epidemiology
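
The ensembling step described above can be made concrete with a small sketch: each model submits forecasts at a shared set of quantile levels, and the ensemble takes the median across models at each level. The numbers below are illustrative, not actual Forecast Hub submissions.

import numpy as np

# Each model predicts weekly incident deaths at a shared set of quantile
# levels; the ensemble takes the median across models at each level.
quantile_levels = [0.025, 0.25, 0.5, 0.75, 0.975]
model_forecasts = np.array([   # rows: models, columns: quantile levels
    [120, 180, 230, 290, 380],
    [100, 170, 220, 300, 410],
    [140, 200, 250, 310, 400],
])

ensemble = np.median(model_forecasts, axis=0)
for q, v in zip(quantile_levels, ensemble):
    print(f"ensemble q={q}: {v:.0f} deaths")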
2.
Proc Natl Acad Sci U S A ; 119(7), 2022 Feb 15.
Article in English | MEDLINE | ID: mdl-35105729

ABSTRACT

Forecasting the burden of COVID-19 has been impeded by limitations in data, with case reporting biased by testing practices, death counts lagging far behind infections, and hospital census reflecting time-varying patient access, admission criteria, and demographics. Here, we show that hospital admissions coupled with mobility data can reliably predict severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) transmission rates and healthcare demand. Using a forecasting model that has guided mitigation policies in Austin, TX, we estimate that the local reproduction number had an initial 7-d average of 5.8 (95% credible interval [CrI]: 3.6 to 7.9) and reached a low of 0.65 (95% CrI: 0.52 to 0.77) after the summer 2020 surge. Estimated case detection rates ranged from 17.2% (95% CrI: 11.8 to 22.1%) at the outset to a high of 70% (95% CrI: 64 to 80%) in January 2021, and infection prevalence remained above 0.1% between April 2020 and March 1, 2021, peaking at 0.8% (0.7-0.9%) in early January 2021. As precautionary behaviors increased safety in public spaces, the relationship between mobility and transmission weakened. We estimate that mobility-associated transmission was 62% (95% CrI: 52 to 68%) lower in February 2021 compared to March 2020. In a retrospective comparison, the 95% CrIs of our 1, 2, and 3 wk ahead forecasts contained 93.6%, 89.9%, and 87.7% of reported data, respectively. Developed by a task force including scientists, public health officials, policy makers, and hospital executives, this model can reliably project COVID-19 healthcare needs in US cities.


Subjects
COVID-19/epidemiology, Hospitals, Pandemics, SARS-CoV-2, Delivery of Health Care, Forecasting, Hospitalization/statistics & numerical data, Humans, Public Health, Retrospective Studies, United States
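
The retrospective coverage check reported above (93.6%, 89.9%, and 87.7% of observations inside the 95% CrIs at 1, 2, and 3 wk horizons) reduces to a simple computation. Here is a minimal sketch with placeholder arrays standing in for the forecast bounds and reported data.

import numpy as np

# Illustrative 95% credible interval bounds and reported values for five
# forecast dates; real inputs would come from the model and surveillance data.
lower = np.array([ 90.0, 110.0, 150.0, 200.0, 260.0])
upper = np.array([210.0, 260.0, 340.0, 420.0, 500.0])
observed = np.array([150.0, 240.0, 360.0, 300.0, 510.0])

covered = (observed >= lower) & (observed <= upper)
print(f"95% CrI coverage: {covered.mean():.0%}")   # well calibrated: close to 95%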
3.
PLOS Digit Health ; 1(12): e0000166, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36812627

ABSTRACT

Childbirth via Cesarean section accounts for approximately 32% of all births each year in the United States. A variety of risk factors and complications can lead caregivers and patients to plan for a Cesarean delivery in advance, before the onset of labor. However, a non-trivial subset of Cesarean sections (∼25%) are unplanned, occurring after an initial trial of labor. Patients who deliver via unplanned Cesarean section have higher maternal morbidity and mortality rates and higher rates of neonatal intensive care admission. In an effort to develop models aimed at improving health outcomes in labor and delivery, this work explores the use of national vital statistics data to quantify the likelihood of an unplanned Cesarean section based on 22 maternal characteristics. Machine learning techniques are used to identify influential features, train and evaluate models, and assess accuracy on held-out test data. Based on cross-validation results from a large training cohort (n = 6,530,467 births), the gradient-boosted tree algorithm was identified as the best performer and was evaluated on a large test cohort (n = 10,613,877 births) for two prediction scenarios. The models achieved areas under the receiver operating characteristic curve of 0.77 or higher and recall scores of 0.78 or higher, and were well calibrated. Combined with feature importance analysis that explains why particular maternal characteristics drive a given prediction for an individual patient, the pipeline provides additional quantitative information to inform the decision of whether to plan for a Cesarean section in advance, a substantially safer option for women at high risk of unplanned Cesarean delivery during labor.
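
As a rough illustration of the pipeline described above, the sketch below trains a gradient-boosted tree classifier and reports AUROC and recall. It uses synthetic data and scikit-learn's HistGradientBoostingClassifier in place of the study's vital statistics records and exact model; the features and labels are stand-ins.

import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 22))   # stand-ins for 22 maternal characteristics
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = HistGradientBoostingClassifier().fit(X_train, y_train)

probs = clf.predict_proba(X_test)[:, 1]
print("AUROC: ", roc_auc_score(y_test, probs))
print("Recall:", recall_score(y_test, clf.predict(X_test)))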

4.
Am J Obstet Gynecol ; 224(1): 16-34, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32841628

ABSTRACT

Medicine is, in essence, decision making under uncertainty: decisions about which tests to perform and which treatments to administer. Traditionally, this uncertainty was handled using expertise accumulated by individual providers and, more recently, through systematic appraisal of research in the form of evidence-based medicine. The traditional approach has served medicine well for a very long time, but it has substantial limitations because the human body and the healthcare system are complex systems. Complex systems are networks of highly coupled, intensely interacting components. These interactions give such systems redundancy, and thus robustness to failure, but also equifinality, that is, many different causative pathways leading to the same outcome. The equifinality of the human body and the healthcare system demands the individualization of medical care and medical decision making. Computational models excel at modeling complex systems and can consequently enable individualized decision making and individualized medicine. Computational models may be theory- or knowledge-based, data-driven, or a combination of both; data are essential, though to differing degrees, for computational models to represent complex systems successfully. Individualized decision making, made possible by computational modeling of complex systems, has the potential to revolutionize the entire spectrum of medicine, from individual patient care to policymaking. It allows tests and treatments to be applied to the individuals who receive a net benefit from them, for whom the benefits outweigh the risks, rather than treating everyone in a population because the population benefits on average. Thus, the computational modeling-enabled individualization of medical decision making has the potential both to improve health outcomes and to decrease healthcare costs.


Subjects
Computational Biology, Gynecology, Models, Theoretical, Obstetrics, Humans
5.
Article in English | MEDLINE | ID: mdl-30136971

ABSTRACT

Visualization and virtual environments (VEs) have been two interconnected parallel strands in visual computing for decades. Some VEs have been purposely developed for visualization applications, while many visualization applications are exemplary showcases in general-purpose VEs. Because of the development and operation costs of VEs, the majority of visualization applications in practice have yet to benefit from the capacity of VEs. In this paper, we examine this status quo from an information-theoretic perspective. Our objectives are to conduct cost-benefit analysis on typical VE systems (including augmented and mixed reality, theater-based systems, and large powerwalls), to explain why some visualization applications benefit more from VEs than others, and to sketch out pathways for the future development of visualization applications in VEs. We support our theoretical propositions and analysis using theories and discoveries in the literature of cognitive sciences and the practical evidence reported in the literatures of visualization and VEs.

6.
IEEE Comput Graph Appl ; 37(5): 106-112, 2017.
Article in English | MEDLINE | ID: mdl-28945585

ABSTRACT

Visualization researchers, developers, practitioners, and educators routinely work across traditional discipline boundaries, oftentimes in teams of people that come from a diverse blend of backgrounds, using visualizations as a common language for collaboration. There is a looming global workforce shortage in the computational science and high-tech space, primarily due to a disconnect between population demographics and the demographics of those educated to fill these jobs. The visualization community is uniquely positioned to bring a fresh approach to making diversity and inclusion fundamental tenets that are necessary rather than desirable.

7.
IEEE Trans Vis Comput Graph ; 20(12): 1853-62, 2014 Dec.
Article in English | MEDLINE | ID: mdl-26356899

ABSTRACT

We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain.


Subjects
Computer Graphics, Informatics/methods, Security Measures, Software, Cyclonic Storms, Disaster Planning, Equipment and Supplies, Humans, Models, Theoretical, Power Plants, Transportation, Weather
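
To illustrate the component model described above in miniature: each simulation component consumes and produces a shared exchange format, and a cheap local proxy can stand in for a remote high-fidelity simulation while the analyst configures a run. All class and field names below are illustrative assumptions, not VASA's actual API.

from dataclasses import dataclass

@dataclass
class ExchangeRecord:                  # common data/parameter exchange format
    time_step: int
    values: dict

class SimulationComponent:
    def run(self, record: ExchangeRecord) -> ExchangeRecord:
        raise NotImplementedError

class LocalProxy(SimulationComponent):
    # Low-fidelity approximation used while an analyst configures a run;
    # a real deployment would swap in the remote high-fidelity component.
    def run(self, record: ExchangeRecord) -> ExchangeRecord:
        scaled = {k: v * 0.9 for k, v in record.values.items()}  # crude estimate
        return ExchangeRecord(record.time_step + 1, scaled)

pipeline = [LocalProxy(), LocalProxy()]          # stand-in system of systems
state = ExchangeRecord(0, {"supply_level": 100.0})
for component in pipeline:
    state = component.run(state)
print(state)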
8.
IEEE Trans Vis Comput Graph ; 19(1): 94-107, 2013 Jan.
Article in English | MEDLINE | ID: mdl-22508900

ABSTRACT

Currently, user-centered transfer function design begins with the user interacting with a one- or two-dimensional histogram of the volumetric attribute space. The attribute space is visualized as a function of the number of voxels, allowing the user to explore the data in terms of attribute size/magnitude. However, such visualizations give the user no information about the relationships between the various attributes (e.g., density, temperature, pressure, x, y, z) within the multivariate data. In this work, we propose a modification to the attribute space visualization in which the user is no longer presented with the magnitude of the attribute; instead, the user is presented with an information metric detailing the relationship between attributes of the multivariate volumetric data. In this way, the user can guide their exploration based on the relationship between attribute magnitude and user-selected attribute information, rather than being constrained to visualizing attribute magnitude alone. We refer to this modification of the traditional histogram widget as an abstract attribute space representation. Our system uses common one- and two-dimensional histogram widgets in which the bins of the abstract attribute space now correspond to an attribute relationship in terms of the mean, standard deviation, entropy, or skewness. In this manner, we exploit the relationships and correlations present in the underlying data with respect to the dimension(s) under examination. These relationships are often key to insight and allow us to guide attribute discovery, as opposed to automatic extraction schemes that try to calculate and extract distinct attributes a priori. In this way, our system aids in the discovery of how properties interact within volumetric data.
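
A minimal sketch of the abstract attribute space idea: bin the volume by one attribute and, for each bin, report an information metric (here, entropy) of a second attribute's values rather than a voxel count. The attributes below are synthetic stand-ins for volumetric data.

import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(1)
density = rng.normal(size=100_000)                        # binning attribute
temperature = 0.5 * density + rng.normal(size=100_000)    # second attribute

edges = np.linspace(density.min(), density.max(), 33)     # 32 histogram bins
bin_index = np.digitize(density, edges)

bin_entropy = []
for b in range(1, len(edges)):
    vals = temperature[bin_index == b]
    counts, _ = np.histogram(vals, bins=16)
    bin_entropy.append(entropy(counts + 1e-12))   # entropy of attribute 2 in bin b
print(np.round(bin_entropy[:8], 3))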

9.
IEEE Trans Vis Comput Graph ; 18(10): 1731-43, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22291153

ABSTRACT

We have developed an intuitive method to semiautomatically explore volumetric data in a focus-region-guided or value-driven way, using a user-defined ray through the 3D volume and contour lines in the region of interest. After the user selects a point of interest from a 2D perspective, which defines a ray through the 3D volume, our method provides analytical tools to assist in narrowing the region of interest to a desired set of features. Feature layers are identified in a 1D scalar value profile along the ray and are used to define default rendering parameters, such as color and opacity mappings, and to locate the center of the region of interest. Contour lines are generated based on the feature layer level sets within interactively selected slices of the focus region. Finally, we utilize feature-preserving filters and demonstrate the applicability of our scheme to noisy data.


Subjects
Algorithms, Computer Graphics, Image Processing, Computer-Assisted/methods, Computer Simulation, Diagnostic Imaging, Humans, Tornadoes
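
The core step of the method above, sampling a 1D scalar value profile along a user-defined ray and locating feature layers, can be sketched in a few lines. The volume, ray, and gradient-based boundary test below are illustrative placeholders, not the paper's exact feature criteria.

import numpy as np

# Synthetic 64^3 scalar volume; a real pipeline would load simulation output.
volume = np.fromfunction(lambda x, y, z: np.sin(0.2 * x) + 0.01 * z,
                         (64, 64, 64))

origin = np.array([0.0, 32.0, 32.0])          # picked from a 2D viewpoint
direction = np.array([1.0, 0.0, 0.0])         # unit ray through the volume
ts = np.linspace(0.0, 63.0, 128)
points = origin + ts[:, None] * direction

# Nearest-neighbor sampling of the 1D scalar value profile along the ray
idx = np.clip(np.round(points).astype(int), 0, 63)
profile = volume[idx[:, 0], idx[:, 1], idx[:, 2]]

# Candidate layer boundaries: the strongest changes along the profile
boundaries = np.sort(np.argsort(np.abs(np.gradient(profile)))[-5:])
print(boundaries)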
10.
IEEE Trans Vis Comput Graph ; 18(3): 421-33, 2012 Mar.
Article in English | MEDLINE | ID: mdl-21383403

ABSTRACT

In many scientific simulations, the temporal variation and analysis of features are important. Visualization and visual analysis of time series data remain a significant challenge because of the large volume of data, and irregular and scattered time series data sets are even more problematic to visualize interactively. Previous work proposed functional representation using basis functions as one solution for interactively visualizing scattered data by harnessing the power of modern PC graphics boards. In this paper, we apply the functional representation approach to time-varying data sets and develop an efficient encoding technique that exploits temporal similarity between time steps. Our system uses a graduated approach of three methods with increasing time complexity, chosen according to how much the evolving data sets differ. With this system, we enhance encoding performance for time-varying data sets, reduce data storage by saving only changed or additional basis functions over time, and interactively visualize the time-varying encoding results. Moreover, we present efficient rendering of the functional representations using binary space partitioning tree textures to increase rendering performance.
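
The graduated, similarity-driven encoding described above can be sketched as a three-way decision per time step: reuse the previous encoding when the data barely change, store only a delta when they change moderately, and re-encode fully otherwise. The thresholds and the toy encode() below are assumptions for illustration; the paper's actual basis functions and similarity measure differ.

import numpy as np

def encode(data):
    # Toy stand-in for fitting basis functions to one time step
    return np.fft.rfft(data)[:8]

def encode_step(prev_data, prev_coeffs, data, reuse_tol=0.01, update_tol=0.1):
    diff = np.linalg.norm(data - prev_data) / np.linalg.norm(prev_data)
    if diff < reuse_tol:
        return prev_coeffs                    # cheapest: reuse prior encoding
    if diff < update_tol:
        return prev_coeffs + encode(data - prev_data)   # mid: store a delta
    return encode(data)                       # costliest: full re-encoding

t0 = np.sin(np.linspace(0, 10, 256))
t1 = t0 + 0.001 * np.random.default_rng(2).normal(size=256)
c0 = encode(t0)
c1 = encode_step(t0, c0, t1)
print(np.allclose(c0, c1))    # True: the nearly identical step reused c0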

11.
IEEE Comput Graph Appl ; 32(4): 34-45, 2012.
Article in English | MEDLINE | ID: mdl-24806631

ABSTRACT

Visualization and data analysis are crucial in analyzing and understanding a turbulent-flow simulation of size 4,096³ cells per time slice (68 billion cells) and 17 time slices (one trillion total cells). The visualization techniques used help scientists investigate the dynamics of intense events, both individually and as these events form clusters.

12.
IEEE Trans Vis Comput Graph ; 14(6): 1420-7, 2008.
Article in English | MEDLINE | ID: mdl-18988992

ABSTRACT

A stand-alone visualization application has been developed by a multidisciplinary, collaborative team with the sole purpose of creating an interactive exploration environment that allows turbulent flow researchers to experiment and validate hypotheses through visualization. The system includes specific optimizations in data management, computation caching, and visualization that allow interactive exploration of datasets on the order of 1 TB in size. Using this application, the user (co-author Calo) is able to interactively visualize and analyze all regions of a transitional flow volume, including the laminar, transitional, and fully turbulent regions. The underlying goal of the visualizations produced from these transitional flow simulations is to localize turbulent spots in the laminar region of the boundary layer, determine the conditions under which they form, and follow their evolution. The initiation of turbulent spots, which ultimately lead to full turbulence, was located via a proposed feature detection condition and verified against experimental results. The conditions under which these turbulent spots form and coalesce are validated and presented.
