Results 1 - 6 of 6
1.
Preprint in English | medRxiv | ID: ppmedrxiv-22283074

ABSTRACT

Colleges and universities in the US struggled to provide safe in-person education throughout the COVID-19 pandemic. Testing coupled with isolation is a nimble intervention strategy that can be tailored to mitigate health and economic costs as the virus and our arsenal of medical countermeasures continue to evolve. We developed a decision-support tool to aid in the design of university-based testing strategies using a mathematical model of SARS-CoV-2 transmission. Applying this framework to a large public university reopening in the fall of 2021 with a 60% student vaccination rate, we find that the optimal strategy, in terms of health and economic costs, is twice-weekly antigen testing of all students. This strategy provides a 95% guarantee that, throughout the fall semester, case counts would not exceed the CDC's original high-transmission threshold of 100 cases per 100k persons over 7 days. As the virus and our medical armament continue to evolve, testing will remain a flexible tool for managing risks and keeping campuses open. We have implemented this model as an online tool to facilitate the design of testing strategies that adjust for COVID-19 conditions, university-specific parameters, and institutional goals.

Author Summary

As part of the COVID-19 response team at a large public university in the US, we performed an analysis that jointly considered the potential health and economic costs of different testing policies for the student body. University administrators had to weigh the up-front effort needed to implement wide-scale testing against the potential costs of responding to high levels of disease on campus in the fall of 2021, after vaccines were widely available but vaccination rates among college students were uncertain. The results presented here apply to this specific instance, but the online tool provided can be tailored to university-specific parameters, epidemiological conditions, and institutional goals. As we confront newly emerging variants of COVID-19 or novel pathogens, consideration of both the health and economic costs of proactive testing may serve as a politically tractable and cost-effective disease mitigation strategy.
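
To make the cadence comparison concrete, here is a minimal sketch, not the authors' decision-support model, of how testing frequency can be weighed against health and economic costs in a toy SEIR-style campus simulation. The population size, transmission and recovery rates, test sensitivity, and per-test and per-case dollar figures are illustrative assumptions, not values from the preprint.

```python
# Toy SEIR-style comparison of campus testing cadences. All parameters
# (population, rates, sensitivity, costs) are illustrative assumptions.

def simulate(test_interval_days, days=105, n=30_000, beta=0.25,
             sigma=1/3, gamma=1/7, sensitivity=0.9, i0=20):
    """Deterministic SEIR where screening removes infectious students.

    Testing everyone every `test_interval_days` days is approximated as a
    per-day detection rate of sensitivity / test_interval_days; detected
    students move straight to isolation (the removed compartment).
    """
    detect_rate = sensitivity / test_interval_days
    s, e, i, r = n - i0, 0.0, float(i0), 0.0
    cum_infections, tests = float(i0), 0.0
    for _ in range(days):
        new_e = beta * s * i / n          # new exposures
        new_i = sigma * e                 # exposed becoming infectious
        recoveries = gamma * i
        detected = detect_rate * i        # found by screening, isolated
        s -= new_e
        e += new_e - new_i
        i += new_i - recoveries - detected
        r += recoveries + detected
        cum_infections += new_i
        tests += n / test_interval_days   # whole campus on this cadence
    return cum_infections, tests

for interval in (2, 3.5, 7, 14):  # every 2 days, twice weekly, weekly, every other week
    infections, tests = simulate(interval)
    cost = 10 * tests + 500 * infections  # assumed $10 per test, $500 per case
    print(f"test every {interval:>4} days: {infections:8.0f} infections, "
          f"~${cost / 1e6:.1f}M combined cost")
```

In this toy setup the more frequent cadences trade higher testing volume for fewer infections; the preprint's tool performs this trade-off with a full transmission model and university-specific inputs.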

2.
Preprint in English | medRxiv | ID: ppmedrxiv-22281855

ABSTRACT

COVID-19 has disproportionately impacted individuals depending on where they live and work and on their race, ethnicity, and socioeconomic status. Studies have documented catastrophic disparities at critical points throughout the pandemic, but have not yet systematically tracked their severity through time. Using anonymized hospitalization data from March 11, 2020 to June 1, 2021, we estimate the time-varying burden of COVID-19 by age group and ZIP code in Austin, Texas. During this 15-month period, we estimate an overall 16.9% (95% CrI: 16.1-17.8%) infection rate and 34.1% (95% CrI: 32.4-35.8%) case reporting rate. Individuals over 65 were less likely to be infected than younger age groups (8.0% [95% CrI: 7.5-8.6%] vs 18.1% [95% CrI: 17.2-19.2%]), but more likely to be hospitalized (1,381 per 100,000 vs 319 per 100,000) and have their infections reported (51% [95% CrI: 48-55%] vs 33% [95% CrI: 31-35%]). Children under 18, who make up 20.3% of the local population, accounted for only 5.5% (95% CrI: 3.8-7.7%) of all infections between March 1 and May 1, 2020, compared with 20.4% (95% CrI: 17.3-23.9%) between December 1, 2020 and February 1, 2021. We compared ZIP codes ranking in the 75th percentile of vulnerability to those in the 25th percentile, and found that the more vulnerable communities had 2.5 (95% CrI: 2.0-3.0) times the infection rate and only 70% (95% CrI: 61-82%) the reporting rate compared to the less vulnerable communities. Inequality persisted but declined significantly over the 15-month study period. For example, the ratio of infection rates between the more and less vulnerable communities declined from 12.3 (95% CrI: 8.8-17.1) to 4.0 (95% CrI: 3.0-5.3) to 2.7 (95% CrI: 2.0-3.6), from April to August to December of 2020, respectively. Our results suggest that public health efforts to mitigate COVID-19 disparities were only partially effective and that the CDC's social vulnerability index may serve as a reliable predictor of risk on a local scale when surveillance data are limited.
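
A minimal sketch of the back-calculation idea behind estimates like these: infections are inferred from hospitalizations using assumed age-specific infection-hospitalization rates, and reporting rates follow as reported cases divided by inferred infections. The age groups, rates, and counts below are invented placeholders; the preprint itself fits a Bayesian model that yields time-varying estimates with credible intervals.

```python
# Back-calculating infections and reporting rates from hospitalizations.
# The IHRs and counts below are illustrative placeholders, not estimates
# from the preprint.

# assumed age-specific infection-hospitalization rates (IHR)
ihr = {"0-17": 0.002, "18-64": 0.02, "65+": 0.17}

# hypothetical cumulative counts for one ZIP code
hospitalizations = {"0-17": 2,    "18-64": 100,   "65+": 70}
reported_cases   = {"0-17": 150,  "18-64": 1650,  "65+": 210}
population       = {"0-17": 9000, "18-64": 28000, "65+": 5000}

for age in ihr:
    infections = hospitalizations[age] / ihr[age]   # back-calculated infections
    attack_rate = infections / population[age]
    reporting_rate = reported_cases[age] / infections
    print(f"{age:>5}: attack rate {attack_rate:5.1%}, "
          f"reporting rate {reporting_rate:5.1%}")
```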

3.
Preprint in English | medRxiv | ID: ppmedrxiv-21252541

ABSTRACT

Recent identification of the highly transmissible novel SARS-CoV-2 variant in the United Kingdom (B.1.1.7) has raised concerns for renewed pandemic surges worldwide [1,2]. B.1.1.7 was first identified in the US on December 29, 2020, and may become dominant by March 2021 [3]. However, the regional prevalence of B.1.1.7 is largely unknown because of limited molecular surveillance for SARS-CoV-2 [4]. Quantitative PCR data from a surveillance testing program on a large university campus with roughly 30,000 students provide local situational awareness at a pivotal moment in the COVID-19 pandemic.

4.
Estee Y Cramer; Evan L Ray; Velma K Lopez; Johannes Bracher; Andrea Brennen; Alvaro J Castro Rivadeneira; Aaron Gerding; Tilmann Gneiting; Katie H House; Yuxin Huang; Dasuni Jayawardena; Abdul H Kanji; Ayush Khandelwal; Khoa Le; Anja Muehlemann; Jarad Niemi; Apurv Shah; Ariane Stark; Yijin Wang; Nutcha Wattanachit; Martha W Zorn; Youyang Gu; Sansiddh Jain; Nayana Bannur; Ayush Deva; Mihir Kulkarni; Srujana Merugu; Alpan Raval; Siddhant Shingi; Avtansh Tiwari; Jerome White; Neil F Abernethy; Spencer Woody; Maytal Dahan; Spencer Fox; Kelly Gaither; Michael Lachmann; Lauren Ancel Meyers; James G Scott; Mauricio Tec; Ajitesh Srivastava; Glover E George; Jeffrey C Cegan; Ian D Dettwiller; William P England; Matthew W Farthing; Robert H Hunter; Brandon Lafferty; Igor Linkov; Michael L Mayo; Matthew D Parno; Michael A Rowland; Benjamin D Trump; Yanli Zhang-James; Samuel Chen; Stephen V Faraone; Jonathan Hess; Christopher P Morley; Asif Salekin; Dongliang Wang; Sabrina M Corsetti; Thomas M Baer; Marisa C Eisenberg; Karl Falb; Yitao Huang; Emily T Martin; Ella McCauley; Robert L Myers; Tom Schwarz; Daniel Sheldon; Graham Casey Gibson; Rose Yu; Liyao Gao; Yian Ma; Dongxia Wu; Xifeng Yan; Xiaoyong Jin; Yu-Xiang Wang; YangQuan Chen; Lihong Guo; Yanting Zhao; Quanquan Gu; Jinghui Chen; Lingxiao Wang; Pan Xu; Weitong Zhang; Difan Zou; Hannah Biegel; Joceline Lega; Steve McConnell; VP Nagraj; Stephanie L Guertin; Christopher Hulme-Lowe; Stephen D Turner; Yunfeng Shi; Xuegang Ban; Robert Walraven; Qi-Jun Hong; Stanley Kong; Axel van de Walle; James A Turtle; Michal Ben-Nun; Steven Riley; Pete Riley; Ugur Koyluoglu; David DesRoches; Pedro Forli; Bruce Hamory; Christina Kyriakides; Helen Leis; John Milliken; Michael Moloney; James Morgan; Ninad Nirgudkar; Gokce Ozcan; Noah Piwonka; Matt Ravi; Chris Schrader; Elizabeth Shakhnovich; Daniel Siegel; Ryan Spatz; Chris Stiefeling; Barrie Wilkinson; Alexander Wong; Sean Cavany; Guido Espana; Sean Moore; Rachel Oidtman; Alex Perkins; David Kraus; Andrea Kraus; Zhifeng Gao; Jiang Bian; Wei Cao; Juan Lavista Ferres; Chaozhuo Li; Tie-Yan Liu; Xing Xie; Shun Zhang; Shun Zheng; Alessandro Vespignani; Matteo Chinazzi; Jessica T Davis; Kunpeng Mu; Ana Pastore y Piontti; Xinyue Xiong; Andrew Zheng; Jackie Baek; Vivek Farias; Andreea Georgescu; Retsef Levi; Deeksha Sinha; Joshua Wilde; Georgia Perakis; Mohammed Amine Bennouna; David Nze-Ndong; Divya Singhvi; Ioannis Spantidakis; Leann Thayaparan; Asterios Tsiourvas; Arnab Sarker; Ali Jadbabaie; Devavrat Shah; Nicolas Della Penna; Leo A Celi; Saketh Sundar; Russ Wolfinger; Dave Osthus; Lauren Castro; Geoffrey Fairchild; Isaac Michaud; Dean Karlen; Matt Kinsey; Luke C. Mullany; Kaitlin Rainwater-Lovett; Lauren Shin; Katharine Tallaksen; Shelby Wilson; Elizabeth C Lee; Juan Dent; Kyra H Grantz; Alison L Hill; Joshua Kaminsky; Kathryn Kaminsky; Lindsay T Keegan; Stephen A Lauer; Joseph C Lemaitre; Justin Lessler; Hannah R Meredith; Javier Perez-Saez; Sam Shah; Claire P Smith; Shaun A Truelove; Josh Wills; Maximilian Marshall; Lauren Gardner; Kristen Nixon; John C. Burant; Lily Wang; Lei Gao; Zhiling Gu; Myungjin Kim; Xinyi Li; Guannan Wang; Yueying Wang; Shan Yu; Robert C Reiner; Ryan Barber; Emmanuela Gaikedu; Simon Hay; Steve Lim; Chris Murray; David Pigott; Heidi L Gurung; Prasith Baccam; Steven A Stage; Bradley T Suchoski; B. 
Aditya Prakash; Bijaya Adhikari; Jiaming Cui; Alexander Rodriguez; Anika Tabassum; Jiajia Xie; Pinar Keskinocak; John Asplund; Arden Baxter; Buse Eylul Oruc; Nicoleta Serban; Sercan O Arik; Mike Dusenberry; Arkady Epshteyn; Elli Kanal; Long T Le; Chun-Liang Li; Tomas Pfister; Dario Sava; Rajarishi Sinha; Thomas Tsai; Nate Yoder; Jinsung Yoon; Leyou Zhang; Sam Abbott; Nikos I Bosse; Sebastian Funk; Joel Hellewell; Sophie R Meakin; Katharine Sherratt; Mingyuan Zhou; Rahi Kalantari; Teresa K Yamana; Sen Pei; Jeffrey Shaman; Michael L Li; Dimitris Bertsimas; Omar Skali Lami; Saksham Soni; Hamza Tazi Bouardi; Turgay Ayer; Madeline Adee; Jagpreet Chhatwal; Ozden O Dalgic; Mary A Ladd; Benjamin P Linas; Peter Mueller; Jade Xiao; Yuanjia Wang; Qinxia Wang; Shanghong Xie; Donglin Zeng; Alden Green; Jacob Bien; Logan Brooks; Addison J Hu; Maria Jahja; Daniel McDonald; Balasubramanian Narasimhan; Collin Politsch; Samyak Rajanala; Aaron Rumack; Noah Simon; Ryan J Tibshirani; Rob Tibshirani; Valerie Ventura; Larry Wasserman; Eamon B O'Dea; John M Drake; Robert Pagano; Quoc T Tran; Lam Si Tung Ho; Huong Huynh; Jo W Walker; Rachel B Slayton; Michael A Johansson; Matthew Biggerstaff; Nicholas G Reich.
Preprint in English | medRxiv | ID: ppmedrxiv-21250974

ABSTRACT

Short-term probabilistic forecasts of the trajectory of the COVID-19 pandemic in the United States have served as a visible and important communication channel between the scientific modeling community and both the general public and decision-makers. Forecasting models provide specific, quantitative, and evaluable predictions that inform short-term decisions such as healthcare staffing needs, school closures, and allocation of medical supplies. Starting in April 2020, the US COVID-19 Forecast Hub (https://covid19forecasthub.org/) collected, disseminated, and synthesized tens of millions of specific predictions from more than 90 different academic, industry, and independent research groups. A multi-model ensemble forecast that combined predictions from dozens of different research groups every week provided the most consistently accurate probabilistic forecasts of incident deaths due to COVID-19 at the state and national level from April 2020 through October 2021. The performance of 27 individual models that submitted complete forecasts of COVID-19 deaths consistently throughout this period showed high variability in forecast skill across time, geospatial units, and forecast horizons. Two-thirds of the models evaluated showed better accuracy than a naive baseline model. Forecast accuracy degraded as models made predictions further into the future, with probabilistic error at a 20-week horizon 3-5 times larger than at a 1-week horizon. This project underscores the role that collaboration and active coordination between governmental public health agencies, academic modeling teams, and industry partners can play in developing modern modeling capabilities to support local, state, and federal response to outbreaks.

Significance Statement

This paper compares the probabilistic accuracy of short-term forecasts of reported deaths due to COVID-19 during the first year and a half of the pandemic in the US. Results show high variation in accuracy between and within stand-alone models, and more consistent accuracy from an ensemble model that combined forecasts from all eligible models. This demonstrates that an ensemble model provided a reliable and comparatively accurate means of forecasting deaths during the COVID-19 pandemic that exceeded the performance of all of the models that contributed to it. This work strengthens the evidence base for synthesizing multiple models to support public health action.
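
As a rough illustration of the multi-model ensembling described above, the sketch below combines hypothetical quantile forecasts from three models by taking the per-quantile median, one common way such ensembles are built; the model names and values are invented, and this is not presented as the Forecast Hub's exact procedure.

```python
# Combine quantile forecasts from several models into an ensemble by taking
# the per-quantile median. Model names and values are invented; this shows
# the general idea, not the Forecast Hub's exact method.

import statistics

QUANTILES = (0.025, 0.25, 0.5, 0.75, 0.975)

# hypothetical 1-week-ahead forecasts of incident deaths for one state
forecasts = {
    "model_A": {0.025: 120, 0.25: 180, 0.5: 210, 0.75: 250, 0.975: 340},
    "model_B": {0.025: 90,  0.25: 160, 0.5: 200, 0.75: 260, 0.975: 380},
    "model_C": {0.025: 150, 0.25: 190, 0.5: 230, 0.75: 270, 0.975: 310},
}

ensemble = {
    q: statistics.median(model[q] for model in forecasts.values())
    for q in QUANTILES
}

for q in QUANTILES:
    print(f"quantile {q:5.3f}: {ensemble[q]:.0f} deaths")
```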

5.
Preprint in English | medRxiv | ID: ppmedrxiv-20177493

ABSTRACT

Background: The COVID-19 pandemic has driven demand for forecasts to guide policy and planning. Previous research has suggested that combining forecasts from multiple models into a single "ensemble" forecast can increase the robustness of forecasts. Here we evaluate the real-time application of an open, collaborative ensemble to forecast deaths attributable to COVID-19 in the U.S.
Methods: Beginning on April 13, 2020, we collected and combined one- to four-week ahead forecasts of cumulative deaths for U.S. jurisdictions in standardized, probabilistic formats to generate real-time, publicly available ensemble forecasts. We evaluated the point prediction accuracy and calibration of these forecasts compared to reported deaths.
Results: Analysis of 2,512 ensemble forecasts made April 27 to July 20 with outcomes observed in the weeks ending May 23 through July 25, 2020 revealed precise short-term forecasts, with accuracy deteriorating at longer prediction horizons of up to four weeks. At all prediction horizons, the prediction intervals were well calibrated with 92-96% of observations falling within the rounded 95% prediction intervals.
Conclusions: This analysis demonstrates that real-time, publicly available ensemble forecasts issued in April-July 2020 provided robust short-term predictions of reported COVID-19 deaths in the United States. With the ongoing need for forecasts of impacts and resource needs for the COVID-19 response, the results underscore the importance of combining multiple probabilistic models and assessing forecast skill at different prediction horizons. Careful development, assessment, and communication of ensemble forecasts can provide reliable insight to public health decision makers.
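
The calibration result above (92-96% of observations falling within the rounded 95% prediction intervals) is an empirical-coverage calculation; the sketch below shows that calculation on invented forecast intervals and observations, not the paper's data.

```python
# Empirical coverage of 95% prediction intervals: the share of observed
# values falling inside the forecast interval. The bounds and observations
# below are invented for illustration.

# (lower 2.5% bound, upper 97.5% bound, observed deaths) per forecast
forecasts = [
    (100, 400, 250),
    (80,  350, 360),   # observation above the interval -> not covered
    (200, 600, 580),
    (50,  300, 120),
    (150, 500, 149),   # observation just below the interval -> not covered
]

covered = sum(lo <= obs <= hi for lo, hi, obs in forecasts)
coverage = covered / len(forecasts)
print(f"95% PI empirical coverage: {coverage:.0%} ({covered}/{len(forecasts)})")
```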

6.
Preprint in English | medRxiv | ID: ppmedrxiv-20068163

ABSTRACT

We propose a Bayesian model for projecting first-wave COVID-19 deaths in all 50 U.S. states. Our model's projections are based on data derived from mobile-phone GPS traces, which allow us to estimate how social-distancing behavior is "flattening the curve" in each state. In a two-week look-ahead test of out-of-sample forecasting accuracy, our model significantly outperforms the widely used model from the Institute for Health Metrics and Evaluation (IHME), achieving 42% lower prediction error: 13.2 deaths per day average error across all U.S. states, versus 22.8 deaths per day average error for the IHME model. Our model also provides an accurate, if slightly conservative, assessment of forecasting accuracy: in the same look-ahead test, 98% of data points fell within the model's 95% credible intervals. Our model's projections are updated daily at https://covid-19.tacc.utexas.edu/projections/.
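
As a quick check of the headline comparison, an average error of 13.2 deaths per day against the IHME model's 22.8 deaths per day corresponds to a relative reduction of 1 - 13.2/22.8, roughly 42%:

```python
# Arithmetic behind the "42% lower prediction error" figure quoted above.
our_mae, ihme_mae = 13.2, 22.8           # deaths per day, averaged over states
reduction = 1 - our_mae / ihme_mae
print(f"relative reduction in average error: {reduction:.0%}")  # -> 42%
```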
