1.
Katharine Sherratt; Hugo Gruson; Rok Grah; Helen Johnson; Rene Niehus; Bastian Prasse; Frank Sandman; Jannik Deuschel; Daniel Wolffram; Sam Abbott; Alexander Ullrich; Graham Gibson; Evan L Ray; Nicholas G Reich; Daniel Sheldon; Yijin Wang; Nutcha Wattanachit; Lijing Wang; Jan Trnka; Guillaume Obozinski; Tao Sun; Dorina Thanou; Loic Pottier; Ekaterina Krymova; Maria Vittoria Barbarossa; Neele Leithauser; Jan Mohring; Johanna Schneider; Jaroslaw Wlazlo; Jan Fuhrmann; Berit Lange; Isti Rodiah; Prasith Baccam; Heidi Gurung; Steven Stage; Bradley Suchoski; Jozef Budzinski; Robert Walraven; Inmaculada Villanueva; Vit Tucek; Martin Smid; Milan Zajicek; Cesar Perez Alvarez; Borja Reina; Nikos I Bosse; Sophie Meakin; Pierfrancesco Alaimo Di Loro; Antonello Maruotti; Veronika Eclerova; Andrea Kraus; David Kraus; Lenka Pribylova; Bertsimas Dimitris; Michael Lingzhi Li; Soni Saksham; Jonas Dehning; Sebastian Mohr; Viola Priesemann; Grzegorz Redlarski; Benjamin Bejar; Giovanni Ardenghi; Nicola Parolini; Giovanni Ziarelli; Wolfgang Bock; Stefan Heyder; Thomas Hotz; David E. Singh; Miguel Guzman-Merino; Jose L Aznarte; David Morina; Sergio Alonso; Enric Alvarez; Daniel Lopez; Clara Prats; Jan Pablo Burgard; Arne Rodloff; Tom Zimmermann; Alexander Kuhlmann; Janez Zibert; Fulvia Pennoni; Fabio Divino; Marti Catala; Gianfranco Lovison; Paolo Giudici; Barbara Tarantino; Francesco Bartolucci; Giovanna Jona Lasinio; Marco Mingione; Alessio Farcomeni; Ajitesh Srivastava; Pablo Montero-Manso; Aniruddha Adiga; Benjamin Hurt; Bryan Lewis; Madhav Marathe; Przemyslaw Porebski; Srinivasan Venkatramanan; Rafal Bartczuk; Filip Dreger; Anna Gambin; Krzysztof Gogolewski; Magdalena Gruziel-Slomka; Bartosz Krupa; Antoni Moszynski; Karol Niedzielewski; Jedrzej Nowosielski; Maciej Radwan; Franciszek Rakowski; Marcin Semeniuk; Ewa Szczurek; Jakub Zielinski; Jan Kisielewski; Barbara Pabjan; Kirsten Holger; Yuri Kheifetz; Markus Scholz; Marcin Bodych; Maciej Filinski; Radoslaw Idzikowski; Tyll Krueger; Tomasz Ozanski; Johannes Bracher; Sebastian Funk.
Preprint in English | medRxiv | ID: ppmedrxiv-22276024

ABSTRACT

Background: Short-term forecasts of infectious disease burden can contribute to situational awareness and aid capacity planning. Based on best practice in other fields and recent insights in infectious disease epidemiology, the predictive performance of such forecasts can be maximised by combining multiple models into an ensemble. Here we report on the performance of ensembles in predicting COVID-19 cases and deaths across Europe between 08 March 2021 and 07 March 2022.
Methods: We used open-source tools to develop a public European COVID-19 Forecast Hub. We invited groups globally to contribute weekly forecasts for COVID-19 cases and deaths reported from a standardised source over the next one to four weeks. Teams submitted forecasts from March 2021 using standardised quantiles of the predictive distribution. Each week we created an ensemble forecast, where each predictive quantile was calculated as the equally weighted average (initially the mean and, from 26 July, the median) of all individual models' predictive quantiles. We measured the performance of each model using the relative Weighted Interval Score (WIS), comparing each model's forecast accuracy relative to all other models. We retrospectively explored alternative methods for ensemble forecasts, including weighted averages based on models' past predictive performance.
Results: Over 52 weeks we collected and combined up to 28 forecast models for 32 countries. We found that a weekly ensemble had consistently strong performance across countries over time. Across all horizons and locations, the ensemble performed better on relative WIS than 84% of participating models' forecasts of incident cases (total N=862) and 92% of participating models' forecasts of deaths (N=746). Across a one- to four-week time horizon, ensemble performance declined with longer forecast periods when forecasting cases, but remained stable over four weeks for incident death forecasts. In every forecast across 32 countries, the ensemble outperformed most contributing models when forecasting either cases or deaths, frequently outperforming all of its individual component models. Among several choices of ensemble method, we found that the most influential and best choice was to use a median rather than a mean average of models, regardless of how component forecast models were weighted.
Conclusions: Our results support combining forecasts from individual models into an ensemble in order to improve predictive performance across epidemiological targets and populations during infectious disease epidemics. Our findings further suggest that median ensemble methods yield better predictive performance than ones based on means. They also highlight that forecast consumers should place more weight on incident death forecasts than on incident case forecasts at forecast horizons greater than two weeks.
Code and data availability: All data and code are publicly available on GitHub: covid19-forecast-hub-europe/euro-hub-ensemble.
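
As an aside for readers unfamiliar with quantile-wise ensembling, the sketch below shows the combination step described in the Methods under the simplifying assumption that every model reports the same standardised quantile levels; the numbers are invented for illustration and this is not the Hub's actual code.

import numpy as np

def ensemble_quantiles(model_quantiles, method="median"):
    # Combine models' predictive quantiles level by level.
    # model_quantiles: array of shape (n_models, n_quantile_levels);
    # column j holds every model's value for the j-th quantile level.
    combine = np.median if method == "median" else np.mean
    return combine(model_quantiles, axis=0)

# Toy example: three models, three quantile levels (0.25, 0.5, 0.75).
forecasts = np.array([
    [ 90, 120, 160],   # model A
    [100, 130, 180],   # model B
    [250, 400, 700],   # model C, an outlier
])
print(ensemble_quantiles(forecasts, method="mean"))    # pulled upwards by model C
print(ensemble_quantiles(forecasts, method="median"))  # robust to the outlier

Taking the median at each quantile level keeps a single outlying model from dominating the combined forecast, which is consistent with the finding above that median ensembles outperformed mean ensembles.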

2.
Preprint in English | medRxiv | ID: ppmedrxiv-22273992

ABSTRACT

Background: Infectious disease modeling can serve as a powerful tool for science-based management of outbreaks, providing situational awareness and decision support for policy makers. Predictive modeling of an emerging disease is challenging due to limited knowledge of its epidemiological characteristics. For COVID-19, the prediction difficulty was further compounded by continuously changing policies, varying behavioral responses, poor availability and quality of crucial datasets, and the variable influence of different factors as the pandemic progressed. Due to these challenges, predictive modeling for COVID-19 has earned a mixed track record.
Methods: We provide a systematic review of prospective, data-driven modeling studies on population-level dynamics of COVID-19 in the US and conduct a quantitative assessment of crucial elements of modeling, with a focus on the aspects that are critical to making models useful for decision-makers. For each study, we documented the forecasting window, methodology, prediction target, datasets used, geographic resolution, whether quantitative uncertainty was expressed, the type of performance evaluation, and stated limitations. We present statistics for each category and discuss their distribution across the set of studies considered. We also address differences in these model features by field of study.
Findings: Our initial search yielded 2,420 papers, of which 119 published papers and 17 preprints were included after screening. The datasets most commonly relied upon for COVID-19 modeling were counts of cases (93%) and deaths (62%), followed by mobility (26%), demographics (25%), hospitalizations (12%), and policy (12%). Our set of papers contained a roughly equal number of short-term (46%) and long-term (60%) predictions (long-term defined as a prediction horizon longer than 4 weeks) and of statistical (43%) versus compartmental (47%) methodologies. The target variables used were predominantly cases (89%), deaths (52%), hospitalizations (10%), and Rt (9%). We found that half of the papers in our analysis did not express quantitative uncertainty (50%). Among short-term prediction models, which can be fairly evaluated against truth data, 25% did not conduct any performance evaluation, and most papers were not evaluated over a timespan that includes varying epidemiological dynamics. The main categories of limitations stated by authors were disregarded factors (39%), data quality (28%), unknowable factors (26%), limitations specific to the methods used (22%), data availability (16%), and limited generalizability (8%); 36% of papers did not list any limitations in their discussion or conclusion section.
Interpretation: Published COVID-19 models were found to be consistently lacking in some of the most important elements required for usability and translation, namely transparency, expressing uncertainty, performance evaluation, stating limitations, and communicating appropriate interpretations. Adopting the EPIFORGE 2020 guidelines would address these shortcomings and improve the consistency, reproducibility, comparability, and quality of epidemic forecasting reporting. We also found that most of the operational models used in real time to inform decision-making have not yet made it into the published literature, which highlights that the current publication system is not suited to the rapid information-sharing needs of outbreaks. Furthermore, data quality was identified as one of the most important drivers of model performance and a consistent limitation noted by the modeling community. The US public health infrastructure was not equipped to provide timely, high-quality COVID-19 data, which is required for effective modeling. Thus, a systematic infrastructure for improved data collection and sharing should be a major area of investment to support future pandemic preparedness.

3.
Preprint in English | medRxiv | ID: ppmedrxiv-22271905

ABSTRACT

Background: SARS-CoV-2 vaccination of persons aged 12 years and older has reduced disease burden in the United States. The COVID-19 Scenario Modeling Hub convened multiple modeling teams in September 2021 to project the impact of expanding vaccine administration to children 5-11 years old on anticipated COVID-19 burden and resilience against variant strains.
Methods: Nine modeling teams contributed state- and national-level projections for weekly counts of cases, hospitalizations, and deaths in the United States for the period September 12, 2021 to March 12, 2022. Four scenarios covered all combinations of: 1) presence vs. absence of vaccination of children ages 5-11 years starting on November 1, 2021; and 2) continued dominance of the Delta variant vs. emergence of a hypothetical more transmissible variant on November 15, 2021. Individual team projections were combined using linear pooling. The effect of childhood vaccination on overall and age-specific outcomes was estimated by meta-analysis approaches.
Findings: Absent a new variant, COVID-19 cases, hospitalizations, and deaths among all ages were projected to decrease nationally through mid-March 2022. Under a set of specific assumptions, models projected that vaccination of children 5-11 years old was associated with reductions in all-age cumulative cases (7.2%, mean incidence ratio [IR] 0.928, 95% confidence interval [CI] 0.880-0.977), hospitalizations (8.7%, mean IR 0.913, 95% CI 0.834-0.992), and deaths (9.2%, mean IR 0.908, 95% CI 0.797-1.020) compared with scenarios where children were not vaccinated. This projected effect of vaccinating children 5-11 years old increased in the presence of a more transmissible variant, assuming no change in vaccine effectiveness by variant. Larger relative reductions in cumulative cases, hospitalizations, and deaths were observed for children than for the entire U.S. population. Substantial state-level variation was projected in epidemic trajectories, vaccine benefits, and variant impacts.
Conclusions: Results from this multi-model aggregation study suggest that, under a specific set of scenario assumptions, expanding vaccination to children 5-11 years old would provide measurable direct benefits to this age group and indirect benefits to the all-age U.S. population, including resilience to more transmissible variants.
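
A minimal sketch of what the linear pooling step can look like when each team supplies Monte Carlo trajectories; the team labels, sample sizes, and distributions below are placeholders invented for illustration, not the Scenario Modeling Hub's actual inputs or implementation.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical submissions: each team provides 1,000 sampled weekly case
# trajectories over a 26-week projection period.
team_a = rng.normal(50_000, 8_000, size=(1000, 26))
team_b = rng.normal(60_000, 15_000, size=(1000, 26))
team_c = rng.normal(45_000, 5_000, size=(1000, 26))

# An equally weighted linear pool is a mixture of the teams' distributions,
# which for equal-sized sample sets amounts to concatenating the samples.
pooled = np.concatenate([team_a, team_b, team_c], axis=0)

# Quantiles of the pooled distribution for each projection week.
pooled_quantiles = np.quantile(pooled, [0.025, 0.5, 0.975], axis=0)
print(pooled_quantiles.shape)  # (3, 26)

Unlike averaging quantiles across models, a linear pool is a mixture distribution, so its intervals reflect both within-team and between-team uncertainty.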

4.
Preprint in English | medRxiv | ID: ppmedrxiv-21265886

ABSTRACT

Academic researchers, government agencies, industry groups, and individuals have produced forecasts at an unprecedented scale during the COVID-19 pandemic. To leverage these forecasts, the United States Centers for Disease Control and Prevention (CDC) partnered with an academic research lab at the University of Massachusetts Amherst to create the US COVID-19 Forecast Hub. Launched in April 2020, the Forecast Hub is a dataset with point and probabilistic forecasts of incident hospitalizations, incident cases, incident deaths, and cumulative deaths due to COVID-19 at national, state, and county levels in the United States. Included forecasts represent a variety of modeling approaches, data sources, and assumptions regarding the spread of COVID-19. The goal of this dataset is to establish a standardized and comparable set of short-term forecasts from modeling teams. These data can be used to develop ensemble models, communicate forecasts to the public, create visualizations, compare models, and inform policies regarding COVID-19 mitigation. These open-source data are available via download from GitHub, through an online API, and through R packages.
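
As a rough illustration of how such long-format forecast files might be loaded and reshaped with pandas; the file name is a placeholder, and the column names follow the schema as commonly described for the Hub (forecast_date, target, target_end_date, location, type, quantile, value), which should be verified against the repository's own documentation.

import pandas as pd

# Placeholder file; real files live in the covid19-forecast-hub repository.
df = pd.read_csv("example-forecast.csv",
                 parse_dates=["forecast_date", "target_end_date"])

# Keep probabilistic rows for one target and pivot to one row per
# location/date, one column per quantile level.
deaths = df[(df["type"] == "quantile")
            & df["target"].str.contains("wk ahead inc death")]
wide = deaths.pivot_table(index=["location", "target_end_date"],
                          columns="quantile", values="value")
print(wide.head())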

5.
Preprint in English | medRxiv | ID: ppmedrxiv-21262748

ABSTRACT

What is already known about this topic? The highly transmissible SARS-CoV-2 Delta variant has begun to cause increases in cases, hospitalizations, and deaths in parts of the United States. With slowed vaccination uptake, this novel variant is expected to increase the risk of pandemic resurgence in the US in July-December 2021.
What is added by this report? Data from nine mechanistic models project substantial resurgences of COVID-19 across the US resulting from the more transmissible Delta variant. These resurgences, which have now been observed in most states, were projected to occur across most of the US, coinciding with school and business reopening. Reaching higher vaccine coverage in July-December 2021 reduces the size and duration of the projected resurgence substantially. The expected impact of the outbreak is largely concentrated in a subset of states with lower vaccination coverage.
What are the implications for public health practice? Renewed efforts to increase vaccination uptake are critical to limiting transmission and disease, particularly in states with lower current vaccination coverage. Reaching higher vaccination goals in the coming months can potentially avert 1.5 million cases and 21,000 deaths and improve the ability to safely resume social contacts and educational and business activities. Continued or renewed non-pharmaceutical interventions, including masking, can also help limit transmission, particularly as schools and businesses reopen.

6.
Estee Y Cramer; Evan L Ray; Velma K Lopez; Johannes Bracher; Andrea Brennen; Alvaro J Castro Rivadeneira; Aaron Gerding; Tilmann Gneiting; Katie H House; Yuxin Huang; Dasuni Jayawardena; Abdul H Kanji; Ayush Khandelwal; Khoa Le; Anja Muehlemann; Jarad Niemi; Apurv Shah; Ariane Stark; Yijin Wang; Nutcha Wattanachit; Martha W Zorn; Youyang Gu; Sansiddh Jain; Nayana Bannur; Ayush Deva; Mihir Kulkarni; Srujana Merugu; Alpan Raval; Siddhant Shingi; Avtansh Tiwari; Jerome White; Neil F Abernethy; Spencer Woody; Maytal Dahan; Spencer Fox; Kelly Gaither; Michael Lachmann; Lauren Ancel Meyers; James G Scott; Mauricio Tec; Ajitesh Srivastava; Glover E George; Jeffrey C Cegan; Ian D Dettwiller; William P England; Matthew W Farthing; Robert H Hunter; Brandon Lafferty; Igor Linkov; Michael L Mayo; Matthew D Parno; Michael A Rowland; Benjamin D Trump; Yanli Zhang-James; Samuel Chen; Stephen V Faraone; Jonathan Hess; Christopher P Morley; Asif Salekin; Dongliang Wang; Sabrina M Corsetti; Thomas M Baer; Marisa C Eisenberg; Karl Falb; Yitao Huang; Emily T Martin; Ella McCauley; Robert L Myers; Tom Schwarz; Daniel Sheldon; Graham Casey Gibson; Rose Yu; Liyao Gao; Yian Ma; Dongxia Wu; Xifeng Yan; Xiaoyong Jin; Yu-Xiang Wang; YangQuan Chen; Lihong Guo; Yanting Zhao; Quanquan Gu; Jinghui Chen; Lingxiao Wang; Pan Xu; Weitong Zhang; Difan Zou; Hannah Biegel; Joceline Lega; Steve McConnell; VP Nagraj; Stephanie L Guertin; Christopher Hulme-Lowe; Stephen D Turner; Yunfeng Shi; Xuegang Ban; Robert Walraven; Qi-Jun Hong; Stanley Kong; Axel van de Walle; James A Turtle; Michal Ben-Nun; Steven Riley; Pete Riley; Ugur Koyluoglu; David DesRoches; Pedro Forli; Bruce Hamory; Christina Kyriakides; Helen Leis; John Milliken; Michael Moloney; James Morgan; Ninad Nirgudkar; Gokce Ozcan; Noah Piwonka; Matt Ravi; Chris Schrader; Elizabeth Shakhnovich; Daniel Siegel; Ryan Spatz; Chris Stiefeling; Barrie Wilkinson; Alexander Wong; Sean Cavany; Guido Espana; Sean Moore; Rachel Oidtman; Alex Perkins; David Kraus; Andrea Kraus; Zhifeng Gao; Jiang Bian; Wei Cao; Juan Lavista Ferres; Chaozhuo Li; Tie-Yan Liu; Xing Xie; Shun Zhang; Shun Zheng; Alessandro Vespignani; Matteo Chinazzi; Jessica T Davis; Kunpeng Mu; Ana Pastore y Piontti; Xinyue Xiong; Andrew Zheng; Jackie Baek; Vivek Farias; Andreea Georgescu; Retsef Levi; Deeksha Sinha; Joshua Wilde; Georgia Perakis; Mohammed Amine Bennouna; David Nze-Ndong; Divya Singhvi; Ioannis Spantidakis; Leann Thayaparan; Asterios Tsiourvas; Arnab Sarker; Ali Jadbabaie; Devavrat Shah; Nicolas Della Penna; Leo A Celi; Saketh Sundar; Russ Wolfinger; Dave Osthus; Lauren Castro; Geoffrey Fairchild; Isaac Michaud; Dean Karlen; Matt Kinsey; Luke C. Mullany; Kaitlin Rainwater-Lovett; Lauren Shin; Katharine Tallaksen; Shelby Wilson; Elizabeth C Lee; Juan Dent; Kyra H Grantz; Alison L Hill; Joshua Kaminsky; Kathryn Kaminsky; Lindsay T Keegan; Stephen A Lauer; Joseph C Lemaitre; Justin Lessler; Hannah R Meredith; Javier Perez-Saez; Sam Shah; Claire P Smith; Shaun A Truelove; Josh Wills; Maximilian Marshall; Lauren Gardner; Kristen Nixon; John C. Burant; Lily Wang; Lei Gao; Zhiling Gu; Myungjin Kim; Xinyi Li; Guannan Wang; Yueying Wang; Shan Yu; Robert C Reiner; Ryan Barber; Emmanuela Gaikedu; Simon Hay; Steve Lim; Chris Murray; David Pigott; Heidi L Gurung; Prasith Baccam; Steven A Stage; Bradley T Suchoski; B. 
Aditya Prakash; Bijaya Adhikari; Jiaming Cui; Alexander Rodriguez; Anika Tabassum; Jiajia Xie; Pinar Keskinocak; John Asplund; Arden Baxter; Buse Eylul Oruc; Nicoleta Serban; Sercan O Arik; Mike Dusenberry; Arkady Epshteyn; Elli Kanal; Long T Le; Chun-Liang Li; Tomas Pfister; Dario Sava; Rajarishi Sinha; Thomas Tsai; Nate Yoder; Jinsung Yoon; Leyou Zhang; Sam Abbott; Nikos I Bosse; Sebastian Funk; Joel Hellewell; Sophie R Meakin; Katharine Sherratt; Mingyuan Zhou; Rahi Kalantari; Teresa K Yamana; Sen Pei; Jeffrey Shaman; Michael L Li; Dimitris Bertsimas; Omar Skali Lami; Saksham Soni; Hamza Tazi Bouardi; Turgay Ayer; Madeline Adee; Jagpreet Chhatwal; Ozden O Dalgic; Mary A Ladd; Benjamin P Linas; Peter Mueller; Jade Xiao; Yuanjia Wang; Qinxia Wang; Shanghong Xie; Donglin Zeng; Alden Green; Jacob Bien; Logan Brooks; Addison J Hu; Maria Jahja; Daniel McDonald; Balasubramanian Narasimhan; Collin Politsch; Samyak Rajanala; Aaron Rumack; Noah Simon; Ryan J Tibshirani; Rob Tibshirani; Valerie Ventura; Larry Wasserman; Eamon B O'Dea; John M Drake; Robert Pagano; Quoc T Tran; Lam Si Tung Ho; Huong Huynh; Jo W Walker; Rachel B Slayton; Michael A Johansson; Matthew Biggerstaff; Nicholas G Reich.
Preprint in English | medRxiv | ID: ppmedrxiv-21250974

ABSTRACT

Short-term probabilistic forecasts of the trajectory of the COVID-19 pandemic in the United States have served as a visible and important communication channel between the scientific modeling community and both the general public and decision-makers. Forecasting models provide specific, quantitative, and evaluable predictions that inform short-term decisions such as healthcare staffing needs, school closures, and allocation of medical supplies. Starting in April 2020, the US COVID-19 Forecast Hub (https://covid19forecasthub.org/) collected, disseminated, and synthesized tens of millions of specific predictions from more than 90 different academic, industry, and independent research groups. A multi-model ensemble forecast that combined predictions from dozens of different research groups every week provided the most consistently accurate probabilistic forecasts of incident deaths due to COVID-19 at the state and national level from April 2020 through October 2021. The performance of 27 individual models that submitted complete forecasts of COVID-19 deaths consistently throughout this year showed high variability in forecast skill across time, geospatial units, and forecast horizons. Two-thirds of the models evaluated showed better accuracy than a naive baseline model. Forecast accuracy degraded as models made predictions further into the future, with probabilistic error at a 20-week horizon 3-5 times larger than when predicting at a 1-week horizon. This project underscores the role that collaboration and active coordination between governmental public health agencies, academic modeling teams, and industry partners can play in developing modern modeling capabilities to support local, state, and federal response to outbreaks.
Significance Statement: This paper compares the probabilistic accuracy of short-term forecasts of reported deaths due to COVID-19 during the first year and a half of the pandemic in the US. Results show high variation in accuracy between and within stand-alone models, and more consistent accuracy from an ensemble model that combined forecasts from all eligible models. This demonstrates that an ensemble model provided a reliable and comparatively accurate means of forecasting deaths during the COVID-19 pandemic that exceeded the performance of all of the models that contributed to it. This work strengthens the evidence base for synthesizing multiple models to support public health action.
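
For readers unfamiliar with the scoring rule behind these comparisons, a minimal sketch of the weighted interval score (WIS) for a single forecast-observation pair follows; the quantile levels and values are toy inputs, and this is not the evaluation code used in the study.

import numpy as np

def interval_score(lower, upper, y, alpha):
    # Interval score for a central (1 - alpha) prediction interval.
    return (upper - lower) \
        + (2 / alpha) * np.maximum(lower - y, 0) \
        + (2 / alpha) * np.maximum(y - upper, 0)

def weighted_interval_score(median, lowers, uppers, alphas, y):
    # WIS: weighted average of the absolute error of the median and the
    # interval scores of K central prediction intervals.
    total = 0.5 * abs(y - median)
    for lo, up, a in zip(lowers, uppers, alphas):
        total += (a / 2) * interval_score(lo, up, y, a)
    return total / (len(alphas) + 0.5)

# Toy example: median 100, 50% PI [80, 130], 90% PI [60, 170], observed 150.
print(weighted_interval_score(100, [80, 60], [130, 170], [0.5, 0.1], 150))

Lower WIS values indicate better forecasts; the relative WIS used in these studies rescales each model's score against the other models so that performance is comparable across locations and time.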

7.
Preprint in English | medRxiv | ID: ppmedrxiv-20248736

ABSTRACT

The COVID-19 pandemic emerged in late December 2019. In the first six months of the global outbreak, the US reported more cases and deaths than any other country in the world. Effective modeling of the course of the pandemic can help assist with public health resource planning, intervention efforts, and vaccine clinical trials. However, building applied forecasting models presents unique challenges during a pandemic. First, case data available to models in real-time represent a non-stationary fraction of the true case incidence due to changes in available diagnostic tests and test-seeking behavior. Second, interventions varied across time and geography leading to large changes in transmissibility over the course of the pandemic. We propose a mechanistic Bayesian model (MechBayes) that builds upon the classic compartmental susceptible-exposed-infected-recovered (SEIR) model to operationalize COVID-19 forecasting in real time. This framework includes non-parametric modeling of varying transmission rates, non-parametric modeling of case and death discrepancies due to testing and reporting issues, and a joint observation likelihood on new case counts and new deaths; it is implemented in a probabilistic programming language to automate the use of Bayesian reasoning for quantifying uncertainty in probabilistic forecasts. The model has been used to submit forecasts to the US Centers for Disease Control, through the COVID-19 Forecast Hub. We examine the performance relative to a baseline model as well as alternate models submitted to the Forecast Hub. Additionally, we include an ablation test of our extensions to the classic SEIR model. We demonstrate a significant gain in both point and probabilistic forecast scoring measures using MechBayes when compared to a baseline model and show that MechBayes ranks as one of the top 2 models out of 10 submitted to the COVID-19 Forecast Hub. Finally, we demonstrate that MechBayes performs significantly better than the classical SEIR model.
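
MechBayes itself is specified in a probabilistic programming language; purely as a point of reference for the compartmental structure it builds on, here is a deterministic forward-Euler discretisation of the classic SEIR model with made-up parameter values (not the authors' model or parameters).

import numpy as np

def seir(beta, sigma, gamma, N, E0, I0, days, dt=0.1):
    # Classic SEIR dynamics integrated with forward Euler steps.
    # beta: transmission rate, sigma: 1/incubation period,
    # gamma: 1/infectious period (all per day).
    S, E, I, R = N - E0 - I0, E0, I0, 0.0
    out = []
    for _ in range(int(days / dt)):
        new_exposed    = beta * S * I / N * dt
        new_infectious = sigma * E * dt
        new_recovered  = gamma * I * dt
        S -= new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_recovered
        R += new_recovered
        out.append((S, E, I, R))
    return np.array(out)

# Illustrative parameters only, not fitted to data.
trajectory = seir(beta=0.4, sigma=1/5, gamma=1/7, N=1e6, E0=10, I0=5, days=180)
print(trajectory[-1])  # final compartment sizes

As the abstract describes, MechBayes replaces the fixed transmission rate with a non-parametric time-varying one and adds an observation model linking latent infections to reported cases and deaths, with Bayesian uncertainty quantification throughout.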

8.
Preprint in English | medRxiv | ID: ppmedrxiv-20196725

ABSTRACT

During early stages of the COVID-19 pandemic, forecasts provided actionable information about disease transmission to public health decision-makers. Between February and May 2020, experts in infectious disease modeling made weekly predictions about the impact of the pandemic in the U.S., which we aggregated into consensus predictions. In March and April 2020, experts predicted that the number of COVID-19 related deaths in the U.S. by the end of 2020 would be in the range of 150,000 to 250,000, with scenarios of nearly one million deaths considered plausible. The wide range of possible future outcomes underscored the uncertainty surrounding the outbreak's trajectory. Experts' predictions of measurable short-term outcomes had varying levels of accuracy over the surveys but showed appropriate levels of uncertainty when aggregated. An expert consensus model can provide important insight early on in an emerging global catastrophe.

9.
Preprint in English | medRxiv | ID: ppmedrxiv-20177493

ABSTRACT

Background: The COVID-19 pandemic has driven demand for forecasts to guide policy and planning. Previous research has suggested that combining forecasts from multiple models into a single "ensemble" forecast can increase the robustness of forecasts. Here we evaluate the real-time application of an open, collaborative ensemble to forecast deaths attributable to COVID-19 in the U.S.
Methods: Beginning on April 13, 2020, we collected and combined one- to four-week ahead forecasts of cumulative deaths for U.S. jurisdictions in standardized, probabilistic formats to generate real-time, publicly available ensemble forecasts. We evaluated the point prediction accuracy and calibration of these forecasts compared to reported deaths.
Results: Analysis of 2,512 ensemble forecasts made April 27 to July 20, with outcomes observed in the weeks ending May 23 through July 25, 2020, revealed precise short-term forecasts, with accuracy deteriorating at longer prediction horizons of up to four weeks. At all prediction horizons, the prediction intervals were well calibrated, with 92-96% of observations falling within the rounded 95% prediction intervals.
Conclusions: This analysis demonstrates that real-time, publicly available ensemble forecasts issued in April-July 2020 provided robust short-term predictions of reported COVID-19 deaths in the United States. With the ongoing need for forecasts of impacts and resource needs for the COVID-19 response, the results underscore the importance of combining multiple probabilistic models and assessing forecast skill at different prediction horizons. Careful development, assessment, and communication of ensemble forecasts can provide reliable insight to public health decision makers.
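
A small sketch of the calibration check quoted above (the share of observations falling inside the 95% prediction intervals); the arrays are placeholder values, not the study's data.

import numpy as np

def empirical_coverage(lower, upper, observed):
    # Fraction of observations that fall inside their prediction intervals.
    lower, upper, observed = map(np.asarray, (lower, upper, observed))
    return ((observed >= lower) & (observed <= upper)).mean()

# Placeholder 95% intervals and observed weekly deaths.
lower    = np.array([ 900, 1000, 1100,  950])
upper    = np.array([1500, 1700, 1900, 1600])
observed = np.array([1200, 1650, 2000, 1400])
print(empirical_coverage(lower, upper, observed))  # 0.75 in this toy example

Over many forecasts, a well-calibrated 95% interval should cover roughly 95% of observations, which is the benchmark against which the 92-96% figure above is judged.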

10.
Preprint in English | medRxiv | ID: ppmedrxiv-20066431

ABSTRACT

Background: Efforts to track the severity and public health impact of the novel coronavirus, COVID-19, in the US have been hampered by testing issues, reporting lags, and inconsistency between states. Evaluating unexplained increases in deaths attributed to broad outcomes, such as pneumonia and influenza (P&I) or all causes, can provide a more complete and consistent picture of the burden caused by COVID-19.
Methods: We evaluated increases in the occurrence of deaths due to P&I above a seasonal baseline (adjusted for influenza activity) or due to any cause across the United States in February and March 2020. These estimates are compared with reported deaths due to COVID-19 and with testing data.
Results: There were notable increases in the rate of death due to P&I in February and March 2020. In a number of states, these deaths pre-dated increases in COVID-19 testing rates and were not counted in official records as related to COVID-19. There was substantial variability between states in the discrepancy between reported rates of death due to COVID-19 and the estimated burden of excess deaths due to P&I. The increase in all-cause deaths in New York and New Jersey is 1.5-3 times higher than the official tally of COVID-19 confirmed deaths or the estimated excess deaths due to P&I.
Conclusions: Excess P&I deaths provide a conservative estimate of COVID-19 burden and indicate that COVID-19-related deaths are missed in locations with inadequate testing or intense pandemic activity.
Research in Context
Evidence before this study: Deaths due to the novel coronavirus, COVID-19, have been increasing sharply in the United States since mid-March. However, efforts to track the severity and public health impact of COVID-19 in the US have been hampered by testing issues, reporting lags, and inconsistency between states. As a result, the reported number of deaths likely represents an underestimate of the true burden.
Added value of this study: We evaluate increases in deaths due to pneumonia across the United States, relate these increases to the number of reported deaths due to COVID-19 in different states, and evaluate the trajectories of these increases in relation to the volume of testing and to indicators of COVID-19 morbidity. This provides a more complete picture of mortality due to COVID-19 in the US and demonstrates how delays in testing led to many coronavirus deaths not being counted in certain states.
Implications of all the available evidence: The number of deaths reported to be due to COVID-19 represents just a fraction of the deaths linked to the pandemic. Monitoring trends in deaths due to pneumonia and all causes provides a more complete picture of the toll of the disease.
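
The baseline in this study is adjusted for influenza activity; as a simplified sketch of the general idea, a Serfling-type harmonic regression fitted to historical weekly P&I deaths gives an expected seasonal baseline, and excess deaths are the observed counts minus that expectation. The data below are simulated placeholders, and the sketch omits the influenza adjustment used in the paper.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated historical series of weekly P&I deaths (five pre-pandemic years).
rng = np.random.default_rng(1)
weeks = np.arange(260)
deaths = 3000 + 600 * np.cos(2 * np.pi * weeks / 52.18) + rng.normal(0, 80, weeks.size)

# Serfling-type baseline: linear trend plus annual harmonic terms.
X = sm.add_constant(pd.DataFrame({
    "trend": weeks,
    "cos1": np.cos(2 * np.pi * weeks / 52.18),
    "sin1": np.sin(2 * np.pi * weeks / 52.18),
}))
baseline = sm.OLS(deaths, X).fit()

expected = np.asarray(baseline.predict(X))
excess = deaths - expected  # positive values: deaths above the seasonal baseline
print(round(excess[-8:].sum(), 1))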

11.
Preprint in English | medRxiv | ID: ppmedrxiv-20020016

ABSTRACT

A novel human coronavirus (2019-nCoV) was identified in China in December, 2019. There is limited support for many of its key epidemiologic features, including the incubation period, which has important implications for surveillance and control activities. Here, we use data from public reports of 101 confirmed cases in 38 provinces, regions, and countries outside of Wuhan (Hubei province, China) with identifiable exposure windows and known dates of symptom onset to estimate the incubation period of 2019-nCoV. We estimate the median incubation period of 2019-nCoV to be 5.2 days (95% CI: 4.4, 6.0), and 97.5% of those who develop symptoms will do so within 10.5 days (95% CI: 7.3, 15.3) of infection. These estimates imply that, under conservative assumptions, 64 out of every 10,000 cases will develop symptoms after 14 days of active monitoring or quarantine. Whether this risk is acceptable depends on the underlying risk of infection and consequences of missed cases. The estimates presented here can be used to inform policy in multiple contexts based on these judgments.
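
To make the quoted figures concrete, the back-calculation below assumes the incubation period follows a log-normal distribution, a common modelling choice that is an assumption here rather than something stated in the abstract. Fitting that distribution to the reported median and 97.5th percentile gives the implied probability of symptom onset after a 14-day monitoring window; this simple calculation will not reproduce the paper's 64-per-10,000 figure exactly, which rests on deliberately conservative assumptions.

from math import log
from scipy.stats import norm

# Reported summary statistics from the abstract.
median = 5.2   # days, median incubation period
p975   = 10.5  # days, 97.5th percentile

# Assumed log-normal parameterisation (illustrative, not the paper's fit).
mu = log(median)
sigma = (log(p975) - mu) / norm.ppf(0.975)

# Implied probability of developing symptoms only after 14 days of monitoring.
p_beyond_14 = 1 - norm.cdf((log(14) - mu) / sigma)
print(f"~{p_beyond_14 * 1e4:.0f} per 10,000 cases")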
