Results 1 - 4 of 4
1.
Sarah Wulf Hanson; Cristiana Abbafati; Joachim G Aerts; Ziyad Al-Aly; Charlie Ashbaugh; Tala Ballouz; Oleg Blyuss; Polina Bobkova; Gouke Bonsel; Svetlana Borzakova; Danilo Buonsenso; Denis Butnaru; Austin Carter; Helen Chu; Cristina De Rose; Mohamed Mustafa Diab; Emil Ekbom; Maha El Tantawi; Victor Fomin; Robert Frithiof; Aysylu Gamirova; Petr V Glybochko; Juanita A. Haagsma; Shaghayegh Haghjooy Javanmard; Erin B Hamilton; Gabrielle Harris; Majanka H Heijenbrok-Kal; Raimund Helbok; Merel E Hellemons; David Hillus; Susanne M Huijts; Michael Hultstrom; Waasila Jassat; Florian Kurth; Ing-Marie Larsson; Miklos Lipcsey; Chelsea Liu; Callan D Loflin; Andrei Malinovschi; Wenhui Mao; Lyudmila Mazankova; Denise McCulloch; Dominik Menges; Noushin Mohammadifard; Daniel Munblit; Nikita A Nekliudov; Osondu Ogbuoji; Ismail M Osmanov; Jose L. Penalvo; Maria Skaalum Petersen; Milo A Puhan; Mujibur Rahman; Verena Rass; Nickolas Reinig; Gerard M Ribbers; Antonia Ricchiuto; Sten Rubertsson; Elmira Samitova; Nizal Sarrafzadegan; Anastasia Shikhaleva; Kyle E Simpson; Dario Sinatti; Joan B Soriano; Ekaterina Spiridonova; Fridolin Steinbeis; Andrey A Svistunov; Piero Valentini; Brittney J van de Water; Rita van den Berg-Emons; Ewa Wallin; Martin Witzenrath; Yifan Wu; Hanzhang Xu; Thomas Zoller; Christopher Adolph; James Albright; Joanne O Amlag; Aleksandr Y Aravkin; Bree L Bang-Jensen; Catherine Bisignano; Rachel Castellano; Emma Castro; Suman Chakrabarti; James K Collins; Xiaochen Dai; Farah Daoud; Carolyn Dapper; Amanda Deen; Bruce B Duncan; Megan Erickson; Samuel B Ewald; Alize J Ferrari; Abraham D. Flaxman; Nancy Fullman; Amiran Gamkrelidze; John R Giles; Gaorui Guo; Simon I Hay; Jiawei He; Monika Helak; Erin N Hulland; Maia Kereselidze; Kris J Krohn; Alice Lazzar-Atwood; Akiaja Lindstrom; Rafael Lozano; Beatrice Magistro; Deborah Carvalho Malta; Johan Mansson; Ana M Mantilla Herrera; Ali H Mokdad; Lorenzo Monasta; Shuhei Nomura; Maja Pasovic; David M Pigott; Robert C Reiner Jr.; Grace Reinke; Antonio Luiz P Ribeiro; Damian Francesco Santomauro; Aleksei Sholokhov; Emma Elizabeth Spurlock; Rebecca Walcott; Ally Walker; Charles Shey Wiysonge; Peng Zheng; Janet Prvu Bettger; Christopher JL Murray; Theo Vos.
Preprint in English | medRxiv | ID: ppmedrxiv-22275532

ABSTRACT

Importance: While much of the attention on the COVID-19 pandemic was directed at the daily counts of cases and those with serious disease overwhelming health services, reports have increasingly appeared of people who experience debilitating symptoms after the initial infection, popularly known as long COVID.

Objective: To estimate, by country and territory, the number of patients affected by long COVID in 2020 and 2021, the severity of their symptoms, and the expected pattern of recovery.

Design: We jointly analyzed ten ongoing cohort studies in ten countries for the occurrence of three major symptom clusters of long COVID among representative COVID cases. The defining symptoms of the three clusters (fatigue, cognitive problems, and shortness of breath) are explicitly mentioned in the WHO clinical case definition. For incidence of long COVID, we adopted the minimum duration after infection of three months from the WHO case definition. We pooled data from the contributing studies, two large medical record databases in the United States, and findings from 44 published studies using a Bayesian meta-regression tool. We separately estimated occurrence and pattern of recovery in patients with milder acute infections and those hospitalized. We estimated the incidence and prevalence of long COVID globally and by country in 2020 and 2021, as well as the severity-weighted prevalence using disability weights from the Global Burden of Disease study.

Results: Analyses are based on detailed information for 1906 community infections and 10526 hospitalized patients from the ten collaborating cohorts, three of which included children. We added published data on 37262 community infections and 9540 hospitalized patients, as well as ICD-coded medical record data concerning 1.3 million infections. Globally, in 2020 and 2021, 144.7 million (95% uncertainty interval [UI] 54.8-312.9) people suffered from any of the three symptom clusters of long COVID. This corresponds to 3.69% (1.38-7.96) of all infections. The fatigue, respiratory, and cognitive clusters occurred in 51.0% (16.9-92.4), 60.4% (18.9-89.1), and 35.4% (9.4-75.1) of long COVID cases, respectively. Those with milder acute COVID-19 cases had a quicker estimated recovery (median duration 3.99 months [IQR 3.84-4.20]) than those admitted for the acute infection (median duration 8.84 months [IQR 8.10-9.78]). At twelve months, 15.1% (10.3-21.1) continued to experience long COVID symptoms.

Conclusions and Relevance: The occurrence of debilitating ongoing symptoms of COVID-19 is common. Knowing how many people are affected, and for how long, is important to plan for rehabilitative services and support to return to social activities, places of learning, and the workplace when symptoms start to wane.

Key Points

Question: What are the extent and nature of the most common long COVID symptoms by country in 2020 and 2021?

Findings: Globally, 144.7 million people experienced one or more of three symptom clusters of long COVID (fatigue; cognitive problems; and ongoing respiratory problems) three months after infection in 2020 and 2021. Most cases arose from milder infections. At 12 months after infection, 15.1% of these cases had not yet recovered.

Meaning: The substantial number of people with long COVID need rehabilitative care and support to transition back into the workplace or education when symptoms start to wane.
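
As a rough illustration of the severity-weighted prevalence mentioned above, the sketch below scales the overall long COVID prevalence by cluster proportions and Global Burden of Disease style disability weights. The disability weights are invented placeholders, not values from the study, and overlap between symptom clusters is ignored here, which the study's Bayesian meta-regression would handle properly.

```python
# Illustrative sketch of a severity-weighted prevalence calculation.
# Disability weights below are hypothetical placeholders, NOT study values;
# cluster shares echo the abstract's point estimates.
cluster_share = {          # share of long COVID cases in each cluster
    "fatigue": 0.510,
    "respiratory": 0.604,
    "cognitive": 0.354,
}
disability_weight = {      # hypothetical: 0 = full health, 1 = worst state
    "fatigue": 0.12,
    "respiratory": 0.19,
    "cognitive": 0.11,
}
long_covid_prevalence = 0.0369   # share of all infections with any cluster

# Severity-weighted prevalence: overall prevalence scaled by the average
# severity implied by cluster shares and weights (overlap ignored).
severity_weighted = long_covid_prevalence * sum(
    cluster_share[c] * disability_weight[c] for c in cluster_share
)
print(f"Severity-weighted prevalence per infection: {severity_weighted:.5f}")
```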

2.
Estee Y Cramer; Evan L Ray; Velma K Lopez; Johannes Bracher; Andrea Brennen; Alvaro J Castro Rivadeneira; Aaron Gerding; Tilmann Gneiting; Katie H House; Yuxin Huang; Dasuni Jayawardena; Abdul H Kanji; Ayush Khandelwal; Khoa Le; Anja Muehlemann; Jarad Niemi; Apurv Shah; Ariane Stark; Yijin Wang; Nutcha Wattanachit; Martha W Zorn; Youyang Gu; Sansiddh Jain; Nayana Bannur; Ayush Deva; Mihir Kulkarni; Srujana Merugu; Alpan Raval; Siddhant Shingi; Avtansh Tiwari; Jerome White; Neil F Abernethy; Spencer Woody; Maytal Dahan; Spencer Fox; Kelly Gaither; Michael Lachmann; Lauren Ancel Meyers; James G Scott; Mauricio Tec; Ajitesh Srivastava; Glover E George; Jeffrey C Cegan; Ian D Dettwiller; William P England; Matthew W Farthing; Robert H Hunter; Brandon Lafferty; Igor Linkov; Michael L Mayo; Matthew D Parno; Michael A Rowland; Benjamin D Trump; Yanli Zhang-James; Samuel Chen; Stephen V Faraone; Jonathan Hess; Christopher P Morley; Asif Salekin; Dongliang Wang; Sabrina M Corsetti; Thomas M Baer; Marisa C Eisenberg; Karl Falb; Yitao Huang; Emily T Martin; Ella McCauley; Robert L Myers; Tom Schwarz; Daniel Sheldon; Graham Casey Gibson; Rose Yu; Liyao Gao; Yian Ma; Dongxia Wu; Xifeng Yan; Xiaoyong Jin; Yu-Xiang Wang; YangQuan Chen; Lihong Guo; Yanting Zhao; Quanquan Gu; Jinghui Chen; Lingxiao Wang; Pan Xu; Weitong Zhang; Difan Zou; Hannah Biegel; Joceline Lega; Steve McConnell; VP Nagraj; Stephanie L Guertin; Christopher Hulme-Lowe; Stephen D Turner; Yunfeng Shi; Xuegang Ban; Robert Walraven; Qi-Jun Hong; Stanley Kong; Axel van de Walle; James A Turtle; Michal Ben-Nun; Steven Riley; Pete Riley; Ugur Koyluoglu; David DesRoches; Pedro Forli; Bruce Hamory; Christina Kyriakides; Helen Leis; John Milliken; Michael Moloney; James Morgan; Ninad Nirgudkar; Gokce Ozcan; Noah Piwonka; Matt Ravi; Chris Schrader; Elizabeth Shakhnovich; Daniel Siegel; Ryan Spatz; Chris Stiefeling; Barrie Wilkinson; Alexander Wong; Sean Cavany; Guido Espana; Sean Moore; Rachel Oidtman; Alex Perkins; David Kraus; Andrea Kraus; Zhifeng Gao; Jiang Bian; Wei Cao; Juan Lavista Ferres; Chaozhuo Li; Tie-Yan Liu; Xing Xie; Shun Zhang; Shun Zheng; Alessandro Vespignani; Matteo Chinazzi; Jessica T Davis; Kunpeng Mu; Ana Pastore y Piontti; Xinyue Xiong; Andrew Zheng; Jackie Baek; Vivek Farias; Andreea Georgescu; Retsef Levi; Deeksha Sinha; Joshua Wilde; Georgia Perakis; Mohammed Amine Bennouna; David Nze-Ndong; Divya Singhvi; Ioannis Spantidakis; Leann Thayaparan; Asterios Tsiourvas; Arnab Sarker; Ali Jadbabaie; Devavrat Shah; Nicolas Della Penna; Leo A Celi; Saketh Sundar; Russ Wolfinger; Dave Osthus; Lauren Castro; Geoffrey Fairchild; Isaac Michaud; Dean Karlen; Matt Kinsey; Luke C. Mullany; Kaitlin Rainwater-Lovett; Lauren Shin; Katharine Tallaksen; Shelby Wilson; Elizabeth C Lee; Juan Dent; Kyra H Grantz; Alison L Hill; Joshua Kaminsky; Kathryn Kaminsky; Lindsay T Keegan; Stephen A Lauer; Joseph C Lemaitre; Justin Lessler; Hannah R Meredith; Javier Perez-Saez; Sam Shah; Claire P Smith; Shaun A Truelove; Josh Wills; Maximilian Marshall; Lauren Gardner; Kristen Nixon; John C. Burant; Lily Wang; Lei Gao; Zhiling Gu; Myungjin Kim; Xinyi Li; Guannan Wang; Yueying Wang; Shan Yu; Robert C Reiner; Ryan Barber; Emmanuela Gaikedu; Simon Hay; Steve Lim; Chris Murray; David Pigott; Heidi L Gurung; Prasith Baccam; Steven A Stage; Bradley T Suchoski; B. Aditya Prakash; Bijaya Adhikari; Jiaming Cui; Alexander Rodriguez; Anika Tabassum; Jiajia Xie; Pinar Keskinocak; John Asplund; Arden Baxter; Buse Eylul Oruc; Nicoleta Serban; Sercan O Arik; Mike Dusenberry; Arkady Epshteyn; Elli Kanal; Long T Le; Chun-Liang Li; Tomas Pfister; Dario Sava; Rajarishi Sinha; Thomas Tsai; Nate Yoder; Jinsung Yoon; Leyou Zhang; Sam Abbott; Nikos I Bosse; Sebastian Funk; Joel Hellewell; Sophie R Meakin; Katharine Sherratt; Mingyuan Zhou; Rahi Kalantari; Teresa K Yamana; Sen Pei; Jeffrey Shaman; Michael L Li; Dimitris Bertsimas; Omar Skali Lami; Saksham Soni; Hamza Tazi Bouardi; Turgay Ayer; Madeline Adee; Jagpreet Chhatwal; Ozden O Dalgic; Mary A Ladd; Benjamin P Linas; Peter Mueller; Jade Xiao; Yuanjia Wang; Qinxia Wang; Shanghong Xie; Donglin Zeng; Alden Green; Jacob Bien; Logan Brooks; Addison J Hu; Maria Jahja; Daniel McDonald; Balasubramanian Narasimhan; Collin Politsch; Samyak Rajanala; Aaron Rumack; Noah Simon; Ryan J Tibshirani; Rob Tibshirani; Valerie Ventura; Larry Wasserman; Eamon B O'Dea; John M Drake; Robert Pagano; Quoc T Tran; Lam Si Tung Ho; Huong Huynh; Jo W Walker; Rachel B Slayton; Michael A Johansson; Matthew Biggerstaff; Nicholas G Reich.
Preprint in English | medRxiv | ID: ppmedrxiv-21250974

ABSTRACT

Short-term probabilistic forecasts of the trajectory of the COVID-19 pandemic in the United States have served as a visible and important communication channel between the scientific modeling community and both the general public and decision-makers. Forecasting models provide specific, quantitative, and evaluable predictions that inform short-term decisions such as healthcare staffing needs, school closures, and allocation of medical supplies. Starting in April 2020, the US COVID-19 Forecast Hub (https://covid19forecasthub.org/) collected, disseminated, and synthesized tens of millions of specific predictions from more than 90 different academic, industry, and independent research groups. A multi-model ensemble forecast that combined predictions from dozens of different research groups every week provided the most consistently accurate probabilistic forecasts of incident deaths due to COVID-19 at the state and national level from April 2020 through October 2021. The performance of 27 individual models that submitted complete forecasts of COVID-19 deaths consistently throughout this period showed high variability in forecast skill across time, geospatial units, and forecast horizons. Two-thirds of the models evaluated showed better accuracy than a naive baseline model. Forecast accuracy degraded as models made predictions further into the future, with probabilistic error at a 20-week horizon 3-5 times larger than when predicting at a 1-week horizon. This project underscores the role that collaboration and active coordination between governmental public health agencies, academic modeling teams, and industry partners can play in developing modern modeling capabilities to support local, state, and federal response to outbreaks.

Significance Statement: This paper compares the probabilistic accuracy of short-term forecasts of reported deaths due to COVID-19 during the first year and a half of the pandemic in the US. Results show high variation in accuracy between and within stand-alone models, and more consistent accuracy from an ensemble model that combined forecasts from all eligible models. This demonstrates that an ensemble model provided a reliable and comparatively accurate means of forecasting deaths during the COVID-19 pandemic that exceeded the performance of all of the models that contributed to it. This work strengthens the evidence base for synthesizing multiple models to support public health action.
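
A minimal sketch of the kind of multi-model combination described above: for each quantile level, take the median of the component models' predicted quantiles. The model names and numbers are invented, and the actual Forecast Hub ensemble procedures are documented at https://covid19forecasthub.org/; this illustrates the general idea, not the Hub's exact method.

```python
import numpy as np

# Per-quantile median ensemble, assuming every model submits forecasts at
# the same quantile levels for one location and one forecast horizon.
# Model names and values are invented for illustration.
quantile_levels = [0.025, 0.25, 0.5, 0.75, 0.975]
model_forecasts = {
    "model_A": [120, 180, 220, 270, 350],  # predicted weekly deaths
    "model_B": [100, 160, 210, 260, 330],
    "model_C": [140, 190, 240, 300, 400],
}

submissions = np.array(list(model_forecasts.values()))

# Ensemble: median across models at each quantile level; the final sort
# guards against quantile crossing (non-monotone quantile estimates).
ensemble = np.sort(np.median(submissions, axis=0))

for q, value in zip(quantile_levels, ensemble):
    print(f"q={q:5.3f}: {value:.0f} deaths")
```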

3.
Preprint in English | medRxiv | ID: ppmedrxiv-20177493

ABSTRACT

Background: The COVID-19 pandemic has driven demand for forecasts to guide policy and planning. Previous research has suggested that combining forecasts from multiple models into a single "ensemble" forecast can increase the robustness of forecasts. Here we evaluate the real-time application of an open, collaborative ensemble to forecast deaths attributable to COVID-19 in the U.S.

Methods: Beginning on April 13, 2020, we collected and combined one- to four-week-ahead forecasts of cumulative deaths for U.S. jurisdictions in standardized, probabilistic formats to generate real-time, publicly available ensemble forecasts. We evaluated the point prediction accuracy and calibration of these forecasts against reported deaths.

Results: Analysis of 2,512 ensemble forecasts made from April 27 to July 20, with outcomes observed in the weeks ending May 23 through July 25, 2020, revealed precise short-term forecasts, with accuracy deteriorating at longer prediction horizons of up to four weeks. At all prediction horizons, the prediction intervals were well calibrated, with 92-96% of observations falling within the rounded 95% prediction intervals.

Conclusions: This analysis demonstrates that real-time, publicly available ensemble forecasts issued in April-July 2020 provided robust short-term predictions of reported COVID-19 deaths in the United States. With the ongoing need for forecasts of impacts and resource needs for the COVID-19 response, the results underscore the importance of combining multiple probabilistic models and assessing forecast skill at different prediction horizons. Careful development, assessment, and communication of ensemble forecasts can provide reliable insight to public health decision makers.
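
The calibration claim above (92-96% of observations inside the rounded 95% prediction intervals) can be illustrated with a short simulation: generate forecasts and outcomes, then compute the empirical share of outcomes covered by each interval. All values here are simulated, not study data.

```python
import numpy as np

# Simulated check of empirical 95% prediction-interval coverage, echoing
# the calibration analysis described above. All numbers are synthetic.
rng = np.random.default_rng(42)

n_forecasts = 2512                               # count from the abstract
point = rng.uniform(100.0, 2000.0, n_forecasts)  # hypothetical point forecasts
sigma = 0.15 * point                             # hypothetical forecast spread
observed = point + rng.normal(0.0, sigma)        # simulated reported deaths

lower = point - 1.96 * sigma                     # 95% interval bounds under
upper = point + 1.96 * sigma                     # a normal error assumption

inside = (observed >= lower) & (observed <= upper)
print(f"Empirical 95% PI coverage: {inside.mean():.1%}")  # expect roughly 95%
```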

4.
Preprint in English | medRxiv | ID: ppmedrxiv-20151233

ABSTRACT

Background: Forecasts and alternative scenarios of the COVID-19 pandemic have been critical inputs into a range of important decisions by healthcare providers, local and national government agencies, and international organizations and actors. Hundreds of COVID-19 models have been released. Decision-makers need information about the predictive performance of these models to help select which ones should be used to guide decision-making.

Methods: We identified 383 published or publicly released COVID-19 forecasting models. Only seven models met the inclusion criteria: estimating for five or more countries, providing regular updates, forecasting at least 4 weeks from the model release date, estimating mortality, and providing date-versioned sets of previously estimated forecasts. These models included those produced by a team at MIT (Delphi), Youyang Gu (YYG), the Los Alamos National Laboratory (LANL), Imperial College London (Imperial), the USC Data Science Lab (SIKJalpha), and three models produced by the Institute for Health Metrics and Evaluation (IHME). For each of these models, we examined the median absolute percent error, compared with subsequently observed trends, for weekly and cumulative death forecasts. Errors were stratified by weeks of extrapolation, world region, and month of model estimation. For locations with epidemics showing a clear peak, each model's accuracy was also evaluated in predicting the timing of peak daily mortality.

Results: Across models, the median absolute percent error (MAPE) on cumulative deaths for models released in June rose with increased weeks of extrapolation, from 2.3% at one week to 32.6% at ten weeks. Globally, ten-week MAPE values were lowest for IHME-MS-SEIR (20.3%) and YYG (22.1%). Across models, MAPE at six weeks was highest in Sub-Saharan Africa (55.6%) and lowest in high-income countries (7.7%). Median absolute errors (MAE) for peak timing also rose with increased forecasting weeks, from 14 days at one week to 30 days at eight weeks. Peak timing MAE at eight weeks ranged from 24 days for the IHME Curve Fit model to 48 days for LANL.

Interpretation: Five of the models, from IHME, YYG, Delphi, SIKJalpha, and LANL, had less than 20% MAPE at six weeks. Despite the complexities of modelling human behavioural responses and government interventions related to COVID-19, predictions among these better-performing models were surprisingly accurate. Forecasts and alternative scenarios can be a useful input to decision-makers, although users should be aware of increasing errors with greater extrapolation time, and of correspondingly widening uncertainty intervals further into the future. The framework and publicly available codebase presented here can be routinely used to evaluate the performance of all publicly released models meeting the inclusion criteria, and to compare current model predictions.
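
A small sketch of the headline error metric above: the median absolute percent error of cumulative death forecasts, stratified by weeks of extrapolation. The forecast and observed values below are invented solely to show the computation; they are not the study's data.

```python
import numpy as np

# Median absolute percent error (MAPE, as defined in the abstract) of
# cumulative death forecasts, stratified by weeks of extrapolation.
# All values are invented for illustration.
forecasts = {
    # weeks ahead: [(predicted, observed) cumulative deaths, ...]
    1:  [(1020, 1000), (510, 500), (2100, 2050)],
    6:  [(1300, 1000), (620, 500), (2600, 2050)],
    10: [(1700, 1000), (800, 500), (3200, 2050)],
}

for weeks, pairs in forecasts.items():
    errors = [abs(pred - obs) / obs * 100 for pred, obs in pairs]
    print(f"{weeks:>2}-week horizon: median absolute percent error = "
          f"{np.median(errors):.1f}%")
```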
