Results 1 - 7 of 7
1.
Diagn Progn Res ; 6(1): 6, 2022 Feb 24.
Article in English | MEDLINE | ID: covidwho-1702772

ABSTRACT

BACKGROUND: Obtaining accurate estimates of the risk of COVID-19-related death in the general population is challenging in the context of changing levels of circulating infection. METHODS: We propose a modelling approach to predict 28-day COVID-19-related death that explicitly accounts for COVID-19 infection prevalence, using a series of sub-studies from new landmark times that incorporate time-updating proxy measures of COVID-19 infection prevalence. This was compared with an approach ignoring infection prevalence. The target population was adults registered at a general practice in England in March 2020. The outcome was 28-day COVID-19-related death. Predictors included demographic characteristics and comorbidities. Three proxies of local infection prevalence were used: model-based estimates, rate of COVID-19-related attendances in emergency care, and rate of suspected COVID-19 cases in primary care. We used data within the TPP SystmOne electronic health record system linked to Office for National Statistics mortality data, using the OpenSAFELY platform, working on behalf of NHS England. Prediction models were developed in case-cohort samples with a 100-day follow-up. Validation was undertaken in 28-day cohorts from the target population. We considered predictive performance (discrimination and calibration) in geographical and temporal subsets of data not used in developing the risk prediction models. Simple models were contrasted with models including a full range of predictors. RESULTS: Prediction models were developed on 11,972,947 individuals, of whom 7999 experienced COVID-19-related death. All models discriminated well between individuals who did and did not experience the outcome, including simple models adjusting only for basic demographics and number of comorbidities: C-statistics 0.92-0.94. However, absolute risk estimates were substantially miscalibrated when infection prevalence was not explicitly modelled. CONCLUSIONS: Our proposed models allow absolute risk estimation in the context of changing infection prevalence, but predictive performance is sensitive to the proxy for infection prevalence. Simple models can provide excellent discrimination and may simplify implementation of risk prediction tools.
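As a rough illustration of the validation metrics reported in the abstract above (a C-statistic for discrimination and calibration of absolute risk), the sketch below computes both on a held-out 28-day cohort. It is not the authors' OpenSAFELY pipeline; the DataFrame and the column names `predicted_risk` and `died_28d` are hypothetical.

```python
# Illustrative sketch only -- not the authors' OpenSAFELY code.
# Assumes a validation DataFrame with hypothetical columns:
#   predicted_risk : model-predicted 28-day risk of COVID-19-related death
#   died_28d       : observed binary outcome (1 = died within 28 days)
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def assess_performance(validation: pd.DataFrame) -> dict:
    y = validation["died_28d"].to_numpy()
    p = validation["predicted_risk"].clip(1e-6, 1 - 1e-6).to_numpy()

    # Discrimination: C-statistic (area under the ROC curve).
    c_statistic = roc_auc_score(y, p)

    # Calibration: regress the outcome on the logit of the predicted risk;
    # a slope near 1 and an intercept near 0 indicate good calibration.
    logit_p = np.log(p / (1 - p))
    cal_model = sm.Logit(y, sm.add_constant(logit_p)).fit(disp=0)
    intercept, slope = cal_model.params

    return {"c_statistic": c_statistic,
            "calibration_intercept": intercept,
            "calibration_slope": slope}
```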

2.
BMC Med Res Methodol ; 22(1): 35, 2022 01 30.
Article in English | MEDLINE | ID: covidwho-1699687

ABSTRACT

BACKGROUND: We investigated whether we could use influenza data to develop prediction models for COVID-19 to increase the speed at which prediction models can reliably be developed and validated early in a pandemic. We developed COVID-19 Estimated Risk (COVER) scores that quantify a patient's risk of hospital admission with pneumonia (COVER-H), hospitalization with pneumonia requiring intensive services or death (COVER-I), or fatality (COVER-F) in the 30 days following COVID-19 diagnosis using historical data from patients with influenza or flu-like symptoms and tested this in COVID-19 patients. METHODS: We analyzed a federated network of electronic medical records and administrative claims data from 14 data sources and 6 countries containing data collected on or before 4/27/2020. We used a 2-step process to develop 3 scores using historical data from patients with influenza or flu-like symptoms any time prior to 2020. The first step was to create a data-driven model using LASSO regularized logistic regression, the covariates of which were used to develop aggregate covariates for the second step, in which the COVER scores were developed using a smaller set of features. These 3 COVER scores were then externally validated on patients with 1) influenza or flu-like symptoms and 2) confirmed or suspected COVID-19 diagnosis across 5 databases from South Korea, Spain, and the United States. Outcomes included i) hospitalization with pneumonia, ii) hospitalization with pneumonia requiring intensive services or death, and iii) death in the 30 days after index date. RESULTS: Overall, 44,507 COVID-19 patients were included for model validation. We identified 7 predictors (history of cancer, chronic obstructive pulmonary disease, diabetes, heart disease, hypertension, hyperlipidemia, kidney disease) which, combined with age and sex, discriminated which patients would experience any of our three outcomes. The models achieved good performance in influenza and COVID-19 cohorts. For COVID-19, the AUC ranges were COVER-H: 0.69-0.81, COVER-I: 0.73-0.91, and COVER-F: 0.72-0.90. Calibration varied across the validations, with some of the COVID-19 validations being less well calibrated than the influenza validations. CONCLUSIONS: This research demonstrated the utility of using a proxy disease to develop a prediction model. The 3 COVER models with 9 predictors that were developed using influenza data perform well for COVID-19 patients for predicting hospitalization, intensive services, and fatality. The scores showed good discriminatory performance, which transferred well to the COVID-19 population. There was some miscalibration in the COVID-19 validations, which is potentially due to the difference in symptom severity between the two diseases. A possible solution for this is to recalibrate the models in each location before use.


Subject(s)
COVID-19; Influenza, Human; Pneumonia; COVID-19 Testing; Humans; Influenza, Human/epidemiology; SARS-CoV-2; United States
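As a minimal sketch of the 2-step approach described in the abstract of citation 2 (a data-driven LASSO step followed by a simpler score on the reduced predictor set), the hypothetical example below uses scikit-learn. It is not the OHDSI/COVER implementation; the feature matrix, outcome vector, and feature names are placeholders.

```python
# Minimal, hypothetical sketch of a 2-step LASSO-then-simple-score workflow
# (illustrative only, not the COVER code).
import numpy as np
from sklearn.linear_model import LogisticRegression

def two_step_score(X: np.ndarray, y: np.ndarray, feature_names: list[str]):
    # Step 1: data-driven LASSO (L1-penalised) logistic regression.
    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    lasso.fit(X, y)
    selected = [i for i, coef in enumerate(lasso.coef_[0]) if abs(coef) > 0]

    # Step 2: refit an unpenalised logistic model on the reduced predictor set
    # (standing in for the small set of aggregate covariates in the COVER scores).
    simple = LogisticRegression(penalty=None).fit(X[:, selected], y)
    return [feature_names[i] for i in selected], simple
```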
3.
BMC Health Serv Res ; 21(1): 957, 2021 Sep 13.
Article in English | MEDLINE | ID: covidwho-1405306

ABSTRACT

BACKGROUND: The novel coronavirus SARS-CoV-2 causes COVID-19 in symptomatic patients. COVID-19 patients admitted to hospital require early assessment and care, including isolation. The National Early Warning Score (NEWS) and its updated version, NEWS2, are simple physiological scoring systems used in hospitals that may be useful in the early identification of COVID-19 patients. We investigated the performance of multiple enhanced NEWS2 models in predicting the risk of COVID-19. METHODS: Our cohort included unplanned adult medical admissions discharged over 3 months (11 March 2020 to 13 June 2020) from two hospitals (YH for model development; SH for external model validation). We used logistic regression to build multiple prediction models for the risk of COVID-19 using the first electronically recorded NEWS2 within ±24 hours of admission. Model M0' included NEWS2; model M1' included NEWS2 + age + sex; and model M2' extended model M1' with subcomponents of NEWS2 (including diastolic blood pressure + oxygen flow rate + oxygen scale). Model performance was evaluated according to discrimination (c-statistic), calibration (graphically), and clinical usefulness at NEWS2 ≥ 5. RESULTS: The prevalence of COVID-19 was higher in SH (11.0% = 277/2520) than YH (8.7% = 343/3924), with higher first NEWS2 scores (SH 3.2 vs YH 2.8) but similar in-hospital mortality (SH 8.4% vs YH 8.2%). The c-statistics for predicting the risk of COVID-19 for models M0', M1', and M2' in the development dataset were: M0': 0.71 (95% CI 0.68-0.74); M1': 0.67 (95% CI 0.64-0.70); and M2': 0.78 (95% CI 0.75-0.80). For the validation dataset the c-statistics were: M0': 0.65 (95% CI 0.61-0.68); M1': 0.67 (95% CI 0.64-0.70); and M2': 0.72 (95% CI 0.69-0.75). The calibration slope was similar across all models, but model M2' had the highest sensitivity (M0': 44% (95% CI 38-50%); M1': 53% (95% CI 47-59%); and M2': 57% (95% CI 51-63%)) and specificity (M0': 75% (95% CI 73-77%); M1': 72% (95% CI 70-74%); and M2': 76% (95% CI 74-78%)) in the validation dataset at NEWS2 ≥ 5. CONCLUSIONS: Model M2' appears to be reasonably accurate for predicting the risk of COVID-19. It may be clinically useful as an early warning system at the time of admission, especially to triage large numbers of unplanned hospital admissions.


Subject(s)
COVID-19; Early Warning Score; Adult; Hospitals; Humans; Patient Admission; Retrospective Studies; SARS-CoV-2
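For illustration, the nested logistic models M0', M1', and M2' described in the abstract of citation 3 could be specified roughly as in the sketch below. The data frame and column names (covid19, news2, dbp, oxygen_flow_rate, oxygen_scale) are hypothetical stand-ins, not the study's dataset.

```python
# Hypothetical specification of three nested NEWS2-based logistic models
# (M0', M1', M2' in the abstract) -- illustrative column names only.
import pandas as pd
import statsmodels.formula.api as smf

def fit_news2_models(df: pd.DataFrame) -> dict:
    # Assumed columns: covid19 (0/1 outcome), news2 (first NEWS2 score),
    # age, sex, dbp (diastolic BP), oxygen_flow_rate, oxygen_scale.
    formulas = {
        "M0": "covid19 ~ news2",                              # NEWS2 alone
        "M1": "covid19 ~ news2 + age + C(sex)",               # + demographics
        "M2": ("covid19 ~ news2 + age + C(sex) + dbp "        # + NEWS2 subcomponents
               "+ oxygen_flow_rate + C(oxygen_scale)"),
    }
    return {name: smf.logit(f, data=df).fit(disp=0) for name, f in formulas.items()}
```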
4.
Crit Care Explor ; 3(5): e0402, 2021 May.
Article in English | MEDLINE | ID: covidwho-1254873

ABSTRACT

BACKGROUND: Acute respiratory failure occurs frequently in hospitalized patients, often begins outside the ICU, and is associated with increased length of stay, cost, and mortality. Delays in recognizing decompensation are associated with worse outcomes. OBJECTIVES: The objective of this study was to predict acute respiratory failure requiring any advanced respiratory support (including noninvasive ventilation). With the advent of the coronavirus disease pandemic, concern regarding acute respiratory failure has increased. DERIVATION COHORT: All admission encounters from January 2014 to June 2017 from three hospitals in the Emory Healthcare network (82,699). VALIDATION COHORT: External validation cohort: all admission encounters from January 2014 to June 2017 from a fourth hospital in the Emory Healthcare network (40,143). Temporal validation cohort: all admission encounters from February to April 2020 from four hospitals in the Emory Healthcare network: coronavirus disease tested (2,564) and coronavirus disease positive (389). PREDICTION MODEL: All admission encounters had vital signs, laboratory, and demographic data extracted. Exclusion criteria included invasive mechanical ventilation started within the operating room or advanced respiratory support within the first 8 hours of admission. Encounters were discretized into hourly intervals from 8 hours after admission to discharge or advanced respiratory support initiation and binary-labeled for advanced respiratory support. Prediction of Acute Respiratory Failure requiring advanced respiratory support in Advance of Interventions and Treatment, our eXtreme Gradient Boosting-based algorithm, was compared against the Modified Early Warning Score. RESULTS: Prediction of Acute Respiratory Failure requiring advanced respiratory support in Advance of Interventions and Treatment had significantly better discrimination than the Modified Early Warning Score (area under the receiver operating characteristic curve 0.85 vs 0.57 [test], 0.84 vs 0.61 [external validation]). Prediction of Acute Respiratory Failure requiring advanced respiratory support in Advance of Interventions and Treatment maintained a positive predictive value (0.31-0.21) similar to that of Modified Early Warning Score greater than 4 (0.29-0.25) while identifying 6.62 (validation) to 9.58 (test) times more true positives. Furthermore, Prediction of Acute Respiratory Failure requiring advanced respiratory support in Advance of Interventions and Treatment performed more effectively in temporal validation (area under the receiver operating characteristic curve 0.86 [coronavirus disease tested], 0.93 [coronavirus disease positive]), while identifying 4.25-4.51 times more true positives. CONCLUSIONS: Prediction of Acute Respiratory Failure requiring advanced respiratory support in Advance of Interventions and Treatment is more effective than the Modified Early Warning Score in predicting respiratory failure requiring advanced respiratory support at external validation and in coronavirus disease 2019 patients. Silent prospective validation is necessary before local deployment.
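As a hedged sketch of the kind of comparison described in the abstract above (a gradient-boosted classifier on hourly patient-level features versus the Modified Early Warning Score, compared by AUROC), the example below uses the xgboost and scikit-learn APIs with hypothetical inputs. It is not the published Emory model or its feature set.

```python
# Illustrative comparison of a gradient-boosted classifier against MEWS by AUROC
# (hypothetical inputs; not the published implementation).
from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score

def train_and_compare(X_train, y_train, X_test, y_test, mews_test):
    # Binary label: advanced respiratory support initiated in the prediction window.
    model = XGBClassifier(n_estimators=300, max_depth=4,
                          learning_rate=0.05, eval_metric="logloss")
    model.fit(X_train, y_train)

    auc_model = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    auc_mews = roc_auc_score(y_test, mews_test)  # treat the raw MEWS as a risk score
    return auc_model, auc_mews
```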

5.
JMIR Med Inform ; 9(4): e21547, 2021 Apr 05.
Article in English | MEDLINE | ID: covidwho-1195972

ABSTRACT

BACKGROUND: SARS-CoV-2 is straining health care systems globally. The burden on hospitals during the pandemic could be reduced by implementing prediction models that can discriminate patients who require hospitalization from those who do not. The COVID-19 vulnerability (C-19) index, a model that predicts which patients will be admitted to hospital for treatment of pneumonia or pneumonia proxies, has been developed and proposed as a valuable tool for decision-making during the pandemic. However, the model is at high risk of bias according to the "prediction model risk of bias assessment" criteria, and it has not been externally validated. OBJECTIVE: The aim of this study was to externally validate the C-19 index across a range of health care settings to determine how well it broadly predicts hospitalization due to pneumonia in COVID-19 cases. METHODS: We followed the Observational Health Data Sciences and Informatics (OHDSI) framework for external validation to assess the reliability of the C-19 index. We evaluated the model on two different target populations, 41,381 patients who presented with SARS-CoV-2 at an outpatient or emergency department visit and 9,429,285 patients who presented with influenza or related symptoms during an outpatient or emergency department visit, to predict their risk of hospitalization with pneumonia during the following 0-30 days. In total, we validated the model across a network of 14 databases spanning the United States, Europe, Australia, and Asia. RESULTS: The internal validation performance of the C-19 index had a C statistic of 0.73, and the calibration was not reported by the authors. When we externally validated it by transporting it to SARS-CoV-2 data, the model obtained C statistics of 0.36, 0.53 (0.473-0.584) and 0.56 (0.488-0.636) on Spanish, US, and South Korean data sets, respectively. The calibration was poor, with the model underestimating risk. When validated on 12 data sets containing influenza patients across the OHDSI network, the C statistics ranged between 0.40 and 0.68. CONCLUSIONS: Our results show that the discriminative performance of the C-19 index model is low for influenza cohorts and even worse among patients with COVID-19 in the United States, Spain, and South Korea. These results suggest that C-19 should not be used to aid decision-making during the COVID-19 pandemic. Our findings highlight the importance of performing external validation across a range of settings, especially when a prediction model is being extrapolated to a different population. In the field of prediction, extensive validation is required to create appropriate trust in a model.
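To make the external-validation step in the abstract above concrete, the sketch below applies a previously published logistic score with fixed coefficients to a new dataset and reports discrimination plus a crude observed-versus-expected calibration check. The coefficients and column names are invented placeholders, not the actual C-19 index.

```python
# Sketch of externally validating a fixed, previously published logistic score
# on a new dataset (placeholder coefficients and columns -- NOT the real C-19 index).
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical published coefficients on the log-odds scale.
PUBLISHED_COEFS = {"intercept": -4.2, "age": 0.03, "male": 0.25, "copd": 0.60}

def external_validation(df: pd.DataFrame, outcome: str = "hosp_pneumonia_30d") -> dict:
    # Linear predictor from the fixed coefficients, then inverse-logit to get risks.
    lp = PUBLISHED_COEFS["intercept"] + sum(
        PUBLISHED_COEFS[k] * df[k] for k in PUBLISHED_COEFS if k != "intercept")
    risk = 1 / (1 + np.exp(-lp))

    return {
        "c_statistic": roc_auc_score(df[outcome], risk),  # discrimination
        "observed_rate": df[outcome].mean(),              # crude calibration check
        "expected_rate": risk.mean(),
    }
```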

6.
JACC CardioOncol ; 2(3): 411-413, 2020 Sep.
Article in English | MEDLINE | ID: covidwho-888591
7.
BMJ ; 369: m1328, 2020 04 07.
Article in English | MEDLINE | ID: covidwho-648504

ABSTRACT

OBJECTIVE: To review and appraise the validity and usefulness of published and preprint reports of prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of covid-19 infection or being admitted to hospital with the disease. DESIGN: Living systematic review and critical appraisal by the COVID-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group. DATA SOURCES: PubMed and Embase through Ovid, up to 1 July 2020, supplemented with arXiv, medRxiv, and bioRxiv up to 5 May 2020. STUDY SELECTION: Studies that developed or validated a multivariable covid-19 related prediction model. DATA EXTRACTION: At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool). RESULTS: 37 421 titles were screened, and 169 studies describing 232 prediction models were included. The review identified seven models for identifying people at risk in the general population; 118 diagnostic models for detecting covid-19 (75 were based on medical imaging, 10 to diagnose disease severity); and 107 prognostic models for predicting mortality risk, progression to severe disease, intensive care unit admission, ventilation, intubation, or length of hospital stay. The most frequent types of predictors included in the covid-19 prediction models are vital signs, age, comorbidities, and image features. Flu-like symptoms are frequently predictive in diagnostic models, while sex, C reactive protein, and lymphocyte counts are frequent prognostic factors. Reported C index estimates from the strongest form of validation available per model ranged from 0.71 to 0.99 in prediction models for the general population, from 0.65 to more than 0.99 in diagnostic models, and from 0.54 to 0.99 in prognostic models. All models were rated at high or unclear risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, high risk of model overfitting, and unclear reporting. Many models did not include a description of the target population (n=27, 12%) or care setting (n=75, 32%), and only 11 (5%) were externally validated by a calibration plot. The Jehi diagnostic model and the 4C mortality score were identified as promising models. CONCLUSION: Prediction models for covid-19 are quickly entering the academic literature to support medical decision making at a time when they are urgently needed. This review indicates that almost all published prediction models are poorly reported and at high risk of bias, such that their reported predictive performance is probably optimistic. However, we have identified two (one diagnostic and one prognostic) promising models that should soon be validated in multiple cohorts, preferably through collaborative efforts and data sharing to also allow an investigation of the stability and heterogeneity in their performance across populations and settings. Details on all reviewed models are publicly available at https://www.covprecise.org/.
Methodological guidance as provided in this paper should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, prediction model authors should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline. SYSTEMATIC REVIEW REGISTRATION: Protocol https://osf.io/ehc47/, registration https://osf.io/wy245. READERS' NOTE: This article is a living systematic review that will be updated to reflect emerging evidence. Updates may occur for up to two years from the date of original publication. This version is update 3 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp). When citing this paper please consider adding the update number and date of access for clarity.


Subject(s)
Coronavirus Infections/diagnosis; Models, Theoretical; Pneumonia, Viral/diagnosis; COVID-19; Coronavirus; Disease Progression; Hospitalization/statistics & numerical data; Humans; Multivariate Analysis; Pandemics; Prognosis