Results 1 - 7 of 7
1.
J Clin Epidemiol; 172: 111387, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38729274

ABSTRACT

Clinical prediction models provide risks of health outcomes that can inform patients and support medical decisions. However, most models are never actually implemented in practice. A commonly heard reason for this lack of implementation is that prediction models are often not externally validated. While we generally encourage external validation, we argue that an external validation is often neither sufficient nor required as an essential step before implementation. As such, any available external validation should not be perceived as a license for model implementation. We clarify this argument by discussing 3 common misconceptions about external validation. We argue that there is not one recommended type of validation design, that external validation is not always necessary, and that multiple external validations are sometimes needed. The insights from this paper can help readers to consider, design, interpret, and appreciate external validation studies.

2.
Eur Heart J; 44(46): 4831-4834, 2023 Dec 07.
Article in English | MEDLINE | ID: mdl-37897346

ABSTRACT

To raise the quality of clinical artificial intelligence (AI) prediction modelling studies in the cardiovascular health domain, and thereby improve their impact and relevance, the editors for digital health, innovation, and quality standards of the European Heart Journal propose five minimal quality criteria for AI-based prediction model development and validation studies: complete reporting, a carefully defined intended use of the model, rigorous validation, a large enough sample size, and openness of code and software.


Subject(s)
Artificial Intelligence; Software; Humans; Heart
3.
BMJ; 378: e069881, 2022 Jul 12.
Article in English | MEDLINE | ID: mdl-35820692

ABSTRACT

OBJECTIVE: To externally validate various prognostic models and scoring rules for predicting short-term mortality in patients admitted to hospital for covid-19. DESIGN: Two-stage individual participant data meta-analysis. SETTING: Secondary and tertiary care. PARTICIPANTS: 46 914 patients across 18 countries, admitted to a hospital with polymerase chain reaction confirmed covid-19 from November 2019 to April 2021. DATA SOURCES: Multiple (clustered) cohorts in Brazil, Belgium, China, Czech Republic, Egypt, France, Iran, Israel, Italy, Mexico, Netherlands, Portugal, Russia, Saudi Arabia, Spain, Sweden, United Kingdom, and United States previously identified by a living systematic review of covid-19 prediction models published in The BMJ, and through PROSPERO, reference checking, and expert knowledge. MODEL SELECTION AND ELIGIBILITY CRITERIA: Prognostic models identified by the living systematic review and through contacting experts. Models were excluded a priori if they had a high risk of bias in the participant domain of PROBAST (prediction model study risk of bias assessment tool) or if their applicability was deemed poor. METHODS: Eight prognostic models with diverse predictors were identified and validated. A two-stage individual participant data meta-analysis was performed of the estimated model concordance (C) statistic, calibration slope, calibration-in-the-large, and observed to expected ratio (O:E) across the included clusters. MAIN OUTCOME MEASURES: 30-day mortality or in-hospital mortality. RESULTS: Datasets included 27 clusters from 18 different countries and contained data on 46 914 patients. The pooled estimates ranged from 0.67 to 0.80 (C statistic), 0.22 to 1.22 (calibration slope), and 0.18 to 2.59 (O:E ratio) and were prone to substantial between-study heterogeneity. The 4C Mortality Score by Knight et al (pooled C statistic 0.80, 95% confidence interval 0.75 to 0.84, 95% prediction interval 0.72 to 0.86) and the clinical model by Wang et al (0.77, 0.73 to 0.80, 0.63 to 0.87) had the highest discriminative ability. On average, 29% fewer deaths were observed than predicted by the 4C Mortality Score (pooled O:E 0.71, 95% confidence interval 0.45 to 1.11, 95% prediction interval 0.21 to 2.39), 35% fewer than predicted by the Wang clinical model (0.65, 0.52 to 0.82, 0.23 to 1.89), and 4% fewer than predicted by Xie et al's model (0.96, 0.59 to 1.55, 0.21 to 4.28). CONCLUSION: The prognostic value of the included models varied greatly between the data sources. Although the Knight 4C Mortality Score and the Wang clinical model appeared most promising, recalibration (intercept and slope updates) is needed before implementation in routine care.
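As a rough illustration of the validation measures named above (C statistic, calibration slope, calibration-in-the-large, and O:E ratio) and of the two-stage pooling step, the Python sketch below computes each metric per cluster and then pools the cluster estimates with DerSimonian-Laird random-effects weights. It is not the authors' analysis code; the toy data, variances, and function names are hypothetical.

```python
# Illustrative sketch only: per-cluster validation metrics for an existing risk
# model, followed by simple DerSimonian-Laird random-effects pooling. The toy
# data and variances are made up; this is not the study's analysis code.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def cluster_metrics(y, p):
    """Validation metrics for one cluster: outcomes y (0/1), predicted risks p."""
    lp = np.log(p / (1 - p))                       # linear predictor (logit of risk)
    c_stat = roc_auc_score(y, p)                   # concordance (C) statistic
    # Calibration slope: logistic regression of the outcome on the linear predictor.
    slope = sm.Logit(y, sm.add_constant(lp)).fit(disp=0).params[1]
    # Calibration-in-the-large: intercept with the slope fixed at 1 (offset term).
    citl = sm.GLM(y, np.ones((len(y), 1)), family=sm.families.Binomial(),
                  offset=lp).fit().params[0]
    oe = y.mean() / p.mean()                       # observed:expected ratio
    return c_stat, slope, citl, oe

def pool_random_effects(estimates, variances):
    """Second stage: pool per-cluster estimates with DerSimonian-Laird weights."""
    est, var = np.asarray(estimates, float), np.asarray(variances, float)
    w = 1 / var
    fixed = np.sum(w * est) / np.sum(w)
    q = np.sum(w * (est - fixed) ** 2)
    tau2 = max(0.0, (q - (len(est) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1 / (var + tau2)
    return np.sum(w_star * est) / np.sum(w_star)

# Toy example: three clusters with simulated predicted risks and outcomes.
per_cluster = []
for n in (400, 800, 600):
    p = rng.uniform(0.02, 0.6, size=n)             # risks predicted by the model
    y = rng.binomial(1, 0.8 * p)                   # outcomes, deliberately miscalibrated
    per_cluster.append(cluster_metrics(y, p))

c_stats = [m[0] for m in per_cluster]
variances = [0.002, 0.001, 0.0015]                 # crude per-cluster variances (illustrative)
print("pooled C statistic:", round(pool_random_effects(c_stats, variances), 3))
```

The recalibration recommended in the conclusion corresponds to refitting the intercept and slope of the linear predictor on local data, i.e. the same regression used for the calibration slope above.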


Subject(s)
COVID-19; Models, Statistical; Data Analysis; Hospital Mortality; Humans; Prognosis
5.
PLoS One; 17(4): e0266750, 2022.
Article in English | MEDLINE | ID: mdl-35404964

ABSTRACT

OBJECTIVES: Cardiovascular conditions were shown to be predictive of clinical deterioration in hospitalised patients with coronavirus disease 2019 (COVID-19). Whether this also holds for outpatients managed in primary care is as yet unknown. The aim of this study was to determine the incremental value of cardiovascular vulnerability in predicting the risk of hospital referral in primary care COVID-19 outpatients. DESIGN: Analysis of anonymised routine care data extracted from electronic medical records from three large Dutch primary care registries. SETTING: Primary care. PARTICIPANTS: Consecutive adult patients seen in primary care for COVID-19 symptoms in the 'first wave' of COVID-19 infections (March 1 2020 to June 1 2020) and in the 'second wave' (June 1 2020 to April 15 2021) in the Netherlands. OUTCOME MEASURES: A multivariable logistic regression model was fitted to predict hospital referral within 90 days after the first COVID-19 consultation in primary care. Data from the 'first wave' were used for derivation (n = 5,475 patients). Age, sex, the interaction between age and sex, and the number of cardiovascular conditions and/or diabetes (0, 1, or ≥2) were pre-specified as candidate predictors. This full model was (i) compared with a simple model including only age, sex, and their interaction, and (ii) externally validated in COVID-19 patients during the 'second wave' (n = 16,693). RESULTS: The full model performed better than the simple model (likelihood ratio test p<0.001). Older male patients with multiple cardiovascular conditions and/or diabetes had the highest predicted risk of hospital referral, reaching risks above 15-20%, whereas on average this risk was 5.1%. The temporally validated c-statistic was 0.747 (95% CI 0.729-0.764) and the model showed good calibration upon validation. CONCLUSIONS: For patients with COVID-19 symptoms managed in primary care, the risk of hospital referral was on average 5.1%. Older, male, and cardiovascularly vulnerable COVID-19 patients were at higher risk of hospital referral.
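For readers unfamiliar with this modelling approach, the sketch below shows, on simulated data, what "full versus simple logistic model, likelihood ratio test, then temporal validation" looks like in practice. Column names, coefficients, and sample sizes are invented and bear no relation to the study's data.

```python
# Hypothetical sketch of the abstract's modelling approach: a 'full' logistic model
# (age, sex, their interaction, and cardiovascular burden) compared with a 'simple'
# model (age, sex, interaction) by a likelihood ratio test, then validated on a
# later cohort. All data below are simulated; this is not the study's code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 6000
df = pd.DataFrame({
    "age": rng.integers(18, 95, n),
    "male": rng.integers(0, 2, n),
    "cv_count": rng.choice([0, 1, 2], size=n, p=[0.6, 0.25, 0.15]),  # 2 stands for ">=2"
})
lin = -7 + 0.05 * df.age + 0.3 * df.male + 0.5 * df.cv_count
df["referral"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

# 'First wave' rows for derivation, 'second wave' rows for temporal validation.
dev, val = df.iloc[:4000], df.iloc[4000:]
simple = smf.logit("referral ~ age * male", data=dev).fit(disp=0)
full = smf.logit("referral ~ age * male + C(cv_count)", data=dev).fit(disp=0)

# Likelihood ratio test: does cardiovascular burden add predictive information?
lr = 2 * (full.llf - simple.llf)
p_value = stats.chi2.sf(lr, df=full.df_model - simple.df_model)
c_stat = roc_auc_score(val["referral"], full.predict(val))
print(f"LR test p = {p_value:.4g}; temporally validated c-statistic = {c_stat:.3f}")
```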


Subject(s)
COVID-19; Clinical Deterioration; Adult; COVID-19/epidemiology; COVID-19/therapy; Hospitalization; Humans; Male; Primary Health Care; SARS-CoV-2
6.
BMJ; 369: m1328, 2020 Apr 07.
Article in English | MEDLINE | ID: mdl-32265220

ABSTRACT

OBJECTIVE: To review and appraise the validity and usefulness of published and preprint reports of prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of covid-19 infection or being admitted to hospital with the disease. DESIGN: Living systematic review and critical appraisal by the COVID-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group. DATA SOURCES: PubMed and Embase through Ovid, up to 1 July 2020, supplemented with arXiv, medRxiv, and bioRxiv up to 5 May 2020. STUDY SELECTION: Studies that developed or validated a multivariable covid-19 related prediction model. DATA EXTRACTION: At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool). RESULTS: 37 421 titles were screened, and 169 studies describing 232 prediction models were included. The review identified seven models for identifying people at risk in the general population; 118 diagnostic models for detecting covid-19 (75 were based on medical imaging, 10 to diagnose disease severity); and 107 prognostic models for predicting mortality risk, progression to severe disease, intensive care unit admission, ventilation, intubation, or length of hospital stay. The most frequent types of predictors included in the covid-19 prediction models are vital signs, age, comorbidities, and image features. Flu-like symptoms are frequently predictive in diagnostic models, while sex, C reactive protein, and lymphocyte counts are frequent prognostic factors. Reported C index estimates from the strongest form of validation available per model ranged from 0.71 to 0.99 in prediction models for the general population, from 0.65 to more than 0.99 in diagnostic models, and from 0.54 to 0.99 in prognostic models. All models were rated at high or unclear risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, high risk of model overfitting, and unclear reporting. Many models did not include a description of the target population (n=27, 12%) or care setting (n=75, 32%), and only 11 (5%) were externally validated by a calibration plot. The Jehi diagnostic model and the 4C mortality score were identified as promising models. CONCLUSION: Prediction models for covid-19 are quickly entering the academic literature to support medical decision making at a time when they are urgently needed. This review indicates that almost all published prediction models are poorly reported and at high risk of bias, such that their reported predictive performance is probably optimistic. However, we have identified two (one diagnostic and one prognostic) promising models that should soon be validated in multiple cohorts, preferably through collaborative efforts and data sharing to also allow an investigation of the stability and heterogeneity in their performance across populations and settings. Details on all reviewed models are publicly available at https://www.covprecise.org/.
Methodological guidance as provided in this paper should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, prediction model authors should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline. SYSTEMATIC REVIEW REGISTRATION: Protocol https://osf.io/ehc47/, registration https://osf.io/wy245. READERS' NOTE: This article is a living systematic review that will be updated to reflect emerging evidence. Updates may occur for up to two years from the date of original publication. This version is update 3 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp). When citing this paper please consider adding the update number and date of access for clarity.


Subject(s)
Coronavirus Infections/diagnosis; Models, Theoretical; Pneumonia, Viral/diagnosis; COVID-19; Coronavirus; Disease Progression; Hospitalization/statistics & numerical data; Humans; Multivariate Analysis; Pandemics; Prognosis
7.
Eur J Radiol; 112: 200-206, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30777211

ABSTRACT

Interstitial lung disease (ILD) is highly prevalent in collagen vascular diseases, and reduction of ILD is an important therapeutic target. To that end, reliable quantification of pulmonary disease severity is of great significance. This study systematically reviewed the literature on automated computed tomography (CT) quantification methods for assessing ILD in collagen vascular diseases. PRISMA-DTA guidelines for systematic reviews were followed, and 19 original research articles up to January 2018 were included based on a MEDLINE/PubMed and Embase search. Quantitative CT methods were categorized as histogram assessment (12 studies) or pattern/texture recognition (7 studies). R² for correlation with visual ILD scoring ranged from 0.143 (p < 0.01) to 0.687 (p < 0.0001), for forced vital capacity (FVC) from 0.048 (p < 0.0001) to 0.504 (p < 0.0001), and for diffusing capacity for carbon monoxide (DLCO) from 0.015 (p = 0.61) to 0.449 (p < 0.0001). Automated CT methods are independent of reader expertise and are a promising tool for quantifying ILD in patients with collagen vascular disease.
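To make the "histogram assessment" category concrete, here is a minimal, purely illustrative sketch of histogram-based quantification on a lung CT volume: summary statistics of the Hounsfield-unit distribution inside a lung mask. The array, the mask rule, and the attenuation band are placeholders and are not taken from any of the reviewed papers.

```python
# Minimal sketch of histogram-based CT quantification (one of the two method
# categories reviewed above). The volume, lung mask, and thresholds are
# placeholders; a real pipeline would read a DICOM series and segment the lungs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
hu = rng.normal(-800, 120, size=(64, 64, 64))   # stand-in CT volume in Hounsfield units
lung_mask = hu < -300                           # crude stand-in for a lung segmentation

voxels = hu[lung_mask]
metrics = {
    "mean lung attenuation (HU)": float(voxels.mean()),
    "histogram skewness": float(stats.skew(voxels)),
    "histogram kurtosis": float(stats.kurtosis(voxels)),
    # Fraction of lung voxels in a denser band, sometimes used as a fibrosis
    # surrogate; the band below is chosen for illustration only.
    "high attenuation fraction": float(np.mean((voxels > -600) & (voxels < -250))),
}
for name, value in metrics.items():
    print(f"{name}: {value:.3f}")
```

Pattern/texture-recognition methods, the other category reviewed, replace such whole-histogram summaries with per-voxel or per-patch classification of texture features.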


Subject(s)
Lung Diseases, Interstitial/diagnostic imaging; Vascular Diseases/diagnostic imaging; Collagen Diseases/diagnostic imaging; Humans; Lung/diagnostic imaging; Tomography, X-Ray Computed/methods