Results 1 - 2 of 2
1.
J Gen Intern Med ; 37(15): 3877-3884, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35028862

ABSTRACT

BACKGROUND: The US Veterans Affairs (VA) healthcare system began reporting risk-adjusted mortality for intensive care (ICU) admissions in 2005. However, while the VA's mortality model has been updated and adapted for risk adjustment of all inpatient hospitalizations, recent model performance has not been published. We sought to assess the current performance of the VA's 4 standardized mortality models: acute care 30-day mortality (acute care SMR-30); ICU 30-day mortality (ICU SMR-30); acute care in-hospital mortality (acute care SMR); and ICU in-hospital mortality (ICU SMR).

METHODS: Retrospective cohort study with split derivation and validation samples. Standardized mortality models were fit using derivation data, with coefficients applied to the validation sample. Nationwide VA hospitalizations that met model inclusion criteria during fiscal years 2017-2018 (derivation) and 2019 (validation) were included. Model performance was evaluated using c-statistics to assess discrimination and comparison of observed versus predicted deaths to assess calibration.

RESULTS: Among 1,143,351 hospitalizations eligible for the acute care SMR-30 during 2017-2019, in-hospital mortality was 1.8%, and 30-day mortality was 4.3%. C-statistics for the SMR models in validation data were 0.870 (acute care SMR-30); 0.864 (ICU SMR-30); 0.914 (acute care SMR); and 0.887 (ICU SMR). There were 16,036 deaths (4.29% mortality) in the SMR-30 validation cohort versus 17,458 predicted deaths (4.67%), reflecting 0.38% over-prediction. Across deciles of predicted risk, the absolute difference in observed versus predicted percent mortality was a mean of 0.38%, with a maximum error of 1.81% seen in the highest-risk decile.

CONCLUSIONS AND RELEVANCE: The VA's SMR models, which incorporate patient physiology on presentation, are highly predictive and demonstrate good calibration both overall and across risk deciles. The current SMR models perform similarly to the initial ICU SMR model, indicating appropriate adaptation and re-calibration.


Subject(s)
Intensive Care Units , Veterans , Humans , Retrospective Studies , Hospital Mortality , Delivery of Health Care
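The first abstract assesses its models with a c-statistic for discrimination and observed-versus-predicted mortality across deciles of predicted risk for calibration. The following is a minimal illustrative sketch of those two metrics on generic data, not the VA's actual SMR code; all function names are hypothetical:

```python
def c_statistic(y_true, y_score):
    """C-statistic (equivalently, AUC): the probability that a randomly
    chosen death received a higher predicted risk than a random survivor.
    Ties count as half-concordant. O(n^2) pairwise version for clarity."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    concordant = ties = 0
    for p in pos:
        for n in neg:
            if p > n:
                concordant += 1
            elif p == n:
                ties += 1
    return (concordant + 0.5 * ties) / (len(pos) * len(neg))


def calibration_by_decile(y_true, y_score, bins=10):
    """Mean predicted vs. observed mortality within each decile of
    predicted risk; well-calibrated models keep the two close in every bin."""
    paired = sorted(zip(y_score, y_true))
    n = len(paired)
    out = []
    for d in range(bins):
        chunk = paired[d * n // bins:(d + 1) * n // bins]
        pred = sum(s for s, _ in chunk) / len(chunk)
        obs = sum(y for _, y in chunk) / len(chunk)
        out.append((pred, obs))
    return out
```

The abstract's reported figures (e.g. c = 0.870 for the acute care SMR-30, mean decile error 0.38%) are exactly what these two summaries would produce when run over the validation cohort's outcomes and model-predicted risks.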
2.
Med Care ; 50(6): 520-6, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22584887

ABSTRACT

INTRODUCTION: Reliance on administrative data sources and a cohort with restricted age range (Medicare, 65 y and above) may limit conclusions drawn from public reporting of 30-day mortality rates in 3 diagnoses [acute myocardial infarction (AMI), congestive heart failure (CHF), pneumonia (PNA)] from the Centers for Medicare and Medicaid Services.

METHODS: We categorized patients with diagnostic codes for AMI, CHF, and PNA admitted to 138 Veterans Administration hospitals (2006-2009) into 2 groups (less than 65 y or ALL), then applied 3 different models that predicted 30-day mortality [Centers for Medicare and Medicaid Services administrative (ADM), ADM + laboratory data (PLUS), and clinical (CLIN)] to each age/diagnosis group. C statistic (CSTAT) and Hosmer-Lemeshow goodness of fit measured discrimination and calibration. Pearson correlation coefficient (r) compared the relationship between the hospitals' risk-standardized mortality rates (RSMRs) calculated with different models. Hospitals were rated as significantly different (SD) when confidence intervals (bootstrapping) omitted the national RSMR.

RESULTS: The ≥ 65-year models included 57%-67% of all patients (78%-82% of deaths). The PLUS models improved discrimination and calibration across diagnoses and age groups (CSTAT-CHF/65 y and above: 0.67 vs. 0.773 vs. 0.761; ADM/PLUS/CLIN; Hosmer-Lemeshow goodness of fit significant 4/6 ADM vs. 2/6 PLUS). Correlation of RSMR was good between ADM and PLUS (r-AMI 0.859; CHF 0.821; PNA 0.750), and between 65 years and above and ALL (r > 0.90). SD ratings changed in 1%-12% of hospitals (greatest change in PNA).

CONCLUSIONS: Performance measurement systems should include laboratory data, which improve model performance. Changes in SD ratings suggest caution in using a single metric to label hospital performance.


Subject(s)
Centers for Medicare and Medicaid Services, U.S./statistics & numerical data , Data Collection/methods , Heart Failure/mortality , Myocardial Infarction/mortality , Pneumonia/mortality , Age Factors , Aged , Clinical Laboratory Techniques , Comorbidity , Hospitals, Veterans , Humans , Models, Statistical , Risk Adjustment , United States/epidemiology
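The second abstract compares hospital risk-standardized mortality rates (RSMRs) across models with a Pearson correlation, and flags a hospital as significantly different when a bootstrapped confidence interval for its RSMR excludes the national rate. A minimal sketch of that pipeline under common assumptions (indirect standardization, percentile bootstrap); the function names are hypothetical and this is not the study's actual code:

```python
import math
import random


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


def rsmr(observed_deaths, predicted_risks, national_rate):
    """RSMR via indirect standardization: (observed / expected) x national rate,
    where expected deaths are the sum of model-predicted patient risks."""
    expected = sum(predicted_risks)
    return observed_deaths / expected * national_rate


def bootstrap_ci(deaths, risks, national_rate, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for one hospital's RSMR, resampling its patients.
    The hospital is rated significantly different when the interval excludes
    the national rate."""
    rng = random.Random(seed)
    n = len(deaths)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        obs = sum(deaths[i] for i in idx)
        exp = sum(risks[i] for i in idx)
        stats.append(obs / exp * national_rate)
    stats.sort()
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]
```

Running `pearson_r` on the per-hospital RSMR vectors from two models (e.g. ADM vs. PLUS) yields the r values the abstract reports, and re-running `bootstrap_ci` under each model shows how SD ratings can flip, which is the abstract's caution about single-metric labeling.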