1.
medRxiv ; 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38562803

ABSTRACT

Rationale: Early detection of clinical deterioration using early warning scores may improve outcomes. However, most implemented scores were developed using logistic regression, underwent only retrospective internal validation, and were not tested in important patient subgroups. Objectives: To develop a gradient boosted machine model (eCARTv5) for identifying clinical deterioration and then validate it externally, test it prospectively, and evaluate it across patient subgroups. Methods: eCARTv5 was developed using all adult patients hospitalized on the wards of seven hospitals from 2008 to 2022, with demographics, vital signs, clinician documentation, and laboratory values used to predict intensive care unit transfer or death in the next 24 hours. The model was externally validated retrospectively in 21 hospitals from 2009 to 2023 and prospectively in 10 hospitals from February to May 2023. eCARTv5 was compared to the Modified Early Warning Score (MEWS) and the National Early Warning Score (NEWS) using the area under the receiver operating characteristic curve (AUROC). Measurements and Main Results: The development cohort included 901,491 admissions, the retrospective validation cohort included 1,769,461 admissions, and the prospective validation cohort included 46,330 admissions. In retrospective validation, eCART had the highest AUROC (0.835; 95% CI 0.834-0.835), followed by NEWS (0.766; 95% CI 0.766-0.767) and MEWS (0.704; 95% CI 0.703-0.704). eCART's performance remained high (AUROC ≥0.80) across a range of patient demographics and clinical conditions, and during prospective validation. Conclusions: We developed eCARTv5, which accurately identifies early clinical deterioration in hospitalized ward patients. Our model performed better than NEWS and MEWS retrospectively, prospectively, and across a range of subgroups.
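
As a rough illustration of the evaluation described above, the sketch below fits a generic gradient boosted classifier and compares its AUROC against conventional scores. It is not the eCARTv5 model; the training/test objects and the `news_score`/`mews_score` columns are hypothetical placeholders.

```python
# Minimal sketch of the evaluation idea only -- not the eCARTv5 model itself.
# X_train, y_train, X_test, y_test and the "news_score"/"mews_score" columns
# are hypothetical placeholders for ward-patient features and the outcome
# (ICU transfer or death within 24 hours).
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Fit a generic gradient boosted classifier on the development cohort.
gbm = GradientBoostingClassifier(n_estimators=500, learning_rate=0.05, max_depth=3)
gbm.fit(X_train, y_train)

# Compare discrimination on a held-out validation cohort; NEWS/MEWS totals
# already give a risk ordering, so they can be scored directly.
auroc_gbm = roc_auc_score(y_test, gbm.predict_proba(X_test)[:, 1])
auroc_news = roc_auc_score(y_test, X_test["news_score"])
auroc_mews = roc_auc_score(y_test, X_test["mews_score"])
print(f"GBM {auroc_gbm:.3f} | NEWS {auroc_news:.3f} | MEWS {auroc_mews:.3f}")
```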

2.
J Am Med Inform Assoc ; 31(6): 1322-1330, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38679906

ABSTRACT

OBJECTIVES: To compare and externally validate popular deep learning model architectures and data transformation methods for variable-length time series data in 3 clinical tasks (clinical deterioration, severe acute kidney injury [AKI], and suspected infection). MATERIALS AND METHODS: This multicenter retrospective study included admissions at 2 medical centers that spanned 2007-2022. Distinct datasets were created for each clinical task, with 1 site used for training and the other for testing. Three feature engineering methods (normalization, standardization, and piece-wise linear encoding with decision trees [PLE-DTs]) and 3 architectures (long short-term memory/gated recurrent unit [LSTM/GRU], temporal convolutional network, and time-distributed wrapper with convolutional neural network [TDW-CNN]) were compared in each clinical task. Model discrimination was evaluated using the area under the precision-recall curve (AUPRC) and the area under the receiver operating characteristic curve (AUROC). RESULTS: The study comprised 373 825 admissions for training and 256 128 admissions for testing. LSTM/GRU models tied with TDW-CNN models, each obtaining the highest mean AUPRC in 2 tasks, and LSTM/GRU had the highest mean AUROC across all tasks (deterioration: 0.81, AKI: 0.92, infection: 0.87). PLE-DT with LSTM/GRU achieved the highest AUPRC in all tasks. DISCUSSION: When externally validated in 3 clinical tasks, the LSTM/GRU model architecture with PLE-DT-transformed data demonstrated the highest AUPRC in all tasks. Multiple models achieved similar performance when evaluated using AUROC. CONCLUSION: The LSTM architecture performs as well as or better than some newer architectures, and PLE-DT may enhance the AUPRC in variable-length time series data for predicting clinical outcomes during external validation.
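
One plausible reading of the piece-wise linear encoding with decision trees (PLE-DT) step is sketched below: bin edges for a numeric feature are taken from a shallow decision tree fit against the task label, and each value is expanded into a vector of clipped per-bin fractions. This is an assumption about the method's shape, not the authors' exact implementation; the input arrays are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def ple_dt_encode(x_train, y_train, x, max_bins=16):
    """Piece-wise linear encoding of one numeric feature, with bin edges taken
    from a shallow decision tree fit against the task label. This is one
    plausible reading of PLE-DT, not the paper's exact implementation; all
    inputs are hypothetical 1-D arrays."""
    x_train = np.asarray(x_train, float)
    x = np.asarray(x, float)
    tree = DecisionTreeClassifier(max_leaf_nodes=max_bins)
    tree.fit(x_train.reshape(-1, 1), y_train)
    # Thresholds of internal nodes (leaf nodes are marked with feature == -2).
    edges = np.sort(np.unique(tree.tree_.threshold[tree.tree_.feature >= 0]))
    bins = np.concatenate(([x_train.min()], edges, [x_train.max()]))
    # Each value becomes a vector of per-bin fractions clipped to [0, 1]:
    # 1 for bins fully below the value, a fraction for the bin containing it.
    out = np.zeros((len(x), len(bins) - 1))
    for j in range(len(bins) - 1):
        lo, hi = bins[j], bins[j + 1]
        out[:, j] = np.clip((x - lo) / max(hi - lo, 1e-12), 0.0, 1.0)
    return out
```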


Subject(s)
Deep Learning , Humans , Retrospective Studies , Acute Kidney Injury , Neural Networks, Computer , ROC Curve , Male , Datasets as Topic , Female , Middle Aged
3.
Resuscitation ; 197: 110161, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38428721

ABSTRACT

AIM: Hospital rapid response systems aim to stop preventable cardiac arrests, but defining preventability is a challenge. We developed a multidisciplinary consensus-based process to determine in-hospital cardiac arrest (IHCA) preventability based on objective measures. METHODS: We developed an interdisciplinary ward IHCA debriefing program at an urban quaternary-care academic hospital. This group systematically reviewed all IHCAs weekly, reaching consensus determinations of each IHCA's cause and preventability across three mutually exclusive categories: 1) unpredictable (no evidence of physiologic instability < 1 h prior to and within 24 h of the arrest), 2) predictable but unpreventable (meeting physiologic instability criteria in the setting of either a poor baseline prognosis or a documented goals of care conversation), or 3) potentially preventable (remaining cases). RESULTS: Of 544 arrests between 09/2015 and 11/2023, 339 (61%) were deemed predictable by consensus, with 235 (42% of all IHCAs) considered potentially preventable. Potentially preventable arrests disproportionately occurred on nights and weekends (70% vs 55%, p = 0.002) and were more frequently respiratory than cardiac in etiology (33% vs 15%, p < 0.001). Despite similar rates of return of spontaneous circulation across groups (67-70%), survival to discharge was highest in arrests deemed unpredictable (31%), followed by potentially preventable (21%), and then predictable but unpreventable, which had the lowest survival rate (16%, p = 0.007). CONCLUSIONS: Our IHCA debriefing procedures are a feasible and sustainable means of determining the predictability and potential preventability of ward cardiac arrests. This approach may be useful for improving quality benchmarks and care processes around pre-arrest clinical activities.
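
The three mutually exclusive categories can be written as a simple decision rule, sketched below for clarity. The boolean inputs are hypothetical flags distilled from the criteria above; in the study the determination was made by multidisciplinary consensus, not by code.

```python
# Hedged sketch of the three mutually exclusive categories as a decision rule.
# The boolean inputs are hypothetical flags distilled from the criteria above;
# in the study the determination was made by multidisciplinary consensus.
def classify_ihca(physiologic_instability: bool,
                  poor_baseline_prognosis: bool,
                  goals_of_care_documented: bool) -> str:
    if not physiologic_instability:
        # No instability < 1 h prior to and within 24 h of the arrest.
        return "unpredictable"
    if poor_baseline_prognosis or goals_of_care_documented:
        return "predictable but unpreventable"
    return "potentially preventable"
```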


Subject(s)
Cardiopulmonary Resuscitation , Heart Arrest , Humans , Cardiopulmonary Resuscitation/methods , Consensus , Heart Arrest/prevention & control , Patient Discharge , Hospitals
4.
Crit Care Explor ; 6(3): e1066, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38505174

ABSTRACT

OBJECTIVES: Alcohol withdrawal syndrome (AWS) may progress to require high-intensity care. Approaches to identify hospitalized patients with AWS who received a higher level of care have not been previously examined. This study aimed to examine the utility of Clinical Institute Withdrawal Assessment for Alcohol, Revised (CIWA-Ar) scale scores and medication doses for alcohol withdrawal management in identifying patients who received high-intensity care. DESIGN: A multicenter observational cohort study of hospitalized adults with alcohol withdrawal. SETTING: University of Chicago Medical Center and University of Wisconsin Hospital. PATIENTS: Inpatient encounters between November 2008 and February 2022 with a CIWA-Ar score greater than 0 and a benzodiazepine or barbiturate administered within the first 24 hours. The primary composite outcome was progression to high-intensity care (intermediate care or ICU). INTERVENTIONS: None. MAIN RESULTS: Among the 8742 patients included in the study, 37.5% (n = 3280) progressed to high-intensity care. The odds ratio (OR) for the composite outcome increased above 1.0 at a CIWA-Ar score of 24. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) at this threshold were 0.12 (95% CI, 0.11-0.13), 0.95 (95% CI, 0.94-0.95), 0.58 (95% CI, 0.54-0.61), and 0.64 (95% CI, 0.63-0.65), respectively. The OR increased above 1.0 at a 24-hour lorazepam milligram equivalent dose cutoff of 15 mg. The sensitivity, specificity, PPV, and NPV at this threshold were 0.16 (95% CI, 0.14-0.17), 0.96 (95% CI, 0.95-0.96), 0.68 (95% CI, 0.65-0.72), and 0.65 (95% CI, 0.64-0.66), respectively. CONCLUSIONS: Neither CIWA-Ar scores nor medication dose cutoff points were effective measures for identifying patients with alcohol withdrawal who received high-intensity care. Research studies examining outcomes in patients who deteriorate with AWS will require better methods for cohort identification.
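
The threshold analysis above amounts to dichotomizing at a cutoff and computing the four standard 2x2 metrics. A minimal sketch follows, assuming hypothetical `scores` (CIWA-Ar values or 24-hour lorazepam-equivalent doses) and binary `outcomes` arrays.

```python
import numpy as np

def threshold_metrics(scores, outcomes, cutoff):
    """Sensitivity, specificity, PPV, and NPV when flagging patients at or
    above a cutoff (e.g., a CIWA-Ar score of 24 or a 24-hour lorazepam
    milligram-equivalent dose of 15 mg). Inputs are hypothetical arrays."""
    scores = np.asarray(scores, float)
    outcomes = np.asarray(outcomes).astype(bool)
    flagged = scores >= cutoff
    tp = np.sum(flagged & outcomes)
    fp = np.sum(flagged & ~outcomes)
    fn = np.sum(~flagged & outcomes)
    tn = np.sum(~flagged & ~outcomes)
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn)}
```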

5.
medRxiv ; 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38370788

ABSTRACT

OBJECTIVE: Timely intervention for clinically deteriorating ward patients requires that care teams accurately diagnose and treat their underlying medical conditions. However, the most common diagnoses leading to deterioration and the relevant therapies provided are poorly characterized. Therefore, we aimed to determine the diagnoses responsible for clinical deterioration, the relevant diagnostic tests ordered, and the treatments administered among high-risk ward patients using manual chart review. DESIGN: Multicenter retrospective observational study. SETTING: Inpatient medical-surgical wards at four health systems from 2006 to 2020. PATIENTS: Randomly selected patients (1,000 from each health system) with clinical deterioration, defined by reaching the 95th percentile of a validated early warning score, electronic Cardiac Arrest Risk Triage (eCART), were included. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: For each patient, clinical deterioration was confirmed by a trained reviewer or marked as a false alarm if no deterioration occurred. For true deterioration events, the condition causing deterioration, relevant diagnostic tests ordered, and treatments provided were collected. Of the 4,000 included patients, 2,484 (62%) had clinical deterioration confirmed by chart review. Sepsis was the most common cause of deterioration (41%; n=1,021), followed by arrhythmia (19%; n=473), while liver failure had the highest in-hospital mortality (41%). The most common diagnostic tests ordered were complete blood counts (47% of events), followed by chest x-rays (42%) and cultures (40%), while the most common medication orders were antimicrobials (46%), followed by fluid boluses (34%) and antiarrhythmics (19%). CONCLUSIONS: We found that sepsis was the most common cause of deterioration, while liver failure had the highest mortality. Complete blood counts and chest x-rays were the most common diagnostic tests ordered, and antimicrobials and fluid boluses were the most common medication interventions. These results provide important insights for clinical decision-making at the bedside, training of rapid response teams, and the development of institutional treatment pathways for clinical deterioration. KEY POINTS: Question: What are the most common diagnoses, diagnostic test orders, and treatments for ward patients experiencing clinical deterioration? Findings: In a manual chart review of 2,484 encounters with deterioration across four health systems, we found that sepsis was the most common cause of clinical deterioration, followed by arrhythmias, while liver failure had the highest mortality. Complete blood counts and chest x-rays were the most common diagnostic test orders, while antimicrobials and fluid boluses were the most common treatments. Meaning: Our results provide new insights into clinical deterioration events, which can inform institutional treatment pathways, rapid response team training, and patient care.
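
The cohort-selection rule (reaching the 95th percentile of the early warning score) reduces to a percentile threshold, sketched below with hypothetical score arrays; it illustrates the inclusion criterion only, not the eCART model.

```python
import numpy as np

# Hedged sketch of the inclusion rule only (not the eCART model): an encounter
# is selected for chart review once its maximum score reaches the 95th
# percentile of scores in the population. Both arrays are hypothetical.
threshold_95 = np.percentile(all_ward_scores, 95)
selected = encounter_max_scores >= threshold_95
print(f"{selected.sum()} of {len(selected)} encounters flagged for review")
```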

8.
Am J Respir Crit Care Med ; 207(10): 1300-1309, 2023 05 15.
Article in English | MEDLINE | ID: mdl-36449534

ABSTRACT

Rationale: Despite etiologic and severity heterogeneity in neutropenic sepsis, management is often uniform. Understanding host response clinical subphenotypes might inform treatment strategies for neutropenic sepsis. Objectives: In this retrospective two-hospital study, we analyzed whether temperature trajectory modeling could identify distinct, clinically relevant subphenotypes among oncology patients with neutropenia and suspected infection. Methods: Among adult oncologic admissions with neutropenia and blood cultures within 24 hours, a previously validated model classified patients' initial 72-hour temperature trajectories into one of four subphenotypes. We analyzed subphenotypes' independent relationships with hospital mortality and bloodstream infection using multivariable models. Measurements and Main Results: Patients (primary cohort n = 1,145, validation cohort n = 6,564) fit into one of four temperature subphenotypes. "Hyperthermic slow resolvers" (pooled n = 1,140 [14.8%], mortality n = 104 [9.1%]) and "hypothermic" encounters (n = 1,612 [20.9%], mortality n = 138 [8.6%]) had higher mortality than "hyperthermic fast resolvers" (n = 1,314 [17.0%], mortality n = 47 [3.6%]) and "normothermic" (n = 3,643 [47.3%], mortality n = 196 [5.4%]) encounters (P < 0.001). Bloodstream infections were more common among hyperthermic slow resolvers (n = 248 [21.8%]) and hyperthermic fast resolvers (n = 240 [18.3%]) than among hypothermic (n = 188 [11.7%]) or normothermic (n = 418 [11.5%]) encounters (P < 0.001). Adjusted for confounders, hyperthermic slow resolvers had increased adjusted odds for mortality (primary cohort odds ratio, 1.91 [P = 0.03]; validation cohort odds ratio, 2.19 [P < 0.001]) and bloodstream infection (primary odds ratio, 1.54 [P = 0.04]; validation cohort odds ratio, 2.15 [P < 0.001]). Conclusions: Temperature trajectory subphenotypes were independently associated with important outcomes among hospitalized patients with neutropenia in two independent cohorts.
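
The adjusted analyses above pair a subphenotype indicator with confounders in a multivariable logistic regression and exponentiate the coefficient to obtain an adjusted odds ratio. A minimal sketch follows, assuming a hypothetical dataframe `df` with a mortality flag, a subphenotype dummy, and illustrative confounders.

```python
import numpy as np
import statsmodels.api as sm

# Hedged sketch of the adjusted analysis: exponentiating a logistic regression
# coefficient for a subphenotype indicator yields its adjusted odds ratio.
# The dataframe `df`, its column names, and the confounder set are hypothetical.
X = sm.add_constant(df[["hyperthermic_slow_resolver", "age", "severity_score"]])
fit = sm.Logit(df["hospital_mortality"], X).fit(disp=0)

odds_ratios = np.exp(fit.params)     # adjusted ORs
or_ci = np.exp(fit.conf_int())       # 95% CIs on the OR scale
print(odds_ratios["hyperthermic_slow_resolver"])
print(or_ci.loc["hyperthermic_slow_resolver"])
```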


Subject(s)
Neoplasms , Neutropenia , Sepsis , Adult , Humans , Retrospective Studies , Temperature , Neutropenia/complications , Sepsis/complications , Fever , Neoplasms/complications , Neoplasms/therapy
9.
J Am Med Inform Assoc ; 29(10): 1696-1704, 2022 09 12.
Article in English | MEDLINE | ID: mdl-35869954

ABSTRACT

OBJECTIVES: Early identification of infection improves outcomes, but developing models for early identification requires determining infection status with manual chart review, limiting sample size. Therefore, we aimed to compare semi-supervised and transfer learning algorithms with algorithms based solely on manual chart review for identifying infection in hospitalized patients. MATERIALS AND METHODS: This multicenter retrospective study of admissions to 6 hospitals included "gold-standard" labels of infection from manual chart review and "silver-standard" labels from non-chart-reviewed patients using the Sepsis-3 infection criteria based on antibiotic and culture orders. "Gold-standard" labeled admissions were randomly allocated to training (70%) and testing (30%) datasets. Using patient characteristics, vital signs, and laboratory data from the first 24 hours of admission, we derived deep learning and non-deep learning models using transfer learning and semi-supervised methods. Performance was compared in the gold-standard test set using discrimination and calibration metrics. RESULTS: The study comprised 432 965 admissions, of which 2724 underwent chart review. In the test set, deep learning and non-deep learning approaches had similar discrimination (area under the receiver operating characteristic curve of 0.82). Semi-supervised and transfer learning approaches did not improve discrimination over models fit using only silver- or gold-standard data. Transfer learning had the best calibration (unreliability index P value: .997, Brier score: 0.173), followed by the self-learning gradient boosted machine (P value: .67, Brier score: 0.170). DISCUSSION: Deep learning and non-deep learning models performed similarly for identifying infection, as did models developed using Sepsis-3 and manual chart review labels. CONCLUSION: In a multicenter study of almost 3000 chart-reviewed patients, semi-supervised and transfer learning models showed similar performance for model discrimination as baseline XGBoost, while transfer learning improved calibration.
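
A minimal sketch of one self-training step consistent with the description above: a model fit on gold-standard (chart-reviewed) labels pseudo-labels the non-chart-reviewed pool, confident pseudo-labels are added back, and the refit model is scored for discrimination and calibration. The arrays and the 0.9/0.1 confidence cutoffs are hypothetical, and a generic gradient boosted classifier stands in for the study's models; this is not the paper's exact pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, brier_score_loss

# Hedged sketch of one self-training step (not the paper's exact pipeline).
# X_gold/y_gold are chart-reviewed labels, X_pool is the non-chart-reviewed
# cohort, X_test/y_test the held-out gold-standard set; all are hypothetical.
base = GradientBoostingClassifier().fit(X_gold, y_gold)

pool_prob = base.predict_proba(X_pool)[:, 1]
confident = (pool_prob >= 0.9) | (pool_prob <= 0.1)   # keep confident pseudo-labels only
X_aug = np.vstack([X_gold, X_pool[confident]])
y_aug = np.concatenate([y_gold, (pool_prob[confident] >= 0.5).astype(int)])

self_trained = GradientBoostingClassifier().fit(X_aug, y_aug)

prob = self_trained.predict_proba(X_test)[:, 1]
print("AUROC:", roc_auc_score(y_test, prob), "Brier:", brier_score_loss(y_test, prob))
```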


Subject(s)
Machine Learning , Sepsis , Humans , ROC Curve , Retrospective Studies , Sepsis/diagnosis
10.
Crit Care Med ; 50(9): 1339-1347, 2022 09 01.
Article in English | MEDLINE | ID: mdl-35452010

ABSTRACT

OBJECTIVES: To determine the impact of a machine learning early warning risk score, electronic Cardiac Arrest Risk Triage (eCART), on mortality for elevated-risk adult inpatients. DESIGN: A pragmatic pre- and post-intervention study conducted over the same 10-month period in 2 consecutive years. SETTING: Four-hospital community-academic health system. PATIENTS: All adult patients admitted to a medical-surgical ward. INTERVENTIONS: During the baseline period, clinicians were blinded to eCART scores. During the intervention period, scores were presented to providers. Scores greater than or equal to the 95th percentile were designated high risk, prompting a physician assessment for ICU admission. Scores between the 89th and 95th percentiles were designated intermediate risk, triggering a nurse-directed workflow that included measuring vital signs every 2 hours and contacting a physician to review the treatment plan. MEASUREMENTS AND MAIN RESULTS: The primary outcome was all-cause in-hospital mortality. Secondary measures included vital sign assessment within 2 hours, ICU transfer rate, and time to ICU transfer. A total of 60,261 patients were admitted during the study period, of which 6,681 (11.1%) met inclusion criteria (baseline period n = 3,191, intervention period n = 3,490). The intervention period was associated with a significant decrease in hospital mortality for the main cohort (8.8% vs 13.9%; p < 0.0001; adjusted odds ratio [OR], 0.60 [95% CI, 0.52-0.71]). A significant decrease in mortality was also seen for the average-risk cohort not subject to the intervention (0.49% vs 0.26%; p < 0.05; adjusted OR, 0.53 [95% CI, 0.41-0.74]). In subgroup analysis, the benefit was seen in both high-risk (17.9% vs 23.9%; p = 0.001) and intermediate-risk (2.0% vs 4.0%; p = 0.005) patients. The intervention period was also associated with a significant increase in ICU transfers, a decrease in time to ICU transfer, and an increase in vital sign reassessment within 2 hours. CONCLUSIONS: Implementation of a machine learning early warning score-driven protocol was associated with reduced in-hospital mortality, likely driven by earlier and more frequent ICU transfer.
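
The tiered workflow reduces to two percentile cutoffs on the risk score. The sketch below mirrors those cutoffs (≥95th percentile high risk, 89th-95th intermediate) with hypothetical score data; the mapped actions paraphrase the protocol described above.

```python
import numpy as np

# Hedged sketch of the tiered thresholds described above; `reference_scores`
# is a hypothetical distribution of risk scores used to set the percentiles.
p89, p95 = np.percentile(reference_scores, [89, 95])

def triage_tier(score: float) -> str:
    if score >= p95:
        return "high risk: physician assessment for ICU admission"
    if score >= p89:
        return "intermediate risk: vitals every 2 h and physician review of the plan"
    return "routine monitoring"
```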


Subject(s)
Early Warning Score , Heart Arrest , Adult , Heart Arrest/diagnosis , Heart Arrest/therapy , Hospital Mortality , Humans , Intensive Care Units , Machine Learning , Vital Signs
11.
BMC Pregnancy Childbirth ; 22(1): 295, 2022 Apr 06.
Article in English | MEDLINE | ID: mdl-35387624

ABSTRACT

BACKGROUND: Early warning scores are designed to identify hospitalized patients who are at high risk of clinical deterioration. Although many general scores have been developed for the medical-surgical wards, specific scores have also been developed for obstetric patients due to differences in normal vital sign ranges and potential complications in this unique population. The comparative performance of general and obstetric early warning scores for predicting deterioration and infection on the maternal wards is not known. METHODS: This was an observational cohort study at the University of Chicago that included patients hospitalized on obstetric wards from November 2008 to December 2018. Obstetric scores (modified early obstetric warning system (MEOWS), maternal early warning criteria (MEWC), and maternal early warning trigger (MEWT)), paper-based general scores (Modified Early Warning Score (MEWS) and National Early Warning Score (NEWS)), and a general score developed using machine learning (electronic Cardiac Arrest Risk Triage (eCART) score) were compared using the area under the receiver operating characteristic curve (AUC) for predicting ward to intensive care unit (ICU) transfer and/or death and new infection. RESULTS: A total of 19,611 patients were included, with 43 (0.2%) experiencing deterioration (ICU transfer and/or death) and 88 (0.4%) experiencing an infection. eCART had the highest discrimination for deterioration (p < 0.05 for all comparisons), with an AUC of 0.86, followed by MEOWS (0.74), NEWS (0.72), MEWC (0.71), MEWS (0.70), and MEWT (0.65). MEWC, MEWT, and MEOWS had higher accuracy than MEWS and NEWS but lower accuracy than eCART at specific cut-off thresholds. For predicting infection, eCART (AUC 0.77) had the highest discrimination. CONCLUSIONS: Within the limitations of our retrospective study, eCART had the highest accuracy for predicting deterioration and infection in our ante- and postpartum patient population. Maternal early warning scores were more accurate than MEWS and NEWS. While the institutional choice of an early warning system is complex, our results have important implications for the risk stratification of maternal ward patients, especially since the low prevalence of events means that small improvements in accuracy can lead to large decreases in false alarms.
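
The closing point about prevalence and false alarms can be made with simple arithmetic: at a fixed sensitivity, the number of alerts issued per true event caught is driven largely by specificity when events are rare. The sensitivity and specificity values below are illustrative only, not figures from the study.

```python
def alerts_per_true_positive(sensitivity, specificity, prevalence, n=10_000):
    """Alerts issued per true event caught, for illustrative operating points;
    the sensitivity/specificity values used below are not from the study."""
    tp = sensitivity * prevalence * n
    fp = (1 - specificity) * (1 - prevalence) * n
    return (tp + fp) / tp

# At 0.2% prevalence, a modest specificity gain roughly halves alarm burden.
print(alerts_per_true_positive(0.80, 0.90, 0.002))   # ~63 alerts per catch
print(alerts_per_true_positive(0.80, 0.95, 0.002))   # ~32 alerts per catch
```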


Subject(s)
Clinical Deterioration , Early Warning Score , Heart Arrest , Female , Heart Arrest/diagnosis , Humans , Intensive Care Units , Pregnancy , ROC Curve , Retrospective Studies , Risk Assessment/methods
12.
Circ Cardiovasc Qual Outcomes ; 15(4): e008900, 2022 04.
Article in English | MEDLINE | ID: mdl-35072519
16.
Resuscitation ; 164: 40-45, 2021 07.
Article in English | MEDLINE | ID: mdl-34004263

ABSTRACT

INTRODUCTION: Maternal mortality has risen in the United States during the 21st century. Factors influencing the outcome of maternal cardiac arrest (MCA) remain largely unexplored. OBJECTIVE: We sought to further elucidate the factors affecting maternal death from in-hospital (IH) MCA. METHODS: Our query of the American Heart Association's GWTG®-Resuscitation voluntary registry from 2000-2017 revealed 561 index cases of IH MCA with complete outcome data. Logistic regression was performed using hospital death as the primary outcome and included variables with a p value of 0.1 or less on univariate analysis. Age, race, year of arrest, pre-existing conditions, first documented pulseless rhythm, and location of arrest were used in the model. Sensitivity analyses and assessment of variable interaction were also performed to test model stability. Institutional review deemed this research exempt from ethical approval. RESULTS: Among 561 cases of MCA, 57.2% (321/561) did not survive to hospital discharge. IH death was not associated with maternal age, race, or year of event. In the final model, IH death was significantly associated with pre-arrest hypotension/hypoperfusion (OR = 1.80 (95% CI, 1.16-2.79); p = 0.009). The occurrence of MCA outside of the delivery suite (referent group) or operating room was associated with a significantly higher risk of death: ICU/Post-Anesthesia Care Unit (PACU) (OR = 3.32 (95% CI, 2.00-5.52); p < 0.001) and ER/other (OR = 1.89 (95% CI, 1.15-3.11); p = 0.012). While MCA cases with a shockable vs. non-shockable first documented pulseless rhythm had similar outcomes, those with an indeterminate rhythm were less likely to die (OR = 0.41 (95% CI, 0.20-0.84); p = 0.014). In a sensitivity analysis, removal of the indeterminate group did not alter outcomes regarding first documented pulseless rhythm or arrest location. The area under the curve for the final model was 0.715 (95% CI, 0.673-0.757). CONCLUSIONS: Our study identified several novel factors associated with IH death in our MCA cohort. More research is required to further understand the pathophysiologic dynamics affecting outcomes of IH MCA in this unique population.
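
A minimal sketch of the modeling approach (multivariable logistic regression reported as odds ratios, with a model AUC), assuming a hypothetical dataframe `df`; the column names, reference category, and covariate set are illustrative, not the registry analysis itself.

```python
import numpy as np
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

# Hedged sketch of the modeling approach only; the dataframe `df`, its column
# names, the reference category, and the covariate set are hypothetical.
fit = smf.logit(
    "hospital_death ~ prearrest_hypotension"
    " + C(arrest_location, Treatment(reference='delivery_suite'))"
    " + C(first_documented_rhythm)",
    data=df,
).fit(disp=0)

odds_ratios = np.exp(fit.params)     # adjusted ORs
or_ci = np.exp(fit.conf_int())       # 95% CIs on the OR scale
auc = roc_auc_score(df["hospital_death"], fit.predict(df))
print(odds_ratios, or_ci, f"model AUC = {auc:.3f}", sep="\n")
```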


Subject(s)
Cardiopulmonary Resuscitation , Heart Arrest , Out-of-Hospital Cardiac Arrest , Electric Countershock , Heart Arrest/therapy , Hospitals , Humans , Registries , United States/epidemiology
17.
Crit Care Med ; 49(7): e673-e682, 2021 07 01.
Article in English | MEDLINE | ID: mdl-33861547

ABSTRACT

OBJECTIVES: Recent sepsis studies have defined patients as "infected" using a combination of culture and antibiotic orders rather than billing data. However, the accuracy of these definitions is unclear. We aimed to compare the accuracy of different established criteria for identifying infected patients using detailed chart review. DESIGN: Retrospective observational study. SETTING: Six hospitals from three health systems in Illinois. PATIENTS: Adult admissions with blood culture or antibiotic orders, or with Angus International Classification of Diseases infection codes and death, were eligible for study inclusion as potentially infected patients. Nine hundred to 1,000 of these admissions were randomly selected from each health system for chart review, and a proportional number of patients who did not meet chart review eligibility criteria were also included and deemed not infected. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: The accuracy of published billing code criteria by Angus et al and electronic health record criteria by Rhee et al and Seymour et al (Sepsis-3) was determined using the manual chart review results as the gold standard. A total of 5,215 patients were included, with 2,874 encounters analyzed via chart review and a proportional 2,341 added who did not meet chart review eligibility criteria. In the study cohort, 27.5% of admissions had at least one infection. This was most similar to the percentage of admissions with blood culture orders (26.8%), Angus infection criteria (28.7%), and the Sepsis-3 criteria (30.4%). The Sepsis-3 criteria were the most sensitive (81%), followed by Angus (77%) and Rhee (52%), while Rhee (97%) and Angus (90%) were more specific than the Sepsis-3 criteria (89%). Results were similar for patients with organ dysfunction during their admission. CONCLUSIONS: Published criteria have a wide range of accuracy for identifying infected patients, with the Sepsis-3 criteria being the most sensitive and the Rhee criteria being the most specific. These findings have important implications for studies investigating the burden of sepsis on a local and national level.
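
Benchmarking each infection definition against chart review reduces to computing sensitivity and specificity against a gold-standard label. A minimal sketch follows; the `cohort` dataframe and its column names are hypothetical placeholders for the Angus, Rhee, and Sepsis-3 flags.

```python
import numpy as np

def criterion_accuracy(flagged, gold):
    """Sensitivity and specificity of one infection definition against
    chart-review gold-standard labels (both hypothetical boolean arrays)."""
    flagged = np.asarray(flagged, bool)
    gold = np.asarray(gold, bool)
    sens = np.sum(flagged & gold) / np.sum(gold)
    spec = np.sum(~flagged & ~gold) / np.sum(~gold)
    return sens, spec

# `cohort` and its column names are hypothetical placeholders for the flags.
for name in ["angus_icd", "rhee_ehr", "sepsis3_culture_abx"]:
    sens, spec = criterion_accuracy(cohort[name], cohort["chart_review_infected"])
    print(f"{name}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```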


Subject(s)
Data Accuracy , Electronic Health Records/standards , Infections/epidemiology , Information Storage and Retrieval/methods , Adult , Aged , Anti-Bacterial Agents/therapeutic use , Antibiotic Prophylaxis/statistics & numerical data , Blood Culture , Chicago/epidemiology , False Positive Reactions , Female , Humans , Infections/diagnosis , International Classification of Diseases , Male , Middle Aged , Organ Dysfunction Scores , Patient Admission/statistics & numerical data , Prevalence , Retrospective Studies , Sensitivity and Specificity , Sepsis/diagnosis
18.
J Am Coll Emerg Physicians Open ; 1(4): 321-326, 2020 Aug.
Article in English | MEDLINE | ID: mdl-33000054

ABSTRACT

In-hospital cardiac arrest remains a leading cause of death: roughly 300,000 in-hospital cardiac arrests occur each year in the United States, ≈10% of which occur in the emergency department. ED-based cardiac arrest may represent a subset of in-hospital cardiac arrest with a higher proportion of reversible etiologies and a higher potential for neurologically intact survival. Patients presenting to the ED have become increasingly complex, have a high burden of critical illness, and face crowded departments with thinly stretched resources. As a result, patients in the ED are vulnerable to unrecognized clinical deterioration that may lead to ED-based cardiac arrest. Efforts to identify patients who may progress to ED-based cardiac arrest have traditionally been approached through identification of critically ill patients at triage and the identification of patients who unexpectedly deteriorate during their stay in the ED. Interventions to facilitate appropriate triage and resource allocation, as well as earlier identification of patients at risk of deterioration in the ED, could potentially allow for both prevention of cardiac arrest and optimization of outcomes from ED-based cardiac arrest. This review will discuss the epidemiology of ED-based cardiac arrest, as well as commonly used approaches to predict ED-based cardiac arrest and highlight areas that require further research to improve outcomes for this population.

19.
JAMA Netw Open ; 3(8): e2012892, 2020 08 03.
Article in English | MEDLINE | ID: mdl-32780123

ABSTRACT

Importance: Acute kidney injury (AKI) is associated with increased morbidity and mortality in hospitalized patients. Current methods to identify patients at high risk of AKI are limited, and few prediction models have been externally validated. Objective: To internally and externally validate a machine learning risk score to detect AKI in hospitalized patients. Design, Setting, and Participants: This diagnostic study included 495 971 adult hospital admissions at the University of Chicago (UC) from 2008 to 2016 (n = 48 463), at Loyola University Medical Center (LUMC) from 2007 to 2017 (n = 200 613), and at NorthShore University Health System (NUS) from 2006 to 2016 (n = 246 895) with serum creatinine (SCr) measurements. Patients with an SCr concentration at admission greater than 3.0 mg/dL, with a prior diagnostic code for chronic kidney disease stage 4 or higher, or who received kidney replacement therapy within 48 hours of admission were excluded. A simplified version of a previously published gradient boosted machine AKI prediction algorithm was used; it was validated internally among patients at UC and externally among patients at NUS and LUMC. Main Outcomes and Measures: Prediction of Kidney Disease Improving Global Outcomes SCr-defined stage 2 AKI within a 48-hour interval was the primary outcome. Discrimination was assessed by the area under the receiver operating characteristic curve (AUC). Results: The study included 495 971 adult admissions (mean [SD] age, 63 [18] years; 87 689 [17.7%] African American; and 266 866 [53.8%] women) across 3 health systems. The development of stage 2 or higher AKI occurred in 15 664 of 48 463 patients (3.4%) in the UC cohort, 5711 of 200 613 (2.8%) in the LUMC cohort, and 3499 of 246 895 (1.4%) in the NUS cohort. In the UC cohort, 332 patients (0.7%) required kidney replacement therapy compared with 672 patients (0.3%) in the LUMC cohort and 440 patients (0.2%) in the NUS cohort. The AUCs for predicting at least stage 2 AKI in the next 48 hours were 0.86 (95% CI, 0.86-0.86) in the UC cohort, 0.85 (95% CI, 0.84-0.85) in the LUMC cohort, and 0.86 (95% CI, 0.86-0.86) in the NUS cohort. The AUCs for receipt of kidney replacement therapy within 48 hours were 0.96 (95% CI, 0.96-0.96) in the UC cohort, 0.95 (95% CI, 0.94-0.95) in the LUMC cohort, and 0.95 (95% CI, 0.94-0.95) in the NUS cohort. In time-to-event analysis, a probability cutoff of at least 0.057 predicted the onset of stage 2 AKI a median (IQR) of 27 (6.5-93) hours before the eventual doubling in SCr concentrations in the UC cohort, 34.5 (19-85) hours in the NUS cohort, and 39 (19-108) hours in the LUMC cohort. Conclusions and Relevance: In this study, the machine learning algorithm demonstrated excellent discrimination in both internal and external validation, supporting its generalizability and potential as a clinical decision support tool to improve AKI detection and outcomes.
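
The time-to-event lead-time analysis can be sketched as the gap between the first probability-threshold crossing and the eventual event. The helper below assumes hypothetical per-encounter arrays of prediction times and probabilities; only the 0.057 cutoff is taken from the abstract.

```python
import numpy as np

def lead_time_hours(pred_times, pred_probs, event_time, cutoff=0.057):
    """Hours between the first probability-threshold crossing and the eventual
    event (e.g., SCr doubling). Inputs are hypothetical per-encounter arrays;
    only the 0.057 cutoff comes from the abstract."""
    pred_times = np.asarray(pred_times, float)
    pred_probs = np.asarray(pred_probs, float)
    crossed = pred_times[pred_probs >= cutoff]
    if crossed.size == 0:
        return None                      # never alerted before the event
    return event_time - crossed.min()

# Median (IQR) lead time over encounters that alerted; `encounters` is a
# hypothetical iterable of (pred_times, pred_probs, event_time) tuples.
lead_times = [lt for lt in (lead_time_hours(t, p, e) for t, p, e in encounters)
              if lt is not None]
print(np.percentile(lead_times, [25, 50, 75]))
```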


Subject(s)
Acute Kidney Injury/diagnosis , Acute Kidney Injury/epidemiology , Machine Learning , Risk Assessment/methods , Adult , Aged , Aged, 80 and over , Female , Humans , Male , Middle Aged , Models, Statistical , ROC Curve , Retrospective Studies , Risk Factors