ABSTRACT
BACKGROUND: Weaning from mechanical ventilation (MV) is a key issue in critically ill patients, and we used an explainable machine learning (ML) approach to establish an extubation prediction model. METHODS: We enrolled patients admitted to intensive care units during 2015-2019 at Taichung Veterans General Hospital, a referral hospital in central Taiwan. We used five ML models, namely extreme gradient boosting (XGBoost), categorical boosting (CatBoost), light gradient boosting machine (LightGBM), random forest (RF) and logistic regression (LR), to establish the extubation prediction model, with a feature window of 48 h and a prediction window of 24 h. We further employed feature importance, Shapley additive explanations (SHAP) plots, partial dependence plots (PDP) and local interpretable model-agnostic explanations (LIME) to interpret the model at the domain, feature and individual levels. RESULTS: We enrolled 5,940 patients and found that accuracy was comparable among XGBoost, LightGBM, CatBoost and RF; the area under the receiver operating characteristic curve for XGBoost in predicting extubation was 0.921. Calibration and decision curve analyses demonstrated good applicability of the models. We also used the SHAP summary plot and PDP to demonstrate the discriminative points of six key features in predicting extubation. Moreover, we employed LIME and SHAP force plots to show the predicted probability of extubation and the rationale for the prediction at the individual level. CONCLUSIONS: We developed an extubation prediction model with high accuracy and visualised explanations aligned with the clinical workflow; the model may serve as an autonomous screening tool for timely weaning.
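The modelling setup described above (features drawn from a 48 h window to predict extubation within the next 24 h, with LR among the candidate classifiers) can be sketched in miniature. The snippet below is a minimal, hypothetical illustration of fitting a logistic regression by gradient descent in plain Python; the feature names and toy data are invented for illustration and are not taken from the study.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit logistic regression weights and bias by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the linear score
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Hypothetical features summarised over a 48-h window, e.g. scaled
# rapid-shallow-breathing index, minute ventilation, and GCS/15.
X = [[0.4, 0.6, 1.0], [1.2, 0.9, 0.5], [0.3, 0.5, 0.9], [1.5, 1.1, 0.4]]
y = [1, 0, 1, 0]  # 1 = extubated within the 24-h prediction window (toy labels)
w, b = train_logistic(X, y)

# Predicted extubation probability for a new, similar patient.
p = sigmoid(sum(wj * xj for wj, xj in zip(w, [0.35, 0.55, 0.95])) + b)
```

In practice the study's gradient-boosting models (XGBoost, CatBoost, LightGBM) would replace this hand-rolled learner, but the windowed-feature-to-probability pipeline is the same shape.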
Subject(s)
Airway Extubation, Critical Illness, Humans, Retrospective Studies, Critical Illness/therapy, Respiration, Artificial, Taiwan, Machine Learning
ABSTRACT
BACKGROUND: Machine learning (ML) models are increasingly used to predict short-term outcomes in critically ill patients, but studies of long-term outcomes are sparse. We used an explainable ML approach to establish 30-day, 90-day and 1-year mortality prediction models in critically ill ventilated patients. METHODS: We retrospectively included patients admitted to intensive care units during 2015-2018 at a tertiary hospital in central Taiwan and linked them with the Taiwanese nationwide death registration data. Three ML models, including extreme gradient boosting (XGBoost), random forest (RF) and logistic regression (LR), were used to establish the mortality prediction models. Furthermore, we used feature importance, Shapley Additive exPlanations (SHAP) plots, partial dependence plots (PDP) and local interpretable model-agnostic explanations (LIME) to explain the established models. RESULTS: We enrolled 6,994 patients and found that accuracy was similar among the three ML models; the area under the curve values for XGBoost in predicting 30-day, 90-day and 1-year mortality were 0.858, 0.839 and 0.816, respectively. Calibration curve and decision curve analyses further demonstrated the accuracy and applicability of the models. The SHAP summary plot and PDP illustrated the discriminative points of the APACHE (Acute Physiology and Chronic Health Evaluation) II score, haemoglobin and albumin in predicting 1-year mortality. The application of LIME and SHAP force plots quantified the probability of 1-year mortality and the contribution of key features at the individual patient level. CONCLUSIONS: We used an explainable ML approach, mainly XGBoost with SHAP and LIME plots, to establish an explainable 1-year mortality prediction model in critically ill ventilated patients.
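The reported areas under the curve (e.g. 0.858 for 30-day mortality) summarise how well a model ranks patients who died above patients who survived. As a reminder of what that number measures, the sketch below computes AUC directly from its rank-based (Mann-Whitney U) definition: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. The scores and labels are invented toy data, not values from the study.

```python
def auc(scores, labels):
    """AUC as the fraction of positive/negative pairs the scores rank correctly.

    Ties between a positive and a negative score count as half a win.
    """
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy predicted mortality risks and observed outcomes (1 = died).
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
print(auc(scores, labels))  # → 0.8888888888888888 (= 8/9)
```

A perfect ranker scores 1.0 and a random one about 0.5, which is why values such as 0.858 indicate good but imperfect discrimination; in practice one would use a library routine (e.g. scikit-learn's `roc_auc_score`) rather than this quadratic-time sketch.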