ABSTRACT
BACKGROUND: No existing instrument identifies patients at increased risk of poor outcomes after liver transplantation (LT) based solely on their preoperative characteristics. The primary aim of this study was to develop such a scoring system. Secondary aims were to assess the discriminative performance of the predictive model for 90-day mortality, 1-year mortality, and 5-year patient survival.

METHODS: The study population comprised 30 458 adults who underwent LT in the United States between January 2002 and June 2013. Machine learning techniques identified recipient age, Model for End-Stage Liver Disease score, body mass index, diabetes, and dialysis before LT as the strongest predictors of 90-day postoperative mortality. A weighted scoring system (minimum of 0 to a maximum of 6 points) was subsequently developed.

RESULTS: Recipients with 0, 1, 2, 3, 4, 5, and 6 points had an observed 90-day mortality of 6.0%, 8.7%, 10.4%, 11.9%, 15.7%, 16.0%, and 19.7%, respectively (P ≤ 0.001). One-year mortality was 9.8%, 13.4%, 15.8%, 17.2%, 23.0%, 25.2%, and 35.8%, respectively (P ≤ 0.001), and 5-year survival was 78%, 73%, 72%, 71%, 65%, 59%, and 48%, respectively (P = 0.001). The mean 90-day mortality for the cohort was 9%. The area under the curve of the model was 0.952 for the discrimination of patients with a 90-day mortality risk ≥10%.

CONCLUSIONS: Short- and long-term outcomes of patients undergoing cadaveric LT can be predicted using a scoring system based on recipients' preoperative characteristics. This tool could assist clinicians and researchers in identifying patients at increased risk of postoperative death.
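The score-to-outcome mapping reported above can be sketched as a simple lookup table. Note that the abstract does not give the component weights used to compute a recipient's point total from age, MELD score, body mass index, diabetes, and dialysis status, so only the mapping from an already-computed score (0 to 6) to the observed outcome percentages is shown; the function and field names are illustrative, not from the original study.

```python
# Observed outcomes by total risk score (0-6 points), as reported in the abstract.
# Values are percentages; keys are illustrative names, not from the original study.
OUTCOMES_BY_SCORE = {
    0: {"mortality_90d_pct": 6.0,  "mortality_1y_pct": 9.8,  "survival_5y_pct": 78},
    1: {"mortality_90d_pct": 8.7,  "mortality_1y_pct": 13.4, "survival_5y_pct": 73},
    2: {"mortality_90d_pct": 10.4, "mortality_1y_pct": 15.8, "survival_5y_pct": 72},
    3: {"mortality_90d_pct": 11.9, "mortality_1y_pct": 17.2, "survival_5y_pct": 71},
    4: {"mortality_90d_pct": 15.7, "mortality_1y_pct": 23.0, "survival_5y_pct": 65},
    5: {"mortality_90d_pct": 16.0, "mortality_1y_pct": 25.2, "survival_5y_pct": 59},
    6: {"mortality_90d_pct": 19.7, "mortality_1y_pct": 35.8, "survival_5y_pct": 48},
}


def outcomes_for_score(score: int) -> dict:
    """Return the observed outcome percentages for a recipient risk score (0-6)."""
    if score not in OUTCOMES_BY_SCORE:
        raise ValueError("score must be an integer from 0 to 6")
    return OUTCOMES_BY_SCORE[score]
```

For example, `outcomes_for_score(6)` returns the highest-risk stratum, whose 90-day mortality (19.7%) is more than triple that of recipients with 0 points (6.0%).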