Results 1 - 20 of 68
1.
Artif Intell Med ; 154: 102899, 2024 May 24.
Article in English | MEDLINE | ID: mdl-38843692

ABSTRACT

Predictive modeling is becoming an essential tool for clinical decision support, but health systems with smaller sample sizes may construct suboptimal or overly specific models. Models become over-specific when, besides true physiological effects, they also incorporate potentially volatile site-specific artifacts. These artifacts can change suddenly and can render the model unsafe. To obtain safer models, health systems with inadequate sample sizes may adopt one of the following options. First, they can use a generic model, such as one purchased from a vendor, but often such a model is not sufficiently specific to the patient population and is thus suboptimal. Second, they can participate in a research network. Paradoxically, though, sites with smaller datasets contribute correspondingly less to the joint model, again rendering the final model suboptimal. Lastly, they can use transfer learning, starting from a model trained on a large dataset and updating this model to the local population. This strategy can also result in a model that is over-specific. In this paper we present the consensus modeling paradigm, which enlists the help of a large site (the source) to reach a consensus model at the small site (the target). We evaluate the approach on predicting postoperative complications at two health systems with 9,044 and 38,045 patients (rare outcomes at about a 1% positive rate), and conduct a simulation study to understand the performance of consensus modeling relative to the other three approaches as a function of the available training sample size at the target site. We found that consensus modeling exhibited the least over-specificity at either the source or target site and achieved the highest combined predictive performance.
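The transfer-learning option described above (the third alternative) can be illustrated with a minimal sketch: fit at the large source site, then refit on the small target site while shrinking the coefficients toward the source fit rather than toward zero. This is an illustrative sketch on synthetic data, not the paper's consensus modeling method; the penalty strength `lam` and all data are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic stand-ins: a large "source" site and a small "target" site
X_src = rng.normal(size=(5000, 10))
y_src = (X_src @ np.r_[1.0, 0.5, np.zeros(8)] + rng.normal(size=5000) > 0).astype(int)
X_tgt = rng.normal(size=(200, 10))
y_tgt = (X_tgt @ np.r_[1.0, 0.0, 0.8, np.zeros(7)] + rng.normal(size=200) > 0).astype(int)

# Step 1: fit at the large source site
w_src = LogisticRegression(max_iter=1000).fit(X_src, y_src).coef_[0]

# Step 2: refit on the small target site, penalizing distance from the
# source coefficients instead of from zero (lam is a hypothetical strength)
lam = 1.0

def loss(w):
    z = X_tgt @ w
    nll = np.mean(np.log1p(np.exp(-z * (2 * y_tgt - 1))))  # logistic loss
    return nll + lam * np.sum((w - w_src) ** 2) / len(y_tgt)

w_tgt = minimize(loss, w_src).x  # target model, anchored at the source fit
```

With `lam` large the target model stays close to the source; with `lam` near zero it reduces to an ordinary local fit, trading over-specificity against sample-size limitations.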

2.
Stud Health Technol Inform ; 310: 1378-1379, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269655

ABSTRACT

Prolonged QT interval is an independent risk factor for all-cause mortality. However, evaluating the mortality effect of implementing a clinical decision support system to increase awareness and provide management recommendations has been challenging. Here we present our attempt to develop a model using only electronic data and different control groups.


Subject(s)
Decision Support Systems, Clinical , Humans , Control Groups , Patients , Risk Factors
3.
Stud Health Technol Inform ; 310: 219-223, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269797

ABSTRACT

Recurrent AKI has been found to be common among hospitalized patients after discharge, and early prediction may allow timely intervention and optimized post-discharge treatment [1]. There are significant gaps in the literature regarding risk prediction in the post-AKI population, and most existing work includes only a limited number of pre-selected variables [2]. In this study, we built and compared machine learning models using both knowledge-based and data-driven features to predict the risk of recurrent AKI within 1 year of discharge. Our results showed that the additional use of data-driven features statistically improved model performance, with a best AUC of 0.766 achieved by logistic regression.


Subject(s)
Acute Kidney Injury , Patient Discharge , Adult , Humans , Aftercare , Machine Learning , Hospitals , Acute Kidney Injury/diagnosis
4.
JAMA Netw Open ; 6(7): e2324176, 2023 07 03.
Article in English | MEDLINE | ID: mdl-37486632

ABSTRACT

Importance: The Deterioration Index (DTI), used by hospitals for predicting patient deterioration, has not been extensively validated externally, raising concerns about performance and equitable predictions. Objective: To locally validate DTI performance and assess its potential for bias in predicting patient clinical deterioration. Design, Setting, and Participants: This retrospective prognostic study included 13 737 patients admitted to 8 heterogeneous Midwestern US hospitals varying in size and type, including academic, community, urban, and rural hospitals. Patients were 18 years or older and admitted between January 1 and May 31, 2021. Exposure: DTI predictions made every 15 minutes. Main Outcomes and Measures: Deterioration, defined as the occurrence of any of the following while hospitalized: mechanical ventilation, intensive care unit transfer, or death. Performance of the DTI was evaluated using the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC). Bias measures were calculated across demographic subgroups. Results: A total of 5 143 513 DTI predictions were made for 13 737 patients across 14 834 hospitalizations. Among 13 918 encounters, the mean (SD) age of patients was 60.3 (19.2) years; 7636 (54.9%) were female, 11 345 (81.5%) were White, and 12 392 (89.0%) were of an ethnicity other than Hispanic or Latino. The prevalence of deterioration was 10.3% (n = 1436). The DTI produced AUROCs of 0.759 (95% CI, 0.756-0.762) at the observation level and 0.685 (95% CI, 0.671-0.700) at the encounter level. Corresponding AUPRCs were 0.039 (95% CI, 0.037-0.040) at the observation level and 0.248 (95% CI, 0.227-0.273) at the encounter level. Bias measures varied across demographic subgroups and were 14.0% worse for patients identifying as American Indian or Alaska Native and 19.0% worse for those who chose not to disclose their ethnicity.
Conclusions and Relevance: In this prognostic study, the DTI had modest ability to predict patient deterioration, with varying degrees of performance at the observation and encounter levels and across different demographic groups. Disparate performance across subgroups suggests the need for more transparency in model training data and reinforces the need to locally validate externally developed prediction models.


Subject(s)
Ethnicity , Hospitalization , Humans , Adult , Female , Middle Aged , Male , Retrospective Studies , Prognosis , Hospitals
5.
Addict Behav ; 141: 107657, 2023 06.
Article in English | MEDLINE | ID: mdl-36796176

ABSTRACT

Controversy surrounding the use of opioids for treatment, together with the unique characteristics of chronic pain, heightens the risks of abuse and dependence; however, it is unclear whether higher opioid doses at first exposure are associated with dependence and abuse. This study aimed to identify patients who developed dependence or opioid abuse after being exposed to opioids for the first time, and the risk factors associated with this outcome. A retrospective observational cohort study analyzed 2,411 patients between 2011 and 2017 who had a diagnosis of chronic pain and received opioids for the first time. A logistic regression model was used to estimate the likelihood of opioid dependence/abuse after first exposure based on mental health conditions, prior substance abuse disorders, demographics, and the amount of MME (morphine milligram equivalents) per day patients received. Of the 2,411 patients, 5.5% had a diagnosis of dependence or abuse after first exposure. Depression (OR = 2.09), previous non-opioid substance dependence or abuse (OR = 1.59), and receiving greater than 50 MME per day (OR = 1.03) showed statistically significant associations with developing opioid dependence or abuse, while age (OR = -1.03) appeared to be protective. Further studies should stratify chronic pain patients into groups at higher risk of developing opioid dependence or abuse, and develop alternative strategies for pain management and treatment beyond opioids. This study reinforces psychosocial problems as determinants of and risk factors for opioid dependence or abuse, and the need for safer opioid prescribing practices.
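The odds ratios quoted above are exponentiated logistic-regression coefficients. A minimal sketch on synthetic data, with hypothetical predictors standing in for depression, prior substance abuse, daily MME, and age (these names and the data are assumptions, not the study's variables):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Hypothetical covariates: depression, prior substance abuse,
# high daily MME, age (all standardized, synthetic)
X = rng.normal(size=(1000, 4))
y = (0.7 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 1.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
odds_ratios = np.exp(model.coef_[0])  # exponentiated coefficients are ORs
```

An OR above 1 indicates increased odds of the outcome per unit increase of the predictor; an OR below 1 indicates a protective association. Note that ORs obtained this way are always positive.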


Subject(s)
Chronic Pain , Opioid-Related Disorders , Humans , Analgesics, Opioid/therapeutic use , Chronic Pain/drug therapy , Retrospective Studies , Practice Patterns, Physicians' , Opioid-Related Disorders/drug therapy , Risk Factors
6.
IEEE J Biomed Health Inform ; 26(11): 5728-5737, 2022 11.
Article in English | MEDLINE | ID: mdl-36006882

ABSTRACT

A cornerstone of clinical medicine is intervening on a continuous exposure, such as titrating the dosage of a pharmaceutical or controlling a laboratory result. In clinical trials, continuous exposures are dichotomized into narrow ranges, excluding large portions of the realistic treatment scenarios. Existing computational methods for estimating the effect of continuous exposure rely on a set of strict assumptions. We introduce new methods that are more robust to violations of these assumptions. Our methods are based on the key observation that changes of exposure in the clinical setting are often achieved gradually, so effect estimates must be "locally" robust in narrower exposure ranges. We compared our methods with several existing methods on three simulated studies of increasing complexity. We also applied the methods to data from approximately 14,000 sepsis patients at M Health Fairview to estimate the effect of antibiotic administration latency on prolonged hospital stay. The proposed methods achieved good performance in all simulation studies. When the assumptions were violated, the proposed methods had estimation errors of one half to one fifth of those of the state-of-the-art methods. Applying our methods to the sepsis cohort resulted in effect estimates consistent with clinical knowledge.


Subject(s)
Sepsis , Humans , Computer Simulation , Cohort Studies , Sepsis/diagnosis
7.
Neuroimage Clin ; 35: 103077, 2022.
Article in English | MEDLINE | ID: mdl-35696810

ABSTRACT

Our goal was to understand the complex relationship between age, sex, midlife risk factors, and early white matter changes measured by diffusion tensor imaging (DTI), and their role in the evolution of longitudinal white matter hyperintensities (WMH). We identified 1564 participants (1396 cognitively unimpaired, 151 with mild cognitive impairment, and 17 with dementia) with an age range of 30-90 years from the population-based sample of the Mayo Clinic Study of Aging. We used computational causal structure discovery and regression analyses to evaluate the predictors of WMH and DTI, and to ascertain the mediating effect of DTI on WMH. We further derived causal graphs to understand the complex interrelationships between midlife protective factors, vascular risk factors, diffusion changes, and WMH. Older age, female sex, and hypertension were associated with higher baseline and progression of WMH as well as DTI measures (P ≤ 0.003). The effects of hypertension and sex on WMH were partially mediated by microstructural changes measured on DTI. Higher midlife physical activity was predictive of lower WMH through a direct impact on better white matter tract integrity as well as an indirect effect through reducing the risk of hypertension by lowering BMI. This study identified key risk factors, early brain changes, and pathways that may lead to the evolution of WMH.


Subject(s)
Hypertension , White Matter , Adult , Aged , Aged, 80 and over , Biomarkers/metabolism , Brain/diagnostic imaging , Diffusion Tensor Imaging/methods , Female , Humans , Magnetic Resonance Imaging/methods , Middle Aged , Risk Factors
8.
Crit Care Med ; 50(5): 799-809, 2022 05 01.
Article in English | MEDLINE | ID: mdl-34974496

ABSTRACT

OBJECTIVES: Sepsis remains a leading and preventable cause of hospital utilization and mortality in the United States. Despite updated guidelines, the optimal definition of sepsis as well as the optimal timing of bundled treatment remain uncertain. Identifying patients with infection who benefit from early treatment is a necessary step for tailored interventions. In this study, we aimed to illustrate clinical predictors of time-to-antibiotics among patients with severe bacterial infection and model the effect of delay on risk-adjusted outcomes across different sepsis definitions. DESIGN: A multicenter retrospective observational study. SETTING: A seven-hospital network including an academic tertiary care center. PATIENTS: Eighteen thousand three hundred fifteen patients admitted with severe bacterial illness, with or without sepsis by either acute organ dysfunction (AOD) or systemic inflammatory response syndrome positivity. MEASUREMENTS AND MAIN RESULTS: The primary exposure was time to antibiotics. We identified patient predictors of time-to-antibiotics, including demographics, chronic diagnoses, vitals, and laboratory results, and determined the impact of delay on a composite of in-hospital death or length of stay over 10 days. The distribution of time-to-antibiotics was similar across patients with and without sepsis. For all patients, a J-curve relationship between time-to-antibiotics and outcomes was observed, primarily driven by length of stay among patients without AOD. Patient characteristics provided good to excellent prediction of time-to-antibiotics irrespective of the presence of sepsis. Reduced time-to-antibiotics was associated with improved outcomes for all time points beyond 2.5 hours from presentation across sepsis definitions. CONCLUSIONS: Antibiotic timing is a function of patient factors regardless of sepsis criteria. Similarly, we show that early administration of antibiotics is associated with improved outcomes in all patients with severe bacterial illness. Our findings suggest that identifying infection is a rate-limiting and actionable step that can improve outcomes in septic and nonseptic patients.


Subject(s)
Bacterial Infections , Sepsis , Shock, Septic , Anti-Bacterial Agents/therapeutic use , Bacterial Infections/drug therapy , Hospital Mortality , Hospitalization , Humans , Retrospective Studies , United States
9.
Ann Surg ; 276(1): 180-185, 2022 07 01.
Article in English | MEDLINE | ID: mdl-33074897

ABSTRACT

OBJECTIVE: To demonstrate that a semi-automated approach to health data abstraction provides significant efficiencies and high accuracy. BACKGROUND: Surgical outcome abstraction remains laborious and a barrier to the sustainment of quality improvement registries like ACS-NSQIP. A supervised machine learning algorithm developed for detecting surgical site infections (SSIs) using structured and unstructured electronic health record data was tested to perform semi-automated SSI abstraction. METHODS: A Lasso-penalized logistic regression model was trained on 2011-2013 data (baseline performance measured with 10-fold cross-validation). A cutoff probability score from the training data was established, dividing the subsequent evaluation dataset into "negative" and "possible" SSI groups, with manual data abstraction performed only on the "possible" group. We evaluated performance on data from 2014, 2015, and both years. RESULTS: Overall, 6188 patients were in the 2011-2013 training dataset and 5132 patients in the 2014-2015 evaluation dataset. With use of the semi-automated approach, applying the cutoff score decreased the amount of manual abstraction by >90%, resulting in <1% false negatives in the "negative" group and a sensitivity of 82%. A blinded review of 10% of the "possible" group, considering only the features selected by the algorithm, resulted in high agreement with the gold standard based on full chart abstraction, pointing towards additional efficiency in the abstraction process by making it possible for abstractors to review limited, salient portions of the chart. CONCLUSION: Semi-automated, machine learning-aided SSI abstraction greatly accelerates the abstraction process and achieves very good performance. This could be translated to other postoperative outcomes and reduce cost barriers for wider ACS-NSQIP adoption.
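The semi-automated workflow above can be sketched as: fit a Lasso-penalized logistic regression, pick a probability cutoff on the training data that keeps false negatives rare, and route only cases above the cutoff to manual review. A minimal illustration on synthetic data; the 2% false-negative tolerance and all data are assumptions, not the study's settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# Synthetic stand-ins for EHR features and an SSI label
X = rng.normal(size=(3000, 20))
y = (X[:, 0] + X[:, 1] + rng.normal(size=3000) > 2.2).astype(int)
X_tr, X_ev, y_tr, y_ev = train_test_split(X, y, random_state=0)

# Lasso-penalized (L1) logistic regression, as in the abstract
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X_tr, y_tr)

# Choose a cutoff on training probabilities that keeps false negatives
# rare (here: the 2nd percentile of probabilities among true positives),
# then send only "possible" cases to manual chart review.
p_tr = clf.predict_proba(X_tr)[:, 1]
cutoff = np.quantile(p_tr[y_tr == 1], 0.02)   # hypothetical tolerance
p_ev = clf.predict_proba(X_ev)[:, 1]
possible = p_ev >= cutoff                     # flagged for manual review
```

Cases below the cutoff fall into the "negative" group and skip manual abstraction, which is where the reported >90% workload reduction comes from.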


Subject(s)
Machine Learning , Surgical Wound Infection , Algorithms , Electronic Health Records , Humans , Quality Improvement , Surgical Wound Infection/diagnosis
10.
AMIA Annu Symp Proc ; 2022: 1227-1236, 2022.
Article in English | MEDLINE | ID: mdl-37128413

ABSTRACT

Remdesivir has been widely used for the treatment of coronavirus disease (COVID) in hospitalized patients, but its nephrotoxicity is still under investigation [1]. Given the paucity of knowledge regarding the mechanism and optimal treatment of acute kidney injury (AKI) developing in the setting of COVID, we analyzed the role of remdesivir and built multifactorial causal models of COVID-AKI by applying causal discovery machine learning techniques. Risk factors for COVID-AKI and renal function measures were represented in a temporal sequence using longitudinal data from the EHR. Our models successfully recreated known causal pathways to changes in renal function and their interactions with each other, and examined the consistency of high-level causal relationships over a 4-day course of remdesivir. Results indicated a need for assessment of renal function on days 2 and 3 of remdesivir use, while uncovering that remdesivir may pose less risk of AKI than existing chronic kidney disease.


Subject(s)
Acute Kidney Injury , COVID-19 , Drug-Related Side Effects and Adverse Reactions , Humans , SARS-CoV-2 , COVID-19 Drug Treatment , Acute Kidney Injury/etiology
11.
J Patient Saf ; 18(4): 287-294, 2022 06 01.
Article in English | MEDLINE | ID: mdl-34569998

ABSTRACT

OBJECTIVES: The COVID-19 pandemic stressed hospital operations, requiring rapid innovations to address a rise in demand and specialized COVID-19 services while maintaining access to hospital-based care and facilitating expertise. We aimed to describe a novel hospital system approach to managing the COVID-19 pandemic, including multihospital coordination capability and transfer of COVID-19 patients to a single, dedicated hospital. METHODS: We included patients who tested positive for SARS-CoV-2 by polymerase chain reaction admitted to a 12-hospital network including a dedicated COVID-19 hospital. Our primary outcome was adherence to local guidelines, including admission risk stratification, anticoagulation, and dexamethasone treatment, assessed by difference-in-differences analysis after guideline dissemination. We evaluated outcomes and health care worker satisfaction. Finally, we assessed barriers to safe transfer, including transfer across different electronic health record systems. RESULTS: During the study, the system admitted a total of 1209 patients. Of these, 56.3% underwent transfer, supported by a physician-led System Operations Center. Patients who were transferred were older (P = 0.001) and had similar risk-adjusted mortality rates. Guideline adherence after dissemination was higher among patients who underwent transfer: admission risk stratification (P < 0.001), anticoagulation (P < 0.001), and dexamethasone administration (P = 0.003). Transfer across electronic health record systems was a perceived barrier to safety and reduced quality. Providers viewed our transfer approach positively. CONCLUSIONS: With standardized communication, interhospital transfers can be a safe and effective method of cohorting COVID-19 patients, are well received by health care providers, and have the potential to improve care quality.


Subject(s)
COVID-19 , Anticoagulants/therapeutic use , COVID-19/epidemiology , Dexamethasone/therapeutic use , Humans , Pandemics , SARS-CoV-2
12.
Stud Health Technol Inform ; 284: 209-214, 2021 Dec 15.
Article in English | MEDLINE | ID: mdl-34920510

ABSTRACT

This study aims to analyze how access to care influences patient mortality after liver transplantation in adults by analyzing the relationships between insurance coverage, income, geographic location, and post-transplantation mortality rates. It was hypothesized that sociodemographic variables, such as insurance type, geographic location, and income level, would impact post-liver transplant mortality rates. Results showed that unknown insurance coverage increased the likelihood of mortality post-transplant, income level was not found to be a significant indicator, and patients living in the Northeast region of the United States were more likely to die post-liver transplant.


Subject(s)
Liver Transplantation , Health Services Accessibility , Humans
13.
J Am Med Inform Assoc ; 29(1): 72-79, 2021 12 28.
Article in English | MEDLINE | ID: mdl-34963141

ABSTRACT

OBJECTIVE: Hospital-acquired infections (HAIs) are associated with significant morbidity, mortality, and prolonged hospital length of stay. Risk prediction models based on pre- and intraoperative data have been proposed to assess the risk of HAIs at the end of the surgery, but the performance of these models lags behind that of HAI detection models based on postoperative data. Postoperative data are more predictive than pre- or intraoperative data since they are closer in time to the outcomes, but they are unavailable when the risk models are applied (at the end of surgery). The objective is to study whether such data, which are temporally unavailable at prediction time (TUP) (and thus cannot directly enter the model), can be used to improve the performance of the risk model. MATERIALS AND METHODS: An extensive array of 12 methods based on logistic/linear regression and deep learning were used to incorporate the TUP data using a variety of intermediate representations of the data. Due to the hierarchical structure of the different HAI outcomes, a comparison of single- and multi-task learning frameworks is also presented. RESULTS AND DISCUSSION: The use of TUP data was always advantageous, as baseline methods, which cannot utilize TUP data, never achieved the top performance. The relative performance of the different models varies across the different outcomes. Regarding the intermediate representation, we found that its complexity was key and that incorporating label information was helpful. CONCLUSIONS: Using TUP data significantly helped predictive performance irrespective of the model complexity.


Subject(s)
Cross Infection , Cross Infection/epidemiology , Hospitals , Humans , Logistic Models , Morbidity
14.
Sci Rep ; 11(1): 21025, 2021 10 25.
Article in English | MEDLINE | ID: mdl-34697394

ABSTRACT

Modern AI-based clinical decision support models owe their success in part to the very large number of predictors they use. Safe and robust decision support, especially for intervention planning, requires causal, not associative, relationships. Traditional methods of causal discovery, clinical trials and extracting biochemical pathways, are resource intensive and may not scale up to the number and complexity of relationships sufficient for precision treatment planning. Computational causal structure discovery (CSD) from electronic health record (EHR) data can represent a solution; however, current CSD methods fall short on EHR data. This paper presents a CSD method tailored to EHR data. The application of the proposed methodology was demonstrated on type-2 diabetes mellitus. A large EHR dataset from Mayo Clinic was used as the development cohort, and another large dataset from an independent health system, M Health Fairview, as the external validation cohort. The proposed method achieved very high recall (0.95) and substantially higher precision than the general-purpose methods (0.84 versus 0.29 and 0.55). The causal relationships extracted from the development and external validation cohorts had a high (81%) overlap. Due to the adaptations to EHR data, the proposed method is more suitable for use in clinical decision support than the general-purpose methods.


Subject(s)
Diabetes Mellitus, Type 2/epidemiology , Electronic Health Records/statistics & numerical data , Algorithms , Cohort Studies , Diabetes Mellitus, Type 2/diagnosis , Diabetes Mellitus, Type 2/etiology , Disease Susceptibility , Humans , Machine Learning , Models, Statistical , Public Health Surveillance , Reproducibility of Results , Retrospective Studies , Risk Assessment , Risk Factors , Workflow
15.
JAMIA Open ; 4(3): ooab055, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34350391

ABSTRACT

OBJECTIVE: Ensuring an efficient response to COVID-19 requires a degree of inter-system coordination and capacity management coupled with an accurate assessment of hospital utilization, including length of stay (LOS). We aimed to establish optimal practices in inter-system data sharing and LOS modeling to support patient care and regional hospital operations. MATERIALS AND METHODS: We completed a retrospective observational study of patients admitted with COVID-19, followed by a 12-week prospective validation, involving 36 hospitals covering the upper Midwest. We developed a method for sharing de-identified patient data across systems for analysis. From this, we compared three approaches for identifying features associated with LOS: a generalized linear model (GLM), a random forest (RF), and aggregated system-level averages. We compared model performance by area under the ROC curve (AUROC). RESULTS: A total of 2068 patients were included and used for model derivation, and 597 patients for validation. LOS overall had a median of 5.0 days and a mean of 8.2 days. Consistent predictors of LOS included age, critical illness, oxygen requirement, weight loss, and nursing home admission. In the validation cohort, the RF model (AUROC 0.890) and GLM model (AUROC 0.864) achieved good to excellent prediction of LOS, but only marginally better than system averages in practice. CONCLUSION: Regional sharing of patient data allowed for effective prediction of LOS across systems; however, this provided only marginal improvement over hospital averages at the aggregate level. A federated approach of sharing aggregated system capacity and average LOS will likely allow for effective capacity management at the regional level.
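The GLM-versus-RF comparison can be sketched by fitting both models and scoring AUROC on a held-out set. The synthetic data and the binarized LOS outcome below are assumptions for illustration, not the study's cohort.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
# Synthetic stand-ins for features such as age, critical illness, oxygen need
X = rng.normal(size=(2000, 5))
# Hypothetical binary outcome, e.g. prolonged LOS (above the median)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 1.0).astype(int)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

glm = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # GLM-style model
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Compare discrimination on the held-out set
auc_glm = roc_auc_score(y_va, glm.predict_proba(X_va)[:, 1])
auc_rf = roc_auc_score(y_va, rf.predict_proba(X_va)[:, 1])
```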

16.
PLoS One ; 16(7): e0253696, 2021.
Article in English | MEDLINE | ID: mdl-34242241

ABSTRACT

OBJECTIVE: The association of body mass index (BMI) and all-cause mortality is controversial, frequently referred to as a paradox. Whether the cause is metabolic factors or statistical biases is still debated. We assessed the association of BMI and all-cause mortality considering a wide range of comorbidities and baseline mortality risk. METHODS: Retrospective cohort study of Olmsted County residents with at least one BMI measurement between 2000-2005, clinical data in the electronic health record, and a minimum 8-year follow-up or death within this time. The cohort was categorized based on baseline mortality risk: Low, Medium, Medium-high, High, and Very-high. All-cause mortality was assessed for BMI intervals of 5 and 0.5 kg/m2. RESULTS: Of 39,739 subjects (average age 52.6, range 18-89; 38.1% male), 11.86% died during the 8-year follow-up. The 8-year all-cause mortality risk had a "U" shape with a flat nadir in all the risk groups. Extreme BMI showed higher risk (BMI <15 = 36.4%, 15 to <20 = 15.4%, and ≥45 = 13.7%), while intermediate BMI categories showed a plateau between 10.6% and 12.5%. The increased risk attributed to baseline risk and comorbidities was more obvious than the risk based on BMI increase within the same risk groups. CONCLUSIONS: There is a complex association between BMI and all-cause mortality when evaluated including comorbidities and baseline mortality risk. In general, comorbidities are better predictors of mortality risk, except at extreme BMIs. In patients with no or few comorbidities, BMI seems to better define mortality risk. Aggressive management of comorbidities may provide better survival outcomes for patients with body mass between normal and moderate obesity.
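The analysis of all-cause mortality over BMI intervals of 5 kg/m2 can be sketched by binning BMI and computing per-bin mortality. The synthetic data with an assumed U-shaped risk below are for illustration only, not the study's cohort.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
bmi = rng.uniform(14, 50, size=5000)
# Assumed U-shaped mortality risk: elevated at both BMI extremes
risk = 0.11 + 0.15 * (bmi < 18) + 0.03 * (bmi >= 45)
died = rng.random(5000) < risk

df = pd.DataFrame({"bmi": bmi, "died": died})
bins = np.arange(15, 55, 5)  # BMI intervals of 5 kg/m2, as in the abstract
df["bmi_bin"] = pd.cut(df["bmi"], bins=bins)
# Crude mortality per BMI interval
mortality_by_bin = df.groupby("bmi_bin", observed=True)["died"].mean()
```

In the study's stratified version, the same binning is repeated within each baseline-risk group, which is what separates the contribution of BMI from that of comorbidities.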


Subject(s)
Body Mass Index , Comorbidity , Mortality , Adolescent , Adult , Aged , Aged, 80 and over , Electronic Health Records/statistics & numerical data , Female , Follow-Up Studies , Humans , Male , Middle Aged , Minnesota/epidemiology , Retrospective Studies , Risk Assessment/methods , Risk Assessment/statistics & numerical data , Risk Factors , Young Adult
17.
IEEE J Biomed Health Inform ; 25(7): 2476-2486, 2021 07.
Article in English | MEDLINE | ID: mdl-34129510

ABSTRACT

Diseases can show different courses of progression even when patients share the same risk factors. Recent studies have revealed that the use of trajectories, the order in which diseases manifest throughout life, can be predictive of the course of progression. In this study, we propose a novel computational method for learning disease trajectories from EHR data. The proposed method consists of three parts: first, an algorithm for extracting trajectories from EHR data; second, three criteria for filtering trajectories; and third, a likelihood function for assessing the risk of developing a set of outcomes given a trajectory set. We applied our methods to extract a set of disease trajectories from Mayo Clinic EHR data and evaluated it internally based on log-likelihood, which can be interpreted as the trajectories' ability to explain the observed (partial) disease progressions. We then externally evaluated the trajectories on EHR data from an independent health system, M Health Fairview. The proposed algorithm extracted a comprehensive set of disease trajectories that can explain the observed outcomes substantially better than competing methods, and the proposed filtering criteria selected a small subset of disease trajectories that are highly interpretable and suffered only a minimal (5% relative) loss of the ability to explain disease progression in both the internal and external validation.


Subject(s)
Algorithms , Electronic Health Records , Humans
18.
J Am Med Inform Assoc ; 28(9): 1885-1891, 2021 08 13.
Article in English | MEDLINE | ID: mdl-34151985

ABSTRACT

OBJECTIVE: In electronic health record data, the exact time stamp of major health events, defined by significant physiologic or treatment changes, is often missing. We developed and externally validated a method that can accurately estimate these time stamps based on accurate time stamps of related data elements. MATERIALS AND METHODS: A novel convolution-based change detection methodology was developed and tested using data from the national deidentified clinical claims OptumLabs data warehouse, then externally validated on a single-center dataset derived from the M Health Fairview system. RESULTS: We applied the methodology to estimate time to liver transplantation for waitlisted candidates. The median error between the estimated and the actual date was zero days, and the estimated date fell within the period of the actual date for 92% and 84% of the transplants in the development and validation samples, respectively. DISCUSSION: The proposed method can accurately estimate missing time stamps. Successful external validation suggests that the proposed method does not need to be refit to each health system; thus, it can be applied even when training data at the health system is insufficient or unavailable. The proposed method was applied to liver transplantation but can be applied more generally to any missing event that is accompanied by multiple related events that have accurate time stamps. CONCLUSION: Missing time stamps in electronic health record data can be estimated using time stamps of related events. Since the model was developed on a nationally representative dataset, it could be successfully transferred to a local health system without substantial loss of accuracy.
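While the paper's convolution-based change detection method is not detailed in the abstract, the core idea of locating an abrupt change by sliding a step-shaped kernel over a signal can be sketched as follows. The synthetic signal and kernel width are assumptions, not the study's design.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic daily signal with an abrupt level shift at day 60, standing in
# for related data elements that change around an unrecorded event
signal = np.concatenate([rng.normal(0, 1, 60), rng.normal(4, 1, 40)])

# Step-shaped kernel: the sliding correlation peaks where the level shifts,
# because it contrasts the mean after each point with the mean before it
kernel = np.concatenate([-np.ones(10), np.ones(10)]) / 10
response = np.correlate(signal, kernel, mode="valid")
estimated_day = int(np.argmax(response)) + len(kernel) // 2
```

The kernel width trades noise suppression against temporal resolution; wider kernels are more robust but blur the estimated date.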


Subject(s)
Electronic Health Records , Likelihood Functions
19.
J Am Coll Surg ; 232(6): 963-971.e1, 2021 06.
Article in English | MEDLINE | ID: mdl-33831539

ABSTRACT

BACKGROUND: Surgical complications have tremendous consequences and costs. Complication detection is important for quality improvement, but traditional manual chart review is burdensome. Automated mechanisms are needed to make this more efficient. To understand the generalizability of a machine learning algorithm between sites, automated surgical site infection (SSI) detection algorithms developed at one center were tested at another, distinct center. STUDY DESIGN: NSQIP patients had electronic health record (EHR) data extracted at one center (University of Minnesota Medical Center, Site A) over a 4-year period for model development and internal validation, and at a second center (University of California San Francisco, Site B) over a subsequent 2-year period for external validation. Models for automated NSQIP SSI detection of superficial, organ space, and total SSI within 30 days postoperatively were validated using area under the curve (AUC) scores and corresponding 95% confidence intervals. RESULTS: For the 8,883 patients (Site A) and 1,473 patients (Site B), AUC scores were not statistically different for any outcome, including superficial (external AUC 0.804; internal 95% CI, 0.784-0.874), organ/space (external AUC 0.905; internal 95% CI, 0.867-0.941), and total (external AUC 0.855; internal 95% CI, 0.854-0.908) SSI. False negative rates decreased with increasing case review volume and would be amenable to a strategy in which cases with low predicted probabilities of SSI could be excluded from chart review. CONCLUSIONS: Our findings demonstrated that SSI detection machine learning algorithms developed at one site were generalizable to another institution. SSI detection models are practically applicable to accelerate and focus chart review.


Subject(s)
Electronic Health Records/statistics & numerical data , Machine Learning , Medical Audit/methods , Quality Improvement , Surgical Wound Infection/diagnosis , Adult , Aged , Datasets as Topic , Female , Hospitals/statistics & numerical data , Humans , Male , Medical Audit/statistics & numerical data , Middle Aged , Risk Factors , Surgical Wound Infection/epidemiology
20.
AMIA Annu Symp Proc ; 2021: 1234-1243, 2021.
Article in English | MEDLINE | ID: mdl-35308921

ABSTRACT

Acute kidney injury (AKI) is potentially catastrophic and commonly seen among inpatients. In the United States, the quality of administrative coding data for capturing AKI accurately is questionable and needs to be re-examined. This retrospective study validated the quality of administrative coding for hospital-acquired AKI and explored opportunities to improve phenotyping performance by utilizing additional data sources from the electronic health record (EHR). A total of 34,570 patients were included, and the overall prevalence of AKI based on the KDIGO reference standard was 10.13%. We obtained quality measures of administrative coding (sensitivity: 0.486, specificity: 0.947, PPV: 0.509, NPV: 0.942 in the full cohort) significantly different from those previously reported in the U.S. The additional use of clinical notes, incorporating automatic NLP data extraction, was found to increase the AUC in phenotyping AKI, and AKI was better recognized in patients with heart failure, indicating disparities in the coding and management of AKI.
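The quality measures reported above follow directly from a 2x2 confusion matrix of administrative codes against the KDIGO reference standard. A sketch with illustrative counts chosen only to approximately reproduce the reported rates; these are not the study's actual counts.

```python
# Hypothetical 2x2 counts: administrative code (test) vs KDIGO reference.
# Illustrative values only, picked to roughly match the reported measures.
tp, fp, fn, tn = 486, 469, 514, 8531

sensitivity = tp / (tp + fn)  # codes catch this fraction of true AKI
specificity = tn / (tn + fp)  # fraction of non-AKI correctly uncoded
ppv = tp / (tp + fp)          # fraction of coded AKI that is real
npv = tn / (tn + fn)          # fraction of uncoded cases truly non-AKI
```

The low sensitivity and PPV alongside high specificity and NPV is the typical signature of under-coding: codes rarely fire, so absence of a code is reliable but presence of AKI is frequently missed.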


Subject(s)
Acute Kidney Injury , Acute Kidney Injury/diagnosis , Acute Kidney Injury/epidemiology , Adult , Cohort Studies , Hospitals , Humans , Inpatients , Retrospective Studies , Risk Factors