Results 1 - 20 of 60
1.
NPJ Digit Med ; 7(1): 149, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38844546

ABSTRACT

Malnutrition is a frequently underdiagnosed condition leading to increased morbidity, mortality, and healthcare costs. The Mount Sinai Health System (MSHS) deployed a machine learning model (MUST-Plus) to detect malnutrition upon hospital admission. However, in diverse patient groups, a poorly calibrated model may lead to misdiagnosis, exacerbating healthcare disparities. We evaluated the model's calibration across patient subgroups and explored methods to improve it. Data from adult patients admitted to five MSHS hospitals from January 1, 2021 to December 31, 2022 were analyzed. We compared the MUST-Plus prediction to the registered dietitian's formal assessment. Hierarchical calibration was assessed and compared between the recalibration sample (N = 49,562) of patients admitted between January 1, 2021 and December 31, 2022, and the hold-out sample (N = 17,278) of patients admitted between January 1, 2023 and September 30, 2023. Statistical differences in calibration metrics were tested using bootstrapping with replacement. Before recalibration, the overall model calibration intercept was -1.17 (95% CI: -1.20, -1.14), the slope was 1.37 (95% CI: 1.34, 1.40), and the Brier score was 0.26 (95% CI: 0.25, 0.26). Both weak and moderate measures of calibration differed significantly between White and Black patients and between male and female patients. Logistic recalibration significantly improved calibration across race and gender in the hold-out sample. The original MUST-Plus model showed significant differences in calibration between White and Black patients and overestimated malnutrition in females compared to males. Logistic recalibration effectively reduced miscalibration across all patient subgroups. Continual monitoring and timely recalibration can improve model accuracy.
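The logistic recalibration described above can be sketched generically: fit a calibration intercept and slope by regressing observed outcomes on the logit of the model's predicted probabilities, then rescale predictions through the fitted pair. This is a minimal pure-Python illustration (the function names and the simple gradient-ascent optimizer are invented for the sketch, not the MUST-Plus implementation):

```python
import math

def logit(p):
    # p must lie strictly in (0, 1)
    return math.log(p / (1.0 - p))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic_recalibration(probs, outcomes, iters=200, lr=0.5):
    """Fit y ~ sigmoid(a + b * logit(p)) by gradient ascent on the
    Bernoulli log-likelihood. b is the calibration slope; a plays the
    role of the calibration intercept."""
    xs = [logit(p) for p in probs]
    a, b = 0.0, 1.0  # start at "perfectly calibrated"
    n = len(xs)
    for _ in range(iters):
        ga = gb = 0.0
        for x, y in zip(xs, outcomes):
            err = y - sigmoid(a + b * x)  # residual on the probability scale
            ga += err
            gb += err * x
        a += lr * ga / n
        b += lr * gb / n
    return a, b

def recalibrate(p, a, b):
    """Map a raw predicted probability through the fitted recalibration."""
    return sigmoid(a + b * logit(p))
```

On data that are already well calibrated, the fit should return values near a = 0, b = 1 and leave predictions essentially unchanged.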

2.
Crit Care ; 28(1): 156, 2024 05 10.
Article in English | MEDLINE | ID: mdl-38730421

ABSTRACT

BACKGROUND: Current classification of acute kidney injury (AKI) in critically ill patients with sepsis relies only on severity, measured by maximum creatinine, which overlooks the inherent complexity and longitudinal course of this heterogeneous syndrome. The role of classifying AKI by early creatinine trajectories is unclear. METHODS: This retrospective study identified patients meeting Sepsis-3 criteria who developed AKI within 48 h of intensive care unit admission in the Medical Information Mart for Intensive Care-IV (MIMIC-IV) database. We used latent class mixed modelling to identify early creatinine trajectory-based classes of AKI in critically ill patients with sepsis. Our primary outcome was development of acute kidney disease (AKD). Secondary outcomes were the composite of AKD or all-cause in-hospital mortality by day 7, and the same composite by hospital discharge. We used multivariable regression to assess the impact of trajectory-based classification on outcomes, and the eICU database for external validation. RESULTS: Among 4197 critically ill patients with sepsis-associated AKI, we identified eight creatinine trajectory-based classes with distinct characteristics. Compared with the class with transient AKI, the class showing severe AKI with mild improvement but persistence had the highest adjusted risk of developing AKD (OR 5.16; 95% CI 2.87-9.24) and of the composite 7-day outcome (HR 4.51; 95% CI 2.69-7.56). The class demonstrating late mild AKI with persistence and worsening had the highest risk of the composite hospital-discharge outcome (HR 2.04; 95% CI 1.41-2.94). These associations were similar on external validation. CONCLUSIONS: These eight trajectory-based classes were good predictors of key outcomes in critically ill patients with sepsis-associated AKI, independent of AKI staging.
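Latent class mixed modelling is usually done with dedicated tools (e.g., R's lcmm package). As a crude, hedged stand-in for the idea of grouping patients by early creatinine trajectories, the sketch below clusters fixed-length creatinine series with plain k-means; initial centroids are passed in explicitly, and all names are invented for illustration:

```python
def kmeans_trajectories(trajs, centroids, iters=25):
    """Assign each fixed-length creatinine series to its nearest centroid
    (squared Euclidean distance), then recompute centroids; repeat.
    A rough analogue of trajectory-class discovery, not a latent class
    mixed model."""
    k = len(centroids)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for t in trajs:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(t, centroids[i])))
            groups[j].append(t)
        # move each centroid to the mean of its members (keep it if empty)
        centroids = [[sum(v) / len(g) for v in zip(*g)] if g else centroids[i]
                     for i, g in enumerate(groups)]
    return centroids, groups
```

With clearly separated "flat" and "rising" series, the two groups recover the two trajectory shapes.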


Subject(s)
Acute Kidney Injury , Creatinine , Critical Illness , Machine Learning , Sepsis , Humans , Acute Kidney Injury/blood , Acute Kidney Injury/diagnosis , Acute Kidney Injury/etiology , Acute Kidney Injury/classification , Male , Sepsis/blood , Sepsis/complications , Sepsis/classification , Female , Retrospective Studies , Creatinine/blood , Creatinine/analysis , Middle Aged , Aged , Machine Learning/trends , Intensive Care Units/statistics & numerical data , Intensive Care Units/organization & administration , Biomarkers/blood , Biomarkers/analysis , Hospital Mortality
6.
medRxiv ; 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38352556

ABSTRACT

Importance: Increased intracranial pressure (ICP) is associated with adverse neurological outcomes but requires invasive monitoring. Objective: To develop and validate an AI approach for detecting increased ICP (aICP) using only non-invasive extracranial physiological waveform data. Design: Retrospective diagnostic study of AI-assisted detection of increased ICP. We developed an AI model using exclusively extracranial waveforms, externally validated it, and assessed associations with clinical outcomes. Setting: The MIMIC-III Waveform Database (2000-2013), derived from patients admitted to an ICU at an academic Boston hospital, was used for development of the aICP model and to report associations with neurologic outcomes. Data from Mount Sinai Hospital (2020-2022) in New York City were used for external validation. Participants: Patients were included if they were older than 18 years and were monitored with electrocardiograms, arterial blood pressure, respiratory impedance plethysmography, and pulse oximetry. Patients who additionally had intracranial pressure monitoring were used for development (N=157) and external validation (N=56). Patients without intracranial monitors were used for association with outcomes (N=1694). Exposures: Extracranial waveforms including electrocardiogram, arterial blood pressure, plethysmography, and SpO2. Main Outcomes and Measures: Intracranial pressure > 15 mmHg. Measures were area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, and accuracy at a threshold of 0.5. We calculated odds ratios and p-values for phenotype associations. Results: The AUROC was 0.91 (95% CI, 0.90-0.91) on testing and 0.80 (95% CI, 0.80-0.80) on external validation. aICP had accuracy, sensitivity, and specificity of 73.8% (95% CI, 72.0%-75.6%), 99.5% (95% CI, 99.3%-99.6%), and 76.9% (95% CI, 74.0%-79.8%) on external validation.
A ten-percentile increment was associated with stroke (OR=2.12; 95% CI, 1.27-3.13), brain malignancy (OR=1.68; 95% CI, 1.09-2.60), subdural hemorrhage (OR=1.66; 95% CI, 1.07-2.57), intracerebral hemorrhage (OR=1.18; 95% CI, 1.07-1.32), and procedures like percutaneous brain biopsy (OR=1.58; 95% CI, 1.15-2.18) and craniotomy (OR = 1.43; 95% CI, 1.12-1.84; P < 0.05 for all). Conclusions and Relevance: aICP provides accurate, non-invasive estimation of increased ICP, and is associated with neurological outcomes and neurosurgical procedures in patients without intracranial monitoring.
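The AUROC and the threshold-based sensitivity/specificity reported above are standard quantities; a minimal pure-Python sketch (generic, not the study's code) computes AUROC as the probability that a random positive outscores a random negative, and confusion-matrix rates at a fixed cutoff:

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney interpretation: fraction of
    positive/negative pairs where the positive scores higher (ties 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(scores, labels, threshold=0.5):
    """Sensitivity and specificity when classifying positive at score >= threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)
```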

7.
Anesth Analg ; 138(2): 350-357, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38215713

ABSTRACT

Remote monitoring and artificial intelligence will become common and intertwined in anesthesiology by 2050. In the intraoperative period, technology will lead to monitoring systems that integrate multiple data streams and allow anesthesiologists to track patients more effectively, freeing them to focus on more complex tasks such as managing risk and making value-based decisions. It will also enable the continued integration of remote monitoring and control towers, with profound effects on coverage and practice models. In the PACU and ICU, technology will lead to early warning systems that can identify patients at risk of complications, enabling early interventions and more proactive care. The integration of augmented reality will allow for better integration of diverse types of data and better decision-making. Postoperatively, the proliferation of wearable devices that monitor patient vital signs and track progress will allow patients to be discharged from the hospital sooner and receive care at home. This will require increased use of telemedicine, allowing patients to consult with doctors remotely. All of these advances will require changes to legal and regulatory frameworks to enable workflows that differ from those familiar to today's providers.


Subject(s)
Artificial Intelligence , Telemedicine , Humans , Monitoring, Physiologic , Vital Signs , Anesthesiologists
8.
J Clin Anesth ; 93: 111344, 2024 05.
Article in English | MEDLINE | ID: mdl-38007845

ABSTRACT

STUDY OBJECTIVE: Perioperative neuromuscular blocking agents are pharmacologically reversed to minimize complications associated with residual neuromuscular block. Neuromuscular block reversal with anticholinesterases (e.g., neostigmine) requires coadministration of an anticholinergic agent (e.g., glycopyrrolate) to mitigate muscarinic activity; sugammadex, devoid of cholinergic activity, does not require anticholinergic coadministration. Single-institution studies have found a decreased incidence of post-operative urinary retention associated with sugammadex reversal. This study used a multicenter database to better understand the association between neuromuscular block reversal technique and post-operative urinary retention. DESIGN: Retrospective cohort study utilizing a large healthcare database. SETTING: Non-profit, non-governmental, community, and teaching hospitals and health systems from rural and urban areas. PATIENTS: 61,898 matched adult inpatients and 95,500 matched adult outpatients. INTERVENTIONS: Neuromuscular block reversal with sugammadex or neostigmine plus glycopyrrolate. MEASUREMENTS: Incidence of post-operative urinary retention by reversal agent, and the independent association between reversal technique and risk of post-operative urinary retention. MAIN RESULTS: The incidence of post-operative urinary retention was twofold greater among patients reversed with neostigmine plus glycopyrrolate than among those reversed with sugammadex (5.0% vs 2.4% in inpatients; 0.9% vs 0.4% in outpatients; both p < 0.0001). Multivariable logistic regression identified reversal with neostigmine as independently associated with greater risk of post-operative urinary retention (inpatients: odds ratio, 2.20; 95% confidence interval, 2.00 to 2.41; p < 0.001; outpatients: odds ratio, 2.57; 95% confidence interval, 2.13 to 3.10; p < 0.001).
Post-operative urinary retention-related visits within 2 days of discharge were five-fold more frequent among patients reversed with neostigmine than with sugammadex, for both inpatients (0.05% vs. 0.01%; p = 0.018) and outpatients (0.5% vs. 0.1%; p < 0.0001). CONCLUSION: Although this study suggests that neuromuscular block reversal with neostigmine can increase post-operative urinary retention risk, additional studies are needed to fully understand the association.
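For intuition about the unadjusted comparison of retention rates (the paper itself reports multivariable odds ratios), a generic sketch of an odds ratio with a Woolf 95% confidence interval from two event/total pairs looks like this (function and argument names are illustrative):

```python
import math

def odds_ratio_ci(exposed_events, exposed_total, control_events, control_total, z=1.96):
    """Unadjusted odds ratio with a Woolf (log-normal) confidence interval
    from a 2x2 table. All four cell counts must be nonzero."""
    a = exposed_events
    b = exposed_total - exposed_events   # exposed non-events
    c = control_events
    d = control_total - control_events   # control non-events
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```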


Subject(s)
Neuromuscular Blockade , Neuromuscular Nondepolarizing Agents , Urinary Retention , Adult , Humans , Neostigmine/adverse effects , Sugammadex/adverse effects , Neuromuscular Blockade/adverse effects , Neuromuscular Blockade/methods , Urinary Retention/chemically induced , Urinary Retention/epidemiology , Glycopyrrolate , Retrospective Studies , Cholinesterase Inhibitors/adverse effects , Postoperative Complications/epidemiology , Postoperative Complications/etiology , Postoperative Complications/prevention & control , Hospitals
10.
J Clin Anesth ; 92: 111295, 2024 02.
Article in English | MEDLINE | ID: mdl-37883900

ABSTRACT

STUDY OBJECTIVE: To explore validation of a model predicting patients' risk of failing extubation, to help providers make informed, data-driven decisions about the optimal timing of extubation. DESIGN: We performed temporal, geographic, and domain validations of a model for the risk of reintubation after cardiac surgery by assessing its performance on data sets from three academic medical centers, with temporal validation using data from the institution where the model was developed. SETTING: Three academic medical centers in the United States. PATIENTS: Adult patients arriving in the cardiac intensive care unit with an endotracheal tube in place after cardiac surgery. INTERVENTIONS: Receiver operating characteristic (ROC) curves and concordance statistics were used as measures of discriminative ability, and calibration curves and Brier scores were used to assess the model's predictive ability. MEASUREMENTS: Temporal validation was performed in 1642 patients with a reintubation rate of 4.8%, with the model demonstrating strong discrimination (optimism-corrected c-statistic 0.77) and low predictive error (Brier score 0.044) but poor precision and recall (optimal F1 score 0.29). Combined domain and geographic validation was performed in 2041 patients with a reintubation rate of 1.5%. The model displayed solid discriminative ability (optimism-corrected c-statistic = 0.73) and low predictive error (Brier score = 0.0149) but low precision and recall (optimal F1 score = 0.13). Geographic validation was performed in 2489 patients with a reintubation rate of 1.6%, with the model displaying good discrimination (optimism-corrected c-statistic = 0.71) and low predictive error (Brier score = 0.0152) but poor precision and recall (optimal F1 score = 0.13). MAIN RESULTS: The reintubation model displayed strong discriminative ability and low predictive error within each validation cohort.
CONCLUSIONS: Future work is needed to explore how to optimize models before local implementation.
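The "optimal F1 score" reported above is typically found by sweeping candidate thresholds and keeping the best precision/recall trade-off. A minimal generic sketch (not the authors' code):

```python
def optimal_f1(scores, labels):
    """Sweep every distinct score as a cutoff (positive when score >= cutoff)
    and return (best_f1, best_threshold)."""
    best = (0.0, 0.0)
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        if tp == 0:
            continue  # F1 undefined/zero when nothing is correctly flagged
        prec = tp / (tp + fp)
        rec = tp / (tp + fn)
        f1 = 2 * prec * rec / (prec + rec)
        if f1 > best[0]:
            best = (f1, t)
    return best
```

In low-prevalence cohorts like the 1.5% reintubation rate above, F1 can be poor even when the c-statistic is good, because precision collapses at thresholds that preserve recall.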


Subject(s)
Cardiac Surgical Procedures , Adult , Humans , Retrospective Studies , Cardiac Surgical Procedures/adverse effects , Intensive Care Units , Intubation, Intratracheal/adverse effects
11.
BJA Open ; 8: 100236, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38026082

ABSTRACT

Background: International guidelines recommend quantitative neuromuscular monitoring when administering neuromuscular blocking agents. The train-of-four count is important for determining the depth of block and the appropriate reversal agent and dose. However, identifying valid compound motor action potentials (cMAPs) during surgery can be challenging because of low-amplitude signals and an inability to observe motor responses. A convolutional neural network (CNN) that classifies cMAPs as valid or not might improve the accuracy of such determinations. Methods: We modified a high-accuracy CNN originally developed to identify handwritten digits. For training, we used digitised electromyograph waveforms (TetraGraph) from a previous study of 29 patients and tuned the model parameters using leave-one-out cross-validation. External validation used a dataset of 19 patients from another study with the same neuromuscular block monitor but different patient, surgical, and protocol characteristics. All patients underwent ulnar nerve stimulation at the wrist, and the surface electromyogram was recorded from the adductor pollicis muscle. Results: The tuned CNN performed well on the validation dataset, with an accuracy of 0.9997 (99% confidence interval 0.9994-0.9999) and an F1 score of 0.9998. Performance was equally good for classifying the four individual responses in the train-of-four sequence. The calibration plot showed excellent agreement between predicted probabilities and the actual prevalence of valid cMAPs. Ten-fold cross-validation using all data showed similarly high performance. Conclusions: The CNN distinguished valid cMAPs from artifacts after ulnar nerve stimulation at the wrist with >99.5% accuracy. Incorporation of such a process within quantitative electromyographic neuromuscular block monitors is feasible.
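The leave-one-out cross-validation used for tuning above is simple to state precisely: each sample is held out once while the model is fit on the rest. A generic sketch of the fold logic (names invented; the study's actual CNN training is not reproduced here):

```python
def leave_one_out(n):
    """Yield (train_indices, test_index) pairs: each of the n samples
    is held out exactly once."""
    for i in range(n):
        yield [j for j in range(n) if j != i], i

def loo_accuracy(xs, ys, fit, predict):
    """Average held-out correctness over all leave-one-out folds.
    `fit` builds a model from training data; `predict` applies it."""
    hits = 0
    for train, test in leave_one_out(len(xs)):
        model = fit([xs[j] for j in train], [ys[j] for j in train])
        hits += predict(model, xs[test]) == ys[test]
    return hits / len(xs)
```

Any classifier can be plugged in through `fit`/`predict`; a trivial majority-class model already exercises the fold bookkeeping.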

12.
Ann Intern Med ; 176(10): 1358-1369, 2023 10.
Article in English | MEDLINE | ID: mdl-37812781

ABSTRACT

BACKGROUND: Substantial effort has been directed toward demonstrating uses of predictive models in health care. However, implementation of these models into clinical practice may influence patient outcomes, which in turn are captured in electronic health record data. As a result, deployed models may affect the predictive ability of current and future models. OBJECTIVE: To estimate changes in predictive model performance with use through 3 common scenarios: model retraining, sequentially implementing 1 model after another, and intervening in response to a model when 2 are simultaneously implemented. DESIGN: Simulation of model implementation and use in critical care settings at various levels of intervention effectiveness and clinician adherence. Models were either trained or retrained after simulated implementation. SETTING: Admissions to the intensive care unit (ICU) at Mount Sinai Health System (New York, New York) and Beth Israel Deaconess Medical Center (Boston, Massachusetts). PATIENTS: 130 000 critical care admissions across both health systems. INTERVENTION: Across 3 scenarios, interventions were simulated at varying levels of clinician adherence and effectiveness. MEASUREMENTS: Statistical measures of performance, including threshold-independent (area under the curve) and threshold-dependent measures. RESULTS: At fixed 90% sensitivity, in scenario 1 a mortality prediction model lost 9% to 39% specificity after retraining once and in scenario 2 a mortality prediction model lost 8% to 15% specificity when created after the implementation of an acute kidney injury (AKI) prediction model; in scenario 3, models for AKI and mortality prediction implemented simultaneously, each led to reduced effective accuracy of the other by 1% to 28%. LIMITATIONS: In real-world practice, the effectiveness of and adherence to model-based recommendations are rarely known in advance. Only binary classifiers for tabular ICU admissions data were simulated. 
CONCLUSION: In simulated ICU settings, a universally effective model-updating approach for maintaining model performance does not seem to exist. Model use may have to be recorded to maintain viability of predictive modeling. PRIMARY FUNDING SOURCE: National Center for Advancing Translational Sciences.
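The "fixed 90% sensitivity" framing above implies choosing, per model, the score cutoff that captures 90% of true positives and then reading off specificity. A minimal generic sketch of that operating-point selection (not the simulation code from the study):

```python
import math

def threshold_at_sensitivity(scores, labels, target=0.90):
    """Return the cutoff (classify positive when score >= cutoff) that
    still captures at least `target` of the true positives."""
    pos = sorted((s for s, y in zip(scores, labels) if y == 1), reverse=True)
    k = math.ceil(target * len(pos))  # number of positives that must be flagged
    return pos[k - 1]

def specificity_at(scores, labels, threshold):
    """Fraction of true negatives correctly left unflagged at this cutoff."""
    neg = [s for s, y in zip(scores, labels) if y == 0]
    return sum(1 for s in neg if s < threshold) / len(neg)
```

Comparing `specificity_at` before and after a simulated intervention shifts the score distribution is one way to quantify the specificity losses reported above.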


Subject(s)
Acute Kidney Injury , Artificial Intelligence , Humans , Intensive Care Units , Critical Care , Delivery of Health Care
13.
medRxiv ; 2023 Sep 07.
Article in English | MEDLINE | ID: mdl-37732187

ABSTRACT

Kidney disease affects 50% of all diabetic patients; however, prediction of disease progression has been challenging due to inherent disease heterogeneity. We use deep learning to identify novel genetic signatures prognostically associated with outcomes. Using autoencoders and unsupervised clustering of electronic health record data on 1,372 diabetic kidney disease patients, we establish two clusters with differential prevalence of end-stage kidney disease. Exome-wide associations identify a novel variant in ARHGEF18, a Rho guanine exchange factor specifically expressed in glomeruli. Overexpression of ARHGEF18 in human podocytes leads to impairments in focal adhesion architecture, cytoskeletal dynamics, cellular motility, and RhoA/Rac1 activation. Mutant GEF18 is resistant to ubiquitin mediated degradation leading to pathologically increased protein levels. Our findings uncover the first known disease-causing genetic variant that affects protein stability of a cytoskeletal regulator through impaired degradation, a potentially novel class of expression quantitative trait loci that can be therapeutically targeted.

14.
Nutr Metab Cardiovasc Dis ; 33(11): 2189-2198, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37567789

ABSTRACT

BACKGROUND AND AIMS: Ectopic lipid storage is implicated in type 2 diabetes pathogenesis; hence, exercise to deplete stores (i.e., at the intensity that allows for maximal rate of lipid oxidation; MLO) might be optimal for restoring metabolic health. This intensity ("Fatmax") is estimated during incremental exercise ("Fatmax test"). However, in "the field" general recommendations exist regarding a range of percentages of maximal heart rate (HR) to elicit MLO. The degree to which this range is aligned with measured Fatmax has not been investigated. We compared measured HR at Fatmax, with maximal HR percentages within the typically recommended range in a sample of 26 individuals (Female: n = 11, European ancestry: n = 17). METHODS AND RESULTS: Subjects completed a modified Fatmax test with a 5-min warmup, followed by incremental stages starting at 15 W with work rate increased by 15 W every 5 min until termination criteria were reached. Pulmonary gas exchange was recorded and average values for V̇O2 and V̇CO2 for the final minute of each stage were used to estimate substrate-oxidation rates. We modeled lipid-oxidation kinetics using a sinusoidal model and expressed MLO relative to peak V̇O2 and HR. Bland-Altman analysis demonstrated lack of concordance between HR at Fatmax and at 50%, 70%, and 80% of age-predicted maximum with a mean difference of 23 beats·min-1. CONCLUSION: Our results indicate that estimated "fat-burning" heart rate zones are inappropriate for prescribing exercise to elicit MLO and we recommend direct individual exercise lipid oxidation measurements to elicit these values.
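The Bland-Altman analysis cited above reduces to the mean difference (bias) between two measurement methods plus 95% limits of agreement (bias ± 1.96 SD of the differences). A minimal generic sketch:

```python
import statistics

def bland_altman(method_a, method_b):
    """Return (bias, (lower_limit, upper_limit)) for paired measurements
    from two methods: bias is the mean difference, limits are
    bias +/- 1.96 * SD of the differences."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A large bias, like the 23 beats·min-1 reported above, indicates the two approaches cannot be used interchangeably even if they correlate.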

16.
Adv Kidney Dis Health ; 30(1): 53-60, 2023 01.
Article in English | MEDLINE | ID: mdl-36723283

ABSTRACT

Acute kidney injury (AKI) is a common complication after surgery, especially cardiac and aortic procedures, and has a significant impact on morbidity and mortality. Early identification of high-risk patients and effective preventive and therapeutic approaches are the main strategies for reducing perioperative AKI. Consequently, several risk-prediction models and risk assessment scores have been developed for the prediction of perioperative AKI. However, a majority of these risk scores are derived only from preoperative data, excluding intraoperative time-series monitoring data such as heart rate and blood pressure. Moreover, the complexity of the pathophysiology of AKI, as well as its nonlinear and heterogeneous nature, imposes limitations on the use of linear statistical techniques. The digitization of clinical medicine, the widespread availability of electronic medical records, and the increased use of continuous monitoring have generated vast quantities of data. Machine learning has recently shown promise as a method for automatically integrating large amounts of data to predict the risk of perioperative outcomes. In this article, we discuss the development and limitations of existing work and potential future directions for models using machine learning techniques to predict AKI after surgery.


Subject(s)
Acute Kidney Injury , Artificial Intelligence , Humans , Acute Kidney Injury/diagnosis , Risk Assessment/methods , Risk Factors , Machine Learning
17.
Anesth Analg ; 136(1): 111-122, 2023 01 01.
Article in English | MEDLINE | ID: mdl-36534718

ABSTRACT

BACKGROUND: A single laboratory range for all individuals may fail to take into account underlying physiologic differences based on sex and genetic factors. We hypothesized that laboratory distributions differ based on self-reported sex and ethnicity and that ranges stratified by these factors better correlate with postoperative mortality and acute kidney injury (AKI). METHODS: Results from metabolic panels, complete blood counts, and coagulation panels for patients in outpatient encounters were identified from our electronic health record. Patients were grouped based on self-reported sex (2 groups) and ethnicity (6 groups). Stratified ranges were set to be the 2.5th/97.5th percentile for each sex/ethnic group. For patients undergoing procedures, each patient/laboratory result was classified as normal/abnormal using the stratified and nonstratified (traditional) ranges; overlap in the definitions was assessed between the 2 classifications by looking for the percentage of agreement in result classifications of normal/abnormal using the 2 methods. To assess which definitions of normal are most associated with adverse postoperative outcomes, the odds ratio (OR) for each outcome/laboratory result pair was assessed, and the frequency that the confidence intervals of ORs for the stratified versus nonstratified range did not overlap was examined. RESULTS: Among the 300 unique combinations (race × sex × laboratory type), median proportion overlap (meaning patient was either "normal" or "abnormal" for both methodologies) was 0.86 [q1, 0.80; q3, 0.89]. All laboratory results except 6 overlapped at least 80% of the time. The frequency of overlap did not differ among the racial/ethnic groups. In cases where the ORs were different, the stratified range was better associated with both AKI and mortality (P < .001). There was no trend of bias toward any specific sex/ethnic group. 
CONCLUSIONS: Baseline "normal" laboratory values differ across sex and ethnic groups, and ranges stratified by these groups are better associated with postoperative AKI and mortality as compared to the standard reference ranges.
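The stratified ranges described above are the 2.5th/97.5th percentiles computed separately per group. A generic sketch of that stratification (names invented; not the authors' pipeline):

```python
def percentile(sorted_vals, q):
    """Linear-interpolation percentile of a pre-sorted list; q in [0, 100]."""
    idx = (len(sorted_vals) - 1) * q / 100
    lo, hi = int(idx), min(int(idx) + 1, len(sorted_vals) - 1)
    frac = idx - lo
    return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

def stratified_ranges(results):
    """results: iterable of (group, value) pairs.
    Return {group: (2.5th percentile, 97.5th percentile)}."""
    by_group = {}
    for g, v in results:
        by_group.setdefault(g, []).append(v)
    return {g: (percentile(sorted(vs), 2.5), percentile(sorted(vs), 97.5))
            for g, vs in by_group.items()}
```

A laboratory result is then flagged abnormal against its own group's range rather than a single population-wide range.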


Subject(s)
Acute Kidney Injury , Ethnicity , Humans , Retrospective Studies , Reference Values , Patient Reported Outcome Measures
18.
Transplant Direct ; 8(10): e1380, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36204192

ABSTRACT

Intraoperative hypotension (IOH) is common and associated with mortality in major surgery. Although patients undergoing liver transplantation (LT) have low baseline blood pressure, the relationship between blood pressure and mortality in LT is not well studied. We aimed to determine the mean arterial pressure (MAP) thresholds associated with 30-d mortality in LT. Methods: We performed a retrospective cohort study. The data included patient demographics, pertinent preoperative and intraoperative variables, and MAP using various metrics and thresholds. The endpoint was 30-d mortality after LT. Results: One thousand one hundred seventy-eight patients from 2013 to 2020 were included. A majority of patients were exposed to IOH, many for prolonged periods. Eighty-nine patients (7.6%) died within 30 d after LT. The unadjusted analysis showed that predicted mortality was associated with MAP <45 to 60 mm Hg but not MAP <65 mm Hg. The association between MAP and mortality was further tested using adjustment and various duration cutoffs. After adjustment, the shortest durations of MAP <45, 50, and 55 mm Hg associated with 30-d mortality were 6, 10, and 25 min (odds ratios, 1.911, 1.812, and 1.772; 95% confidence intervals, 1.100-3.320, 1.039-3.158, and 1.008-3.114; P = 0.002, 0.036, and 0.047), respectively. Exposure to MAP <60 mm Hg for up to 120 min was not associated with increased mortality. Conclusion: In this large retrospective study, we found IOH was common during LT. Intraoperative MAP <55 mm Hg was associated with increased 30-d mortality after LT, and the duration associated with postoperative mortality was shorter at lower MAP thresholds.
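Duration-under-threshold exposure metrics like those above are straightforward to compute from an evenly sampled MAP series. A minimal sketch (assuming one reading per minute; names are illustrative, not the study's code):

```python
def minutes_below(map_series, threshold, interval_min=1):
    """Total exposure time (minutes) with MAP strictly below the threshold,
    given evenly sampled readings."""
    return sum(interval_min for m in map_series if m < threshold)

def longest_run_below(map_series, threshold, interval_min=1):
    """Longest continuous stretch below the threshold, in minutes."""
    best = run = 0
    for m in map_series:
        run = run + interval_min if m < threshold else 0
        best = max(best, run)
    return best
```

Cumulative versus continuous exposure can differ substantially, which is why duration cutoffs matter in analyses like the one above.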

19.
Medicine (Baltimore) ; 101(42): e31176, 2022 Oct 21.
Article in English | MEDLINE | ID: mdl-36281117

ABSTRACT

METHODS: The EHRs of 32,734 patients >18 years of age who underwent surgery and had postoperative delirium (POD) assessment were reviewed. Patient characteristics and study variables were summarized by delirium group. We constructed univariate logistic regression models for POD using each study variable to estimate odds ratios (ORs) and built a multivariable logistic regression model with stepwise variable selection. To create a clinically useful, implementable tool, we constructed a nomogram to predict the risk of delirium. RESULTS: Overall, we found a POD rate of 3.7% across the study population. The model achieved an area under the ROC curve of 0.83 (95% CI 0.82-0.84). We found that age, higher American Society of Anesthesiologists (ASA) physical status (ASA 3-4: OR 2.81, CI 1.49-5.28, P < .001), depression (OR 1.28, CI 1.12-1.47, P < .001), postoperative benzodiazepine use (OR 3.52, CI 3.06-4.06, P < .001), and non-elective cases (urgent: OR 3.51, CI 2.92-4.21, P < .001; emergent: OR 3.99, CI 3.21-4.96, P < .001; critically emergent: OR 5.30, CI 3.53-7.96, P < .001) were associated with POD. DISCUSSION: We distinguished the contribution of individual risk factors to the development of POD and created an easy-to-use tool with the potential to accurately identify patients at high risk of delirium, a first step toward preventing POD.
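A nomogram is essentially a graphical reading of a fitted logistic model, where each risk factor contributes its coefficient (ln OR) to a linear predictor. The sketch below shows that arithmetic with an invented intercept and the flavor of ORs reported above; it is an illustration of the mechanism, not the published model:

```python
import math

def pod_risk(intercept, features):
    """Predicted probability from a logistic model.
    features: list of (odds_ratio, present_flag) pairs; each present factor
    adds ln(odds_ratio) to the linear predictor. The intercept here is
    hypothetical, not taken from the paper."""
    z = intercept + sum(math.log(or_) * flag for or_, flag in features)
    return 1.0 / (1.0 + math.exp(-z))
```

For example, with a hypothetical baseline intercept of -3, adding postoperative benzodiazepine use (OR 3.52) multiplies the odds by 3.52 and raises the predicted probability accordingly.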


Subject(s)
Delirium , Postoperative Complications , Humans , Retrospective Studies , Postoperative Complications/etiology , Delirium/diagnosis , Delirium/epidemiology , Delirium/etiology , Risk Factors , Benzodiazepines
20.
Anesth Analg ; 135(5): 1057-1063, 2022 11 01.
Article in English | MEDLINE | ID: mdl-36066480

ABSTRACT

BACKGROUND: Visual analytics is the science of analytical reasoning supported by interactive visual interfaces called dashboards. In this report, we describe our experience addressing the challenges in visual analytics of anesthesia electronic health record (EHR) data using a commercially available business intelligence (BI) platform. As a primary outcome, we discuss some performance metrics of the dashboards, and as a secondary outcome, we outline some operational enhancements and financial savings associated with deploying the dashboards. METHODS: Data were transferred from the EHR to our departmental servers using several parallel processes. A custom structured query language (SQL) query was written to extract the relevant data fields and to clean the data. Tableau was used to design multiple dashboards for clinical operation, performance improvement, and business management. RESULTS: Before deployment of the dashboards, detailed case counts and attributions were available for the operating rooms (ORs) from perioperative services; however, the same level of detail was not available for non-OR locations. Deployment of the yearly case count dashboards provided near-real-time case count information from both central and non-OR locations among multiple campuses, which was not previously available. The visual presentation of monthly data for each year allowed us to recognize seasonality in case volumes and adjust our supply chain to prevent shortages. The dashboards highlighted the systemwide volume of cases in our endoscopy suites, which allowed us to target these supplies for pricing negotiations, with an estimated annual cost savings of $250,000. Our central venous pressure (CVP) dashboard enabled us to provide individual practitioner feedback, thus increasing our monthly CVP checklist compliance from approximately 92% to 99%. 
CONCLUSIONS: The customization and visualization of EHR data are both possible and worthwhile for the leveraging of information into easily comprehensible and actionable data for the improvement of health care provision and practice management. Limitations inherent to EHR data presentation make this customization necessary, and continued open access to the underlying data set is essential.
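The SQL extraction step described above, aggregating case counts by location and month for dashboarding, can be sketched with an in-memory SQLite database. The schema and names below are invented stand-ins for the EHR extract, not the department's actual tables:

```python
import sqlite3

# Toy schema standing in for the cleaned EHR extract.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cases (location TEXT, case_date TEXT)")
conn.executemany("INSERT INTO cases VALUES (?, ?)", [
    ("OR-Main", "2022-01-05"), ("OR-Main", "2022-01-20"),
    ("Endoscopy", "2022-01-11"), ("Endoscopy", "2022-02-03"),
])

# Monthly case counts per location -- the kind of query a dashboard
# (e.g., Tableau) would be fed for near-real-time case count views.
rows = conn.execute("""
    SELECT location, strftime('%Y-%m', case_date) AS month, COUNT(*) AS n
    FROM cases
    GROUP BY location, month
    ORDER BY location, month
""").fetchall()
```

Surfacing month-by-month counts like this is what makes seasonality in case volumes visible, as described above.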


Subject(s)
Anesthesia , Anesthesiology , Electronic Health Records , Benchmarking , Operating Rooms