Results 1 - 20 of 53
1.
Article in English | MEDLINE | ID: mdl-38643047

ABSTRACT

BACKGROUND: Few studies have described the insights of frontline health care providers and patients on how the diagnostic process can be improved in the emergency department (ED), a setting at high risk for diagnostic errors. The authors aimed to identify the perspectives of providers and patients on the diagnostic process and identify potential interventions to improve diagnostic safety. METHODS: Semistructured interviews were conducted with 10 ED physicians, 15 ED nurses, and 9 patients/caregivers at two separate health systems. Interview questions were guided by the ED-Adapted National Academies of Sciences, Engineering, and Medicine Diagnostic Process Framework and explored participant perspectives on the ED diagnostic process, identified vulnerabilities, and solicited interventions to improve diagnostic safety. The authors performed qualitative thematic analysis on transcribed interviews. RESULTS: The research team categorized vulnerabilities in the diagnostic process and intervention opportunities based on the ED-Adapted Framework into five domains: (1) team dynamics and communication (for example, suboptimal communication between referring physicians and the ED team); (2) information gathering related to patient presentation (for example, obtaining the history from the patients or their caregivers); (3) ED organization, system, and processes (for example, staff schedules and handoffs); (4) patient education and self-management (for example, patient education at discharge from the ED); and (5) electronic health record and patient portal use (for example, automatic release of test results into the patient portal). The authors identified 33 potential interventions, of which 17 were provider focused and 16 were patient focused. CONCLUSION: Frontline providers and patients identified several vulnerabilities and potential interventions to improve ED diagnostic safety. Refining, implementing, and evaluating the efficacy of these interventions are required.

2.
J Crit Care ; 82: 154784, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38503008

ABSTRACT

BACKGROUND: Vancomycin is a renally eliminated, nephrotoxic, glycopeptide antibiotic with a narrow therapeutic window, widely used in intensive care units (ICU). We aimed to predict the risk of inappropriate vancomycin trough levels and appropriate dosing for each ICU patient. METHODS: Observed vancomycin trough levels were categorized into sub-therapeutic, therapeutic, and supra-therapeutic levels to train and compare different classification models. We included adult ICU patients (≥ 18 years) with at least one vancomycin concentration measurement during hospitalization at Mayo Clinic, Rochester, MN, from January 2007 to December 2017. RESULTS: The final cohort consisted of 5337 vancomycin courses. The XGBoost models outperformed other machine learning models with AUC-ROC values of 0.85 and 0.83, specificity of 53% and 47%, and sensitivity of 94% and 94% for the sub- and supra-therapeutic categories, respectively. Kinetic estimated glomerular filtration rate and other creatinine-based measurements, vancomycin regimen (dose and interval), comorbidities, body mass index, age, sex, and blood pressure were among the most important variables in the models. CONCLUSION: We developed models to assess the risk of sub- and supra-therapeutic vancomycin trough levels to improve the accuracy of drug dosing in critically ill patients.


Subject(s)
Anti-Bacterial Agents , Intensive Care Units , Machine Learning , Vancomycin , Humans , Vancomycin/pharmacokinetics , Vancomycin/administration & dosage , Vancomycin/blood , Female , Male , Anti-Bacterial Agents/administration & dosage , Anti-Bacterial Agents/pharmacokinetics , Middle Aged , Aged , Critical Illness , Drug Monitoring/methods , Adult , Retrospective Studies
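The label-construction step described above (mapping measured troughs into three classes) can be sketched with a simple threshold rule. The 10-20 mg/L target window below is an illustrative assumption; the abstract does not state the cut-offs the authors used:

```python
def categorize_trough(level_mg_l, low=10.0, high=20.0):
    """Map a vancomycin trough (mg/L) to the three class labels
    used for model training.  The 10-20 mg/L window is a common
    target range, assumed here for illustration."""
    if level_mg_l < low:
        return "sub-therapeutic"
    if level_mg_l > high:
        return "supra-therapeutic"
    return "therapeutic"
```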
3.
J Crit Care ; 75: 154278, 2023 06.
Article in English | MEDLINE | ID: mdl-36774817

ABSTRACT

PURPOSE: We developed and validated two parsimonious algorithms to predict the time of diagnosis of any stage of acute kidney injury (any-AKI) or moderate-to-severe AKI in clinically actionable prediction windows. MATERIALS AND METHODS: In this retrospective single-center cohort of adult ICU admissions, we trained two gradient-boosting models: 1) an any-AKI model, predicting the risk of any-AKI at least 6 h before diagnosis (50,342 admissions), and 2) a moderate-to-severe AKI model, predicting the risk of moderate-to-severe AKI at least 12 h before diagnosis (39,087 admissions). Performance was assessed before disease diagnosis and validated prospectively. RESULTS: The models achieved an area under the receiver operating characteristic curve (AUROC) of 0.756 six hours before any-AKI diagnosis and 0.721 twelve hours before moderate-to-severe AKI diagnosis. Prospectively, both models had high positive predictive values (0.796 and 0.546 for the any-AKI and moderate-to-severe AKI models, respectively) and triggered more in patients who developed AKI vs. those who did not (median of 1.82 [IQR 0-4.71] vs. 0 [IQR 0-0.73] and 2.35 [IQR 0.14-4.96] vs. 0 [IQR 0-0.8] triggers per 8 h for the any-AKI and moderate-to-severe AKI models, respectively). CONCLUSIONS: The two AKI prediction models have good discriminative performance using common features, which can aid in accurately and informatively monitoring AKI risk in ICU patients.


Subject(s)
Acute Kidney Injury , Hospitalization , Adult , Humans , Retrospective Studies , Prospective Studies , ROC Curve , Acute Kidney Injury/diagnosis , Intensive Care Units
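The AUROC figures reported above can be computed directly as a rank statistic, without plotting an ROC curve; a minimal sketch:

```python
def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case outranks
    a randomly chosen negative case (ties count one half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```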
4.
J Med Syst ; 46(11): 72, 2022 Sep 26.
Article in English | MEDLINE | ID: mdl-36156743

ABSTRACT

Noninvasive, continuous hemoglobin (SpHb) concentration monitors have recently emerged as an alternative to invasive laboratory-based hematological analysis. Unlike delayed laboratory-based measures of hemoglobin (HgB), SpHb monitors can provide real-time information about HgB levels. Real-time SpHb measurements offer healthcare providers warnings and early detection of abnormal health status, e.g., hemorrhagic shock or anemia, and thus support therapeutic decision-making and help save lives. However, the finger-worn CO-Oximeter sensors used in SpHb monitors often become detached or must be removed, causing missing data in the continuous SpHb measurements. Missing data reduce trust in the accuracy of the device and undermine both the effectiveness of hemorrhage interventions and future HgB predictions. A combined imputation and prediction method is investigated to handle missing values and improve prediction accuracy. Gaussian process and functional regression methods are proposed to impute missing SpHb data and make predictions of laboratory-based HgB measurements. Within the proposed framework, multiple choices of sub-models are considered and compared, and usage recommendations are provided accordingly. The proposed method shows a significant improvement in accuracy in a real-data study. The modeling framework can be extended to other application scenarios with missing values.


Subject(s)
Hemoglobins , Oximetry , Hemoglobins/analysis , Hemorrhage , Humans , Monitoring, Physiologic/methods , Normal Distribution
5.
J Eval Clin Pract ; 28(1): 120-128, 2022 02.
Article in English | MEDLINE | ID: mdl-34309137

ABSTRACT

BACKGROUND: Hospitals face the challenge of managing demand for limited computed tomography (CT) resources from multiple patient types while ensuring timely access. METHODS: A discrete event simulation model was created to evaluate CT access time for emergency department (ED) patients at a large academic medical center with six unique CT machines that serve unscheduled emergency, semi-scheduled inpatient, and scheduled outpatient demand. Three operational interventions were tested: adding patient transporters, using an alternative creatinine lab, and adding a registered nurse (RN) dedicated to monitoring CT patients in the ED. RESULTS: All interventions improved access times. Adding one or two transporters improved ED access times by up to 9.8 minutes (Mann-Whitney (MW) CI: [-11.0, -8.7]) and 10.3 minutes (MW CI: [-11.5, -9.2]), respectively. The alternative creatinine and RN interventions provided 3-minute (MW CI: [-4.0, -2.0]) and 8.5-minute (MW CI: [-9.7, -8.3]) improvements. CONCLUSIONS: Adding one transporter provided the greatest combination of reduced delay and ease of implementation. The projected simulation improvements have been realized in practice.


Subject(s)
Emergency Service, Hospital , Radiology , Computer Simulation , Humans , Radiography , Tomography, X-Ray Computed
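The queueing core of a discrete event simulation like the one above can be sketched with a priority queue of scanner-free times. The arrival times, fixed scan duration, and scanner count below are illustrative placeholders, not the study's parameters:

```python
import heapq

def simulate_ct(arrivals, scan_minutes, n_scanners):
    """Deterministic sketch of a CT queue: patients arrive at the
    given times, each scan takes a fixed duration, and each patient
    takes the earliest-free scanner.  Returns each patient's wait
    before entering the scanner, in arrival order."""
    free_at = [0.0] * n_scanners      # min-heap of scanner-free times
    heapq.heapify(free_at)
    waits = []
    for t in sorted(arrivals):
        free = heapq.heappop(free_at)  # earliest-available scanner
        start = max(t, free)
        waits.append(start - t)
        heapq.heappush(free_at, start + scan_minutes)
    return waits
```

A full model would add stochastic arrivals, transport delays, and priority classes on top of this loop.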
6.
J Biomed Inform ; 126: 103975, 2022 02.
Article in English | MEDLINE | ID: mdl-34906736

ABSTRACT

Uncontrolled hemorrhage is a leading cause of preventable death among patients with trauma. Early recognition of hemorrhage can aid in the decision to administer blood transfusion and improve patient outcomes. To provide real-time measurement and continuous monitoring of hemoglobin concentration, the non-invasive and continuous hemoglobin (SpHb) measurement device has drawn extensive attention in clinical practice. However, the accuracy of such a device varies in different scenarios, so the use is not yet widely accepted. This article focuses on using statistical nonparametric models to improve the accuracy of SpHb measurement device by considering measurement bias among instantaneous measurements and individual evolution trends. In the proposed method, the robust locally estimated scatterplot smoothing (LOESS) method and the Kernel regression model are considered to address those issues. Overall performance of the proposed method was evaluated by cross-validation, which showed a substantial improvement in accuracy with an 11.3% reduction of standard deviation, 23.7% reduction of mean absolute error, and 28% reduction of mean absolute percentage error compared to the original measurements. The effects of patient demographics and initial medical condition were analyzed and deemed to not have a significant effect on accuracy. Because of its high accuracy, the proposed method is highly promising to be considered to support transfusion decision-making and continuous monitoring of hemoglobin concentration. The method also has promise for similar advancement of other diagnostic devices in healthcare.


Subject(s)
Hemoglobins , Oximetry , Hematologic Tests , Hemoglobins/analysis , Hemorrhage , Humans , Oximetry/methods
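The kernel-regression half of the method above can be sketched as a Nadaraya-Watson estimator (the robust LOESS component is omitted here); the bandwidth is a tuning parameter, not a value from the paper:

```python
import math

def kernel_smooth(xs, ys, x0, bandwidth):
    """Nadaraya-Watson kernel regression with a Gaussian kernel:
    a weighted average of observed ys, weighted by how close each
    x is to the query point x0."""
    weights = [math.exp(-0.5 * ((x - x0) / bandwidth) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)
```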
7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2386-2391, 2021 11.
Article in English | MEDLINE | ID: mdl-34891762

ABSTRACT

Clinicians and staff who work in intense hospital settings such as the emergency department (ED) are under extended mental and physical pressure every day. They may spend hours in physically demanding care of patients with severe injuries, or stay in front of a computer to review patients' clinical histories and update their electronic health records (EHR). Nurses may work multiple consecutive days of 9-12 hours. The pressure is such that staff often end up taking days off to recover. Both extremes of low and high physical activity have been shown to affect the physical and mental health of clinicians and may even lead to fatigue and burnout. In this study, Real-Time Location Systems (RTLS) are used for the first time to study the amount of physical activity exerted by clinicians. RTLS have traditionally been used in hospital settings for locating staff and equipment; our proposed method combines time and location information to estimate the duration, length, and speed of movements within hospital wards such as the ED. It is also our first step toward utilizing non-wearable devices to measure sedentary behavior inside the ED. This information helps assess the workload on the care team and identify means to reduce the risk of performance compromise, fatigue, and burnout. We used one year's worth of raw RFID data covering the movement records of 38 physicians, 13 residents, 163 nurses, and 33 staff in the ED. We defined a walking path as a continuous sequence of movements and stops and identified separate walking paths for each individual on each day. Walking duration, distance, and speed, along with the number of steps and the duration of sedentary behavior, are then estimated for each walking path.
We compared our results to values reported in the literature and showed that, despite the low spatial resolution of RTLS, our non-invasive estimations are closely comparable to those measured by Fitbit or other wearable pedometers. Clinical Relevance - Adequate assessment of workload in a dynamic care delivery space plays an important role in ensuring safe and optimal care delivery [7]. Systems capable of measuring physical activity continuously during daily work can provide valuable information for a variety of purposes, including automated assessment of sedentary behavior and early detection of work pressure. Such systems could help facilitate targeted changes in staffing levels and the duration of working shifts, leading to a safer and healthier environment for both clinicians and patients.


Subject(s)
Physicians , Walking , Computer Systems , Emergency Service, Hospital , Exercise , Humans
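The walking-path estimation described above can be sketched from timestamped location pings. The (t, x, y) ping format and the 60-second stop threshold below are assumptions for illustration, not the study's parameters:

```python
import math

def path_metrics(pings, stop_gap_s=60.0):
    """Estimate walking distance (m), walking duration (s), and mean
    speed (m/s) from timestamped (t, x, y) location pings, treating
    any gap longer than stop_gap_s as a stop (excluded from walking
    time and distance)."""
    dist = walk_time = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(pings, pings[1:]):
        dt = t1 - t0
        if dt <= stop_gap_s:
            dist += math.hypot(x1 - x0, y1 - y0)
            walk_time += dt
    speed = dist / walk_time if walk_time else 0.0
    return dist, walk_time, speed
```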
8.
Am J Nephrol ; 52(9): 753-762, 2021.
Article in English | MEDLINE | ID: mdl-34569522

ABSTRACT

INTRODUCTION: Comparing current to baseline serum creatinine is important in detecting acute kidney injury. In this study, we report a regression-based machine learning model to predict baseline serum creatinine. METHODS: We developed and internally validated a gradient boosting model on patients admitted in Mayo Clinic intensive care units from 2005 to 2017 to predict baseline creatinine. The model was externally validated on the Medical Information Mart for Intensive Care III (MIMIC III) cohort in all ICU admissions from 2001 to 2012. The predicted baseline creatinine from the model was compared with measured serum creatinine levels. We compared the performance of our model with that of the backcalculated estimated serum creatinine from the Modification of Diet in Renal Disease (MDRD) equation. RESULTS: Following ascertainment of eligibility criteria, 44,370 patients from the Mayo Clinic and 6,112 individuals from the MIMIC III cohort were enrolled. Our model used 6 features from the Mayo Clinic and MIMIC III datasets, including the presence of chronic kidney disease, weight, height, and age. Our model had significantly lower error than the MDRD backcalculation (mean absolute error [MAE] of 0.248 vs. 0.374 in the Mayo Clinic test data; MAE of 0.387 vs. 0.465 in the MIMIC III cohort) and higher correlation (intraclass correlation coefficient [ICC] of 0.559 vs. 0.050 in the Mayo Clinic test data; ICC of 0.357 vs. 0.030 in the MIMIC III cohort). DISCUSSION/CONCLUSION: Using machine learning models, baseline serum creatinine could be estimated with higher accuracy than the backcalculated estimated serum creatinine level.


Subject(s)
Creatinine/blood , Machine Learning , Adult , Aged , Aged, 80 and over , Cohort Studies , Female , Hospitalization , Humans , Male , Middle Aged
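The MDRD back-calculation used as the comparator above inverts the 4-variable MDRD equation under an assumed "normal" eGFR; a sketch, where the assumed eGFR of 75 mL/min/1.73 m^2 is the conventional choice in the back-calculation literature, not a value stated in the abstract:

```python
def mdrd_backcalc_scr(age, female, black, assumed_egfr=75.0):
    """Back-calculate a baseline serum creatinine (mg/dL) from the
    4-variable MDRD equation:
        eGFR = 175 * Scr^-1.154 * age^-0.203 * 0.742[female] * 1.212[Black]
    by assuming a fixed 'normal' eGFR and solving for Scr."""
    k = 175.0 * age ** -0.203
    if female:
        k *= 0.742
    if black:
        k *= 1.212
    # eGFR = k * Scr^-1.154  =>  Scr = (eGFR / k)^(-1 / 1.154)
    return (assumed_egfr / k) ** (-1.0 / 1.154)
```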
9.
J Biomed Inform ; 123: 103895, 2021 11.
Article in English | MEDLINE | ID: mdl-34450286

ABSTRACT

BACKGROUND: The progression of many degenerative diseases is tracked periodically using scales evaluating functionality in daily activities. Although estimating the timing of critical events (i.e., disease tollgates) during degenerative disease progression is desirable, the necessary data may not be readily available in scale records. Further, analysis of disease progression poses data challenges, such as censoring and misclassification errors, which need to be addressed to provide meaningful research findings and inform patients. METHODS: We developed a novel binary classification approach to map scale scores into disease tollgates to describe disease progression, leveraging standard/modified Kaplan-Meier analyses. The approach is demonstrated by estimating progression pathways in amyotrophic lateral sclerosis (ALS). The Tollgate-based ALS Staging System (TASS) specifies the critical events (i.e., tollgates) in ALS progression. We first developed a binary classification predicting whether each TASS tollgate was passed given the itemized ALSFRS-R scores, using 514 ALS patients' data from Mayo Clinic-Rochester. Then, we utilized the binary classification to translate/map the ALSFRS-R data of 3,264 patients from the PRO-ACT database into TASS. We derived the time trajectories of ALS progression through tollgates from the augmented PRO-ACT data using Kaplan-Meier analyses. The effects of misclassification errors, condition-dependent dropouts, and censored data in trajectory estimations were evaluated with interval-censored Kaplan-Meier analysis and a multistate model for panel data. RESULTS: The approach using Mayo Clinic data accurately estimated tollgate-passed states of patients given their itemized ALSFRS-R scores (AUCs > 0.90).
The tollgate time trajectories derived from the augmented PRO-ACT dataset provide valuable insights; we predicted that the majority of the ALS patients would have modified arm function (67%) and require assistive devices for walking (53%) by the second year after ALS onset. By the third year, most (74%) ALS patients would occasionally use a wheelchair, while 48% of the ALS patients would be wheelchair-dependent by the fourth year. Assistive speech devices and feeding tubes were needed in 49% and 30% of the patients by the third year after ALS onset, respectively. The onset body region alters some tollgate passage time estimations by 1-2 years. CONCLUSIONS: The estimated tollgate-based time trajectories inform patients and clinicians about prospective assistive device needs and life changes. More research is needed to personalize these estimations according to prognostic factors. Further, the approach can be leveraged in the progression of other diseases.


Subject(s)
Amyotrophic Lateral Sclerosis , Disease Progression , Humans , Prospective Studies , Speech , Walking
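The Kaplan-Meier analyses above rest on the product-limit estimator, which handles right-censored subjects by keeping them in the risk set until their censoring time; a minimal sketch:

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.
    times:  time of event or censoring for each subject
    events: 1 if the event (e.g., tollgate passage) was observed,
            0 if the subject was censored at that time
    Returns (time, S(t)) pairs at each observed event time."""
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        n = sum(1 for ti in times if ti >= t)   # still at risk at t
        if d:
            surv *= 1.0 - d / n
            curve.append((t, surv))
    return curve
```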
10.
JMIR Res Protoc ; 10(6): e24642, 2021 Jun 14.
Article in English | MEDLINE | ID: mdl-34125077

ABSTRACT

BACKGROUND: Diagnostic decision making, especially in emergency departments, is a highly complex cognitive process that involves uncertainty and susceptibility to errors. A combination of factors, including patient factors (eg, history, behaviors, complexity, and comorbidity), provider-care team factors (eg, cognitive load and information gathering and synthesis), and system factors (eg, health information technology, crowding, shift-based work, and interruptions) may contribute to diagnostic errors. Using electronic triggers to identify records of patients with certain patterns of care, such as escalation of care, has been useful to screen for diagnostic errors. Once errors are identified, sophisticated data analytics and machine learning techniques can be applied to existing electronic health record (EHR) data sets to shed light on potential risk factors influencing diagnostic decision making. OBJECTIVE: This study aims to identify variables associated with diagnostic errors in emergency departments using large-scale EHR data and machine learning techniques. METHODS: This study plans to use trigger algorithms within EHR data repositories to generate a large data set of records that are labeled trigger-positive or trigger-negative, depending on whether they meet certain criteria. Samples from both data sets will be validated using medical record reviews, upon which we expect to find a higher number of diagnostic safety events in the trigger-positive subset. Machine learning will be used to evaluate relationships between certain patient factors, provider-care team factors, and system-level risk factors and diagnostic safety signals in the statistically matched groups of trigger-positive and trigger-negative charts. RESULTS: This federally funded study was approved by the institutional review boards of 2 academic medical centers with affiliated community hospitals.
Trigger queries are being developed at both organizations, and sample cohorts will be labeled using the triggers. Machine learning techniques such as association rule mining, chi-square automated interaction detection, and classification and regression trees will be used to discover important variables that could be incorporated within future clinical decision support systems to help identify and reduce risks that contribute to diagnostic errors. CONCLUSIONS: The use of large EHR data sets and machine learning to investigate risk factors (related to the patient, provider-care team, and system-level) in the diagnostic process may help create future mechanisms for monitoring diagnostic safety. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/24642.

11.
J Crit Care ; 62: 283-288, 2021 04.
Article in English | MEDLINE | ID: mdl-33508763

ABSTRACT

PURPOSE: Acute kidney injury (AKI) is a prevalent and detrimental condition in intensive care unit patients. Most AKI predictive models only predict creatinine-triggered AKI (AKICr) and might underperform when predicting urine-output-triggered AKI (AKIUO). We aimed to describe how admission AKICr prediction models perform in all AKI patients. MATERIALS AND METHODS: Three types of models were trained: 1) pAKIany, predicting AKI based on creatinine or urine output, 2) pAKIUO, predicting AKI based only on urine output, and 3) pAKICr, predicting AKI based only on creatinine. We compared model performance and predictive features. RESULTS: The pAKIany models had the best overall performance (AUROC 0.673-0.716) and the most consistent performance across three patient cohorts grouped by type of AKI trigger (min AUROC of 0.636). The pAKICr models had fair performance in predicting AKICr (AUROCs 0.702-0.748) but poor performance predicting AKIUO (AUROCs 0.581-0.695). The predictive features for the pAKICr models and pAKIUO models were distinct, while top features for the pAKIany models were consistently a combination of those for the pAKICr and pAKIUO models. CONCLUSION: Ignoring urine output in the outcome during model training resulted in models that are unlikely to predict AKIUO adequately and may miss a substantial proportion of patients in practice.


Subject(s)
Acute Kidney Injury , Acute Kidney Injury/diagnosis , Creatinine , Critical Care , Hospitalization , Humans , Machine Learning
12.
J Med Syst ; 45(1): 15, 2021 Jan 07.
Article in English | MEDLINE | ID: mdl-33411118

ABSTRACT

The ability of a Real-Time Location System (RTLS) to provide correct information in a clinical environment is an important consideration in evaluating the effectiveness of the technology. While past efforts describe how well the technology performed in a lab environment, its performance has not been specifically defined or evaluated in a practice setting involving workflow and movement. Clinical environments pose complexity owing to varied layouts and movements. Further, RTLS are not equipped to provide true-negative information (where an entity is not located). Hence, this study defined sensitivity and precision in this context and developed a simulation protocol to serve as a systematic testing framework using actors in a clinical environment. The protocol was used to measure the sensitivity and precision of an RTLS in the emergency department of a quaternary care medical center. The overall sensitivity and precision were determined to be 84% and 93%, respectively. These values varied across patient rooms, staff areas, hallways, and other rooms.


Subject(s)
Computer Systems , Emergency Service, Hospital , Computer Simulation , Hospitals , Humans , Workflow
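Sensitivity and precision as defined above can be scored against a scripted ground-truth walk. The per-timestamp matching rule below is an assumption for illustration; the study's own definitions may differ:

```python
def rtls_accuracy(truth, detected):
    """Score an RTLS against a scripted ground-truth walk.
    truth, detected: dicts mapping timestamp -> room; detected may
    miss timestamps (the system cannot report true negatives).
    Sensitivity: fraction of ground-truth moments with a correct
    detection.  Precision: fraction of detections that are correct."""
    correct = sum(1 for t, room in detected.items() if truth.get(t) == room)
    sensitivity = correct / len(truth)
    precision = correct / len(detected)
    return sensitivity, precision
```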
13.
J Patient Saf ; 17(8): e1458-e1464, 2021 Dec 01.
Article in English | MEDLINE | ID: mdl-30431553

ABSTRACT

OBJECTIVES: This study was conducted to describe patients at risk for prolonged time alone in the emergency department (ED) and to determine the relationship between clinical outcomes, specifically 30-day hospitalization, and patient alone time (PAT) in the ED. METHODS: An observational cohort design was used to evaluate PAT and patient characteristics in the ED. The study was conducted in a tertiary academic ED with both adult and pediatric facilities and included patients placed in an acute care room for treatment between May 1 and July 31, 2016, excluding behavioral health patients. Simple linear regression and t tests were used to evaluate the relationship between patient characteristics and PAT. Logistic regression was used to evaluate the relationship between 30-day hospitalization and PAT. RESULTS: Pediatric patients had the shortest total PAT compared with all older age groups (86.4 minutes versus 131 minutes, P < 0.001). Relationships were seen between PAT and patient characteristics, including age, geographic region, and the severity and complexity of the health condition. Controlling for the Charlson comorbidity index and other potentially confounding variables, a logistic regression model showed that patients are more likely to be hospitalized within 30 days after their ED visit, with an odds ratio (95% confidence interval) of 1.056 (1.017-1.097) for each additional hour of PAT. CONCLUSIONS: Patient alone time is not equal among all patient groups. Study results indicate that PAT is significantly associated with 30-day hospitalization. This indicates that PAT may affect patient outcomes and warrants further investigation.


Subject(s)
Emergency Service, Hospital , Hospitalization , Adult , Aged , Child , Cohort Studies , Humans , Odds Ratio , Retrospective Studies
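The reported odds ratio of 1.056 per additional hour corresponds to a logistic-regression coefficient of ln(1.056) ≈ 0.054 on the log-odds scale; the conversion can be sketched as:

```python
import math

def odds_ratio_per_unit(beta, units=1.0):
    """Convert a logistic-regression coefficient (log-odds change
    per unit of the predictor) into an odds ratio for a change of
    'units' in the predictor: OR = exp(beta * units)."""
    return math.exp(beta * units)
```

For example, under the abstract's estimate, two additional hours of PAT would correspond to an odds ratio of about 1.056 squared.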
14.
Article in English | MEDLINE | ID: mdl-35463194

ABSTRACT

Hypertrophic cardiomyopathy (HCM) is the most common genetic heart disease in the US and is a known cause of sudden cardiac death (SCD) in young adults. While significant advances have been made in HCM diagnosis and management, there is a need to identify HCM cases from electronic health record (EHR) data so that automated tools based on natural language processing (NLP)-guided machine learning (ML) models can support accurate case identification, improve management, and reduce adverse outcomes for HCM patients. Cardiac magnetic resonance (CMR) imaging plays a significant role in HCM diagnosis and risk stratification. CMR reports, generated by clinician annotation, offer rich data in the form of cardiac measurements as well as narratives describing interpretation and phenotypic description. The purpose of this study is to develop an NLP-based interpretable model utilizing impressions extracted from CMR reports to automatically identify HCM patients. CMR reports of patients with suspected HCM between 1995 and 2019 were used in this study. Patients were classified into three categories: yes HCM, no HCM, and possible HCM. A random forest (RF) model was developed to evaluate the performance of both CMR measurements and impression features in identifying HCM patients. The RF model yielded an accuracy of 86% (608 features) and 85% (30 features). These results offer promise for accurate identification of HCM patients from CMR reports in the EHR, supporting efficient clinical management of these patients.

15.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 6070-6073, 2020 07.
Article in English | MEDLINE | ID: mdl-33019355

ABSTRACT

Increasing workload is one of the main problems that surgical practices face. This increase is due not only to growing demand volume but also to increasing case complexity, which raises the question of how to measure and predict complexity. Predicting surgical duration is critical to parametrizing surgical complexity, improving surgeon satisfaction by avoiding unexpected overtime, and improving operating room utilization. Our objective is to utilize historical data on surgical operations to obtain complexity groups and use these groups to improve practice. Our study first leverages expert opinion on surgical complexity to identify surgical groups. Then, we use a tree-based method on a large retrospective dataset to identify similar complexity groups, utilizing surgical features with surgical duration as the response variable. After obtaining surgical groups by the two methods, we statistically compare the expert-based grouping with the data-based grouping. This comparison shows that a tree-based method can provide complexity groups similar to those generated by an expert, using features that are available at the time of surgical listing. These results suggest that one can take advantage of available data to provide surgical duration predictions that are data-driven, evidence-based, and practically relevant.


Subject(s)
Breast Neoplasms , Surgeons , Databases, Factual , Humans , Retrospective Studies , Workload
18.
IEEE J Biomed Health Inform ; 24(10): 3029-3037, 2020 10.
Article in English | MEDLINE | ID: mdl-32750911

ABSTRACT

Hospital emergency department (ED) operations are affected when critically ill or injured patients arrive. Such events often lead to the initiation of specific protocols, referred to as Resuscitation-team Activation (RA), in the ED of Mayo Clinic, Rochester, MN where this study was conducted. RA events lead to the diversion of resources from other patients in the ED to provide care to critically ill patients; therefore, it has an impact on the entire ED system. This paper presents a data-driven and flexible statistical learning model to quantify the impact of RA on the ED. The model learns the pattern of operations in the ED from historical patient arrival and departure timestamps and quantifies the impact of RA by measuring the deviation of the departure of patients during RA from normal processes. The proposed method significantly outperforms baseline methods based on measuring the average time patients spend in the ED.


Subject(s)
Critical Illness/therapy , Emergency Service, Hospital/statistics & numerical data , Hospital Rapid Response Team/statistics & numerical data , Models, Statistical , Resuscitation , Humans , Time Factors
19.
Emerg Med J ; 37(9): 552-554, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32571784

ABSTRACT

BACKGROUND: Emergency department (ED) operations leaders are under increasing pressure to make care delivery more efficient. Publicly reported ED efficiency metrics are traditionally patient centred and do not show situational or facility-based improvement opportunities. We propose the consideration of a novel metric, the 'Number of Unnecessary Waits (NUW)' and the corresponding 'Unnecessary Wait Hours (UWH)', to measure space efficiency, and we describe how we used NUW to evaluate operational changes in our ED. METHODS: UWH summarises the relationship between the number of available rooms and the number of patients waiting by returning a value equal to the number of unnecessary patient waits. We used this metric to evaluate reassigning a clinical technician assistant (CTA) to the new role of flow CTA. RESULTS: We retrospectively analysed 3.5 months of data from before and after creation of the flow CTA. NUW metric analysis suggested that the flow CTA decreased the amount of unnecessary wait hours, while higher patient volumes had the opposite effect. CONCLUSIONS: Situational system-level metrics may provide a new dimension to evaluating ED operational efficiencies. Studies focussed on system-level metrics to evaluate an ED practice are needed to understand the role these metrics play in evaluation of a department's operations.


Subject(s)
Efficiency, Organizational/statistics & numerical data , Emergency Service, Hospital/organization & administration , Waiting Lists , Bed Occupancy/statistics & numerical data , Humans , Minnesota
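The abstract does not give a formula for NUW. One plausible reading, in which a wait is "unnecessary" whenever a patient waits while a staffed room sits open, can be sketched as follows; the snapshot representation is an assumption for illustration:

```python
def unnecessary_waits(snapshots):
    """Sketch of an NUW-style metric: at each sampled moment, count
    the patients who are waiting while rooms are simultaneously
    available; a wait is 'unnecessary' only up to the number of
    open rooms.  snapshots: (patients_waiting, rooms_available)
    pairs sampled over time."""
    return sum(min(waiting, rooms) for waiting, rooms in snapshots)
```

Summing these counts weighted by the sampling interval would give an hours-based quantity in the spirit of UWH.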
20.
Brachytherapy ; 19(4): 518-531, 2020.
Article in English | MEDLINE | ID: mdl-32423786

ABSTRACT

PURPOSE: A Pareto Navigation and Visualization (PNaV) tool is presented for interactively constructing a high-dose-rate (HDR) brachytherapy treatment plan by navigating and visualizing the multidimensional Pareto surface. PNaV aims to improve treatment planning time and quality and is generalizable to any number of dose-volume histogram (DVH) and convex dose metrics. METHODS AND MATERIALS: Pareto surface visualization and navigation were demonstrated for prostate, breast, and cervix HDR brachytherapy sites. A library of treatment plans was created to span the Pareto surfaces over a 30% range of doses in each of five DVH metrics. The PNaV method, which uses a nonnegative least-squares model to interpolate the library plans, was compared against pure optimization for 11,250 navigated plans using data envelopment analysis. The visualization of the metric trade-offs was accomplished using numerically estimated partial derivatives to plot the local curvature of the Pareto surface. PNaV enables the user to control both the magnitude and direction of the trade-off during navigation. RESULTS: Proof of principle of PNaV was demonstrated using a graphical user interface with visualization tools that enabled rapid plan selection and a quantitative review of metric trade-offs. PNaV produced deliverable plans with DVH metrics within 0.4%, 0.6%, and 1.1% (95% confidence interval) of the Pareto surface using plan libraries with nominal plan spacing of 10%, 15%, and 30% in each metric dimension, respectively. The interpolation used for the navigation executed in 0.1 s. The fast interpolation allows for quick and efficient exploration of trade-off options by the physician, after an initial preprocessing step to generate the library. CONCLUSIONS: Generation, visualization, and navigation of the Pareto surface were validated for brachytherapy treatment planning. The PNaV method enables efficient and informed decision-making for radiotherapy.


Subject(s)
Brachytherapy , Breast Neoplasms/radiotherapy , Prostatic Neoplasms/radiotherapy , Radiotherapy Planning, Computer-Assisted/methods , Uterine Cervical Neoplasms/radiotherapy , Algorithms , Female , Humans , Male , Mathematical Concepts , Radiotherapy Dosage