Results 1 - 20 of 170
1.
AMIA Jt Summits Transl Sci Proc; 2024: 276-284, 2024.
Article in English | MEDLINE | ID: mdl-38827056

ABSTRACT

OBJECTIVES: To automatically populate the case report forms (CRFs) for an international, pragmatic, multifactorial, response-adaptive, Bayesian COVID-19 platform trial. METHODS: The sites of focus comprised 27 hospitals and 2 large electronic health record (EHR) instances (1 Cerner Millennium and 1 Epic) that are part of the same health system in the United States. This paper describes our efforts to use EHR data to automatically populate four of the trial's forms: baseline, daily, discharge, and response-adaptive randomization. RESULTS: Between April 2020 and May 2022, 417 patients from the UPMC health system were enrolled in the trial. A MySQL-based extract, transform, and load pipeline automatically populated 499 of 526 CRF variables. The populated forms were statistically and manually reviewed and then reported to the trial's international data coordinating center. CONCLUSIONS: We accomplished automatic population of CRFs in a large platform trial and made recommendations for improving this process for future trials.

2.
Article in English | MEDLINE | ID: mdl-38559667

ABSTRACT

Sepsis is a major public health emergency and one of the leading causes of morbidity and mortality in critically ill patients. For each hour treatment is delayed, shock-related mortality increases, so early diagnosis and intervention are of utmost importance. However, earlier recognition of shock requires active monitoring, which may be delayed by the subclinical manifestations of the disease at the early phase of onset. Machine learning systems can increase timely detection of shock onset by exploiting complex interactions among continuous physiological waveforms. We use a dataset of high-resolution physiological waveforms from the intensive care units (ICUs) of a tertiary hospital system. We investigate the use of mean arterial blood pressure (MAP), pulse arrival time (PAT), heart rate variability (HRV), and heart rate (HR) for the early prediction of shock onset. Using only five minutes of these vital signals from 239 ICU patients, our models accurately predict septic shock onset 6 and 36 hours prior to clinical recognition, with areas under the receiver operating characteristic curve (AUROC) of 0.84 and 0.80, respectively. This work lays the foundation for robust, efficient, accurate, and early prediction of septic shock onset, which may support clinicians in their decision-making: BP, PAT, and HR dynamics can independently predict septic shock onset up to 36 hours in advance with a look-back period of only 5 minutes.
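Time-domain HRV features of the kind modeled here are computed from interbeat (RR) intervals; below is a minimal sketch of two standard metrics, SDNN and RMSSD (the function names and sample values are illustrative, not taken from the study):

```python
import math

def sdnn(rr_ms):
    """Standard deviation of RR intervals (SDNN), in ms."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / (len(rr_ms) - 1))

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences (RMSSD), in ms."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Five simulated interbeat intervals (ms); a real model would use ~5 min of beats
rr = [812, 798, 825, 790, 840]
print(round(sdnn(rr), 1), round(rmssd(rr), 1))
```

In practice such features would be computed over sliding windows of the beat-detected waveform and fed to the classifier alongside MAP, PAT, and HR.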

3.
Crit Care; 28(1): 113, 2024 04 08.
Article in English | MEDLINE | ID: mdl-38589940

ABSTRACT

BACKGROUND: Perhaps nowhere else in the healthcare system are the challenges of creating useful models with direct, time-critical clinical applications more relevant, and the obstacles to achieving those goals more massive, than in the intensive care unit. Machine learning-based artificial intelligence (AI) techniques to define states and predict future events are commonplace activities of modern life. However, their penetration into acute care medicine has been slow, stuttering, and uneven. Major obstacles to widespread, effective application of AI approaches to the real-time care of the critically ill patient exist and need to be addressed. MAIN BODY: Clinical decision support systems (CDSSs) in acute and critical care environments support clinicians; they do not replace them at the bedside. As discussed in this review, the reasons are many: the immaturity of AI-based systems with respect to situational awareness; the fundamental bias in many large databases, which do not reflect the target population of patients being treated, making fairness an important issue to address; and technical barriers to timely access to valid data and its display in a fashion useful for clinical workflow. The inherent "black-box" nature of many predictive algorithms and CDSSs makes trustworthiness and acceptance by the medical community difficult. Logistically, collating and curating, in real time, the multidimensional data streams from various sources needed to inform the algorithms, and ultimately displaying relevant clinical decision support in a format that adapts to individual patient responses and signatures, represent the efferent limb of these systems and are often ignored during initial validation efforts. Similarly, legal and commercial barriers to access to many existing clinical databases limit studies that address the fairness and generalizability of predictive models and management tools. CONCLUSIONS: AI-based CDSSs are evolving and are here to stay.
It is our obligation to be good shepherds of their use and further development.


Subject(s)
Algorithms , Artificial Intelligence , Humans , Critical Care , Intensive Care Units , Delivery of Health Care
4.
Crit Care Explor; 6(4): e1073, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38545607

ABSTRACT

OBJECTIVES: Early signs of bleeding are often masked by physiologic compensatory responses, delaying its identification. We sought to describe early physiologic signatures of bleeding during the blood donation process. SETTING: Waveform-level vital sign data, including electrocardiography, photoplethysmography (PPG), continuous noninvasive arterial pressure, and respiratory waveforms, were collected before, during, and after bleeding. SUBJECTS: Fifty-five healthy volunteers visited a blood donation center to donate whole blood. INTERVENTION: After informed consent was obtained, each subject rested for 3 minutes, underwent 3 minutes of orthostasis, and then rested for another 3 minutes before blood donation. After donation was complete, subjects rested for another 3 minutes and then underwent a second 3-minute orthostasis period. MEASUREMENTS AND MAIN RESULTS: From the 55 subjects, waveform signals, numerical vital signs (heart rate [HR], respiratory rate, blood pressure), and clinical characteristics were collected; data from 51 subjects were analyzable. Adverse events (AEs; dizziness, lightheadedness, nausea) were documented. Statistical and physiologic features, including HR variability (HRV) metrics and other waveform morphologic parameters, were modeled, and feature trends across the study protocol were analyzed for all participants. No significant changes in HR, blood pressure, or estimated cardiac output were seen during bleeding. Both orthostatic challenges and bleeding significantly decreased time-domain and high-frequency-domain HRV and PPG amplitude, while increasing PPG amplitude variation. During bleeding, time-domain HRV feature trends were most sensitive to the first 100 mL of blood loss; statistically significant incremental changes were seen in several HRV parameters (from 300 mL of blood loss) and in a PPG morphologic feature (from 400 mL of blood loss). The AE group (n = 6) showed decreased sample entropy compared with the non-AE group during the postbleed orthostatic challenge (p = 0.003). No other significant trend differences were observed during bleeding between the AE and non-AE groups. CONCLUSIONS: Various HRV-related features changed within the first minute of rapid bleeding. Subjects with AEs during postbleeding orthostasis showed decreased sample entropy. These findings could be leveraged toward earlier identification of donors at risk for AEs and, more broadly, toward building a data-driven hemorrhage model for the early treatment of critical bleeding.
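Sample entropy, the complexity measure that separated the AE group here, quantifies the irregularity of a signal; the following is a generic textbook-style implementation (embedding dimension m, tolerance r), offered as a sketch rather than the authors' code:

```python
import math

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) = -ln(A/B), where B counts template pairs of length m
    within tolerance r (Chebyshev distance, self-matches excluded) and A
    counts pairs of length m + 1. Lower values indicate a more regular signal."""
    n = len(x)
    if r is None:
        mean = sum(x) / n
        r = 0.2 * math.sqrt(sum((v - mean) ** 2 for v in x) / n)  # 20% of SD

    def pairs_within(length):
        templates = [x[i:i + length] for i in range(n - length + 1)]
        return sum(
            max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r
            for i in range(len(templates))
            for j in range(i + 1, len(templates))
        )

    b, a = pairs_within(m), pairs_within(m + 1)
    return float("inf") if a == 0 or b == 0 else -math.log(a / b)

# A strictly alternating signal is far more regular than an arbitrary one
regular = [0, 1] * 15
irregular = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3, 2, 3, 8, 4]
print(sample_entropy(regular) < sample_entropy(irregular))  # prints True
```

The brute-force pair count is O(n^2); production HRV toolkits use optimized variants, but the definition is the same.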

5.
Shock; 61(1): 76-82, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-38010054

ABSTRACT

Objective: To investigate whether pediatric sepsis phenotypes are stable over time. Methods: Retrospective cohort study examining children with suspected sepsis admitted to the pediatric intensive care unit (PICU) at a large freestanding children's hospital during two distinct periods: 2010-2014 (early cohort) and 2018-2020 (late cohort). K-means consensus clustering was used to derive phenotypes separately in the two cohorts. Variables were chosen to ensure representation of all organ systems. Results: There were 1,091 subjects in the early cohort and 737 in the late cohort. Clustering analysis yielded four phenotypes in the early cohort and five in the late cohort. Four types appeared in both: type A (34% of early cohort, 25% of late cohort), mild sepsis, with minimal organ dysfunction and low mortality; type B (25%, 22%), primary respiratory failure; type C (25%, 18%), liver dysfunction, coagulopathy, and higher measures of systemic inflammation; and type D (16%, 17%), severe multiorgan dysfunction, with high degrees of cardiorespiratory support, renal dysfunction, and the highest mortality. Type E was detected only in the late cohort (19%) and was notable for respiratory failure less severe than in B or D, mild hypothermia, and a high proportion of diagnoses and technology dependence associated with medical complexity. Despite low mortality, this type had the longest PICU length of stay. Conclusions: This single-center study identified four pediatric sepsis phenotypes in an earlier epoch but five in a later epoch, with the new type having a large proportion of characteristics associated with medical complexity, particularly technology dependence. Personalized sepsis therapies need to account for this expanding patient population.
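K-means consensus clustering works by reclustering the data many times and asking how consistently pairs of subjects land in the same cluster; a toy, dependency-free sketch of the idea follows (the data and function names are invented, and this is far simpler than the study's pipeline):

```python
import random

def kmeans(points, k, seed, iters=25):
    """Plain k-means on a list of feature tuples; returns one label per point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k),
                      key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
                  for p in points]
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centers[j] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return labels

def consensus_index(points, k, runs=20):
    """Fraction of subject pairs whose co-clustering is perfectly stable
    (always together or never together) across repeated k-means restarts.
    Values near 1 suggest k yields reproducible phenotypes."""
    n = len(points)
    together = [[0] * n for _ in range(n)]
    for seed in range(runs):
        labels = kmeans(points, k, seed)
        for i in range(n):
            for j in range(i + 1, n):
                together[i][j] += labels[i] == labels[j]
    stable = sum(together[i][j] in (0, runs)
                 for i in range(n) for j in range(i + 1, n))
    return stable / (n * (n - 1) // 2)
```

Scanning `consensus_index` over candidate values of k, and picking the k with the cleanest consensus, is the essence of using consensus clustering to choose the number of phenotypes.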


Subject(s)
Respiratory Insufficiency , Sepsis , Child , Humans , Retrospective Studies , Hospital Mortality , Sepsis/therapy , Phenotype , Intensive Care Units, Pediatric , Hospitals, Pediatric
6.
Clin Infect Dis; 78(4): 1011-1021, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-37889515

ABSTRACT

BACKGROUND: Identification of bloodstream infection (BSI) in transplant recipients may be difficult due to immunosuppression. Accordingly, we aimed to compare responses to BSI in critically ill transplant and non-transplant recipients and to modify systemic inflammatory response syndrome (SIRS) criteria for transplant recipients. METHODS: We analyzed univariate risks and developed multivariable models of BSI with 27 clinical variables from adult intensive care unit (ICU) patients at the University of Virginia (UVA) and at the University of Pittsburgh (Pitt). We used Bayesian inference to adjust SIRS criteria for transplant recipients. RESULTS: We analyzed 38.7 million hourly measurements from 41,725 patients at UVA, including 1897 transplant recipients with 193 episodes of BSI, and 53,608 patients at Pitt, including 1614 transplant recipients with 768 episodes of BSI. The univariate responses to BSI were comparable in transplant and non-transplant recipients. The area under the receiver operating characteristic curve (AUC) was 0.82 (95% confidence interval [CI], .80-.83) for the model using all UVA patient data and 0.80 (95% CI, .76-.83) when using only transplant recipient data. The UVA all-patient model had an AUC of 0.77 (95% CI, .76-.79) in non-transplant recipients and 0.75 (95% CI, .71-.79) in transplant recipients at Pitt. The relative importance of the 27 predictors was similar in transplant and non-transplant models. An upper temperature of 37.5°C in SIRS criteria improved reclassification performance in transplant recipients. CONCLUSIONS: Critically ill transplant and non-transplant recipients had similar responses to BSI. An upper temperature of 37.5°C in SIRS criteria improved BSI screening in transplant recipients.
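The proposed adjustment amounts to lowering one cutoff in a SIRS-style screen; a sketch with the classic thresholds plus the paper's 37.5°C upper temperature for transplant recipients (simplified: the PaCO2 and band-count criteria are omitted, and this is not a validated clinical tool):

```python
def sirs_count(temp_c, hr, rr, wbc_k, transplant=False):
    """Count SIRS criteria met, lowering the upper temperature cutoff from
    38.0 to 37.5 C for transplant recipients as the study proposes."""
    upper_temp = 37.5 if transplant else 38.0
    criteria = [
        temp_c > upper_temp or temp_c < 36.0,  # temperature, deg C
        hr > 90,                               # heart rate, beats/min
        rr > 20,                               # respiratory rate, breaths/min
        wbc_k > 12.0 or wbc_k < 4.0,           # WBC, x10^3 cells/uL
    ]
    return sum(criteria)

# A low-grade fever of 37.7 C counts only under the transplant-adjusted cutoff
print(sirs_count(37.7, 95, 18, 9.0))                   # prints 1
print(sirs_count(37.7, 95, 18, 9.0, transplant=True))  # prints 2
```

The point of the example is that immunosuppressed patients may mount blunted febrile responses, so a lower fever threshold recovers sensitivity for BSI screening.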


Subject(s)
Bacteremia , Sepsis , Adult , Humans , Transplant Recipients , Critical Illness , Bayes Theorem , Bacteremia/epidemiology , Bacteremia/diagnosis , Systemic Inflammatory Response Syndrome/diagnosis , Systemic Inflammatory Response Syndrome/epidemiology , Retrospective Studies
7.
J Electrocardiol; 81: 253-257, 2023.
Article in English | MEDLINE | ID: mdl-37883866

ABSTRACT

Despite significant advances in modeling methods and access to large datasets, very few real-time forecasting systems are deployed in highly monitored environments such as the intensive care unit. Forecasting models may be developed as classification, regression, or time-to-event tasks, each using a variety of machine learning algorithms. An accurate and useful forecasting system includes several components beyond the forecasting model itself, and its performance is assessed using end-user-centered metrics. Several barriers to implementation and acceptance persist, and clinicians will play an active role in the successful deployment of this promising technology.


Subject(s)
Algorithms , Electrocardiography , Humans , Forecasting , Machine Learning , Intensive Care Units
8.
Appl Clin Inform; 14(4): 789-802, 2023 08.
Article in English | MEDLINE | ID: mdl-37793618

ABSTRACT

BACKGROUND: Critical instability forecast and treatment can be optimized by artificial intelligence (AI)-enabled clinical decision support. It is important that the user-facing display of AI output facilitate clinical thinking and workflow for all disciplines involved in bedside care. OBJECTIVES: Our objective was to engage multidisciplinary users (physicians, nurse practitioners, physician assistants) in the development of a graphical user interface (GUI) to present an AI-derived risk score. METHODS: Intensive care unit (ICU) clinicians participated in focus groups seeking input on an instability risk forecast presented in a prototype GUI. Two stratified rounds (three focus groups each: nurses only, providers only, then combined) were moderated by a focus group methodologist. After round 1, GUI design changes were made and presented in round 2. Focus groups were recorded and transcribed, and deidentified transcripts were independently coded by three researchers. Codes were coalesced into emerging themes. RESULTS: Twenty-three ICU clinicians participated (11 nurses, 12 medical providers [3 mid-level providers and 9 physicians]). Six themes emerged: (1) analytics transparency, (2) graphical interpretability, (3) impact on practice, (4) value of trend synthesis of dynamic patient data, (5) decisional weight (weighing AI output during decision-making), and (6) display location (usability, concerns about patient/family view of the GUI). Nurses emphasized objective information in the GUI to support communication and optimal GUI location, while providers emphasized the need for recommendation interpretability and raised concern about impairing trainees' critical thinking. All disciplines valued synthesized views of vital signs, interventions, and risk trends but were skeptical of placing decisional weight on AI output until it is proven trustworthy. CONCLUSION: Gaining input from all clinical users is important when designing AI-derived GUIs. Results highlight that intelligent decision support technologies in health care need to be transparent about how they work, easy to read and interpret, and minimally disruptive to current workflow, and that decisional support components should be used as an adjunct to human decision-making.


Subject(s)
Artificial Intelligence , Decision Support Systems, Clinical , Humans , Intensive Care Units , Focus Groups , Decision Making
9.
Crit Care Clin; 39(4): 689-700, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37704334

ABSTRACT

Electronic medical records (EMRs) constitute the electronic version of all medical information included in a patient's paper chart. Electronic health record (EHR) technology has witnessed massive expansion in developed countries, and to a lesser extent in underresourced countries, during the last two decades. We review the factors leading to this expansion; how the emergence of EHRs is affecting several health-care stakeholders; some of the growing pains associated with EHRs, with particular emphasis on the delivery of care to the critically ill; and ongoing developments on the path to improving the quality of research, health-care delivery, and stakeholder satisfaction.


Subject(s)
Electronic Health Records , Humans
10.
J Electrocardiol; 81: 111-116, 2023.
Article in English | MEDLINE | ID: mdl-37683575

ABSTRACT

BACKGROUND: Despite the morbidity associated with acute atrial fibrillation (AF), no models currently exist to forecast its imminent onset. We sought to evaluate the ability of deep learning to forecast the imminent onset of AF with sufficient lead time, which has important implications for inpatient care. METHODS: We utilized the PhysioBank Long-Term AF Database, which contains 24-hour, labeled ECG recordings from patients with a history of AF. AF episodes were defined as ≥5 min of sustained AF. Three deep learning models incorporating convolutional and transformer layers were created for forecasting: two models focused separately on the predictive value of the sinus rhythm segments and of the AF epochs preceding an AF episode, and one model utilized all preceding waveform data as input. Cross-validated performance was evaluated using areas under time-dependent receiver operating characteristic curves (AUC(t)) at 7.5-, 15-, 30-, and 60-minute lead times, precision-recall curves, and imminent AF risk trajectories. RESULTS: There were 367 AF episodes from 84 ECG recordings. In all models, the average risk trajectory of recordings with an AF episode diverged from those without approximately 15 minutes before the episode. The highest AUC was achieved by the sinus rhythm model (AUC = 0.74 at a 7.5-minute lead time), though the model using all preceding waveform data had similar performance and higher AUCs at longer lead times. CONCLUSIONS: In this proof-of-concept study, we demonstrated the potential utility of neural networks for forecasting the onset of AF in long-term ECG recordings with a clinically relevant lead time. External validation in larger cohorts is required before these models are deployed clinically.


Subject(s)
Atrial Fibrillation , Humans , Atrial Fibrillation/diagnosis , Electrocardiography , Neural Networks, Computer , ROC Curve , Time Factors
11.
Nat Med; 29(7): 1804-1813, 2023 07.
Article in English | MEDLINE | ID: mdl-37386246

ABSTRACT

Patients with occlusion myocardial infarction (OMI) and no ST-elevation on presenting electrocardiogram (ECG) are increasing in numbers. These patients have a poor prognosis and would benefit from immediate reperfusion therapy, but, currently, there are no accurate tools to identify them during initial triage. Here we report, to our knowledge, the first observational cohort study to develop machine learning models for the ECG diagnosis of OMI. Using 7,313 consecutive patients from multiple clinical sites, we derived and externally validated an intelligent model that outperformed practicing clinicians and other widely used commercial interpretation systems, substantially boosting both precision and sensitivity. Our derived OMI risk score provided enhanced rule-in and rule-out accuracy relevant to routine care, and, when combined with the clinical judgment of trained emergency personnel, it helped correctly reclassify one in three patients with chest pain. ECG features driving our models were validated by clinical experts, providing plausible mechanistic links to myocardial injury.


Subject(s)
Emergency Service, Hospital , Myocardial Infarction , Humans , Time Factors , Myocardial Infarction/diagnosis , Electrocardiography , Risk Assessment
12.
Res Sq; 2023 Jan 30.
Article in English | MEDLINE | ID: mdl-36778371

ABSTRACT

Patients with occlusion myocardial infarction (OMI) and no ST-elevation on presenting ECG are increasing in numbers. These patients have a poor prognosis and would benefit from immediate reperfusion therapy, but we currently have no accurate tools to identify them during initial triage. Herein, we report the first observational cohort study to develop machine learning models for the ECG diagnosis of OMI. Using 7,313 consecutive patients from multiple clinical sites, we derived and externally validated an intelligent model that outperformed practicing clinicians and other widely used commercial interpretation systems, significantly boosting both precision and sensitivity. Our derived OMI risk score provided superior rule-in and rule-out accuracy compared to routine care, and when combined with the clinical judgment of trained emergency personnel, this score helped correctly reclassify one in three patients with chest pain. ECG features driving our models were validated by clinical experts, providing plausible mechanistic links to myocardial injury.

13.
J Electrocardiol; 76: 35-38, 2023.
Article in English | MEDLINE | ID: mdl-36434848

ABSTRACT

The idea that we can detect subacute potentially catastrophic illness earlier by using statistical models trained on clinical data is now well-established. We review evidence that supports the role of continuous cardiorespiratory monitoring in these predictive analytics monitoring tools. In particular, we review how continuous ECG monitoring reflects the patient and not the clinician, is less likely to be biased, is unaffected by changes in practice patterns, captures signatures of illnesses that are interpretable by clinicians, and is an underappreciated and underutilized source of detailed information for new mathematical methods to reveal.


Subject(s)
Clinical Deterioration , Electrocardiography , Humans , Electrocardiography/methods , Monitoring, Physiologic , Models, Statistical , Artificial Intelligence
14.
J Clin Med; 11(18), 2022 Sep 08.
Article in English | MEDLINE | ID: mdl-36142936

ABSTRACT

Background: General severity-of-illness scores are not well calibrated to predict mortality among patients receiving renal replacement therapy (RRT) for acute kidney injury (AKI). We developed machine learning models for mortality prediction and compared their performance to that of the Sequential Organ Failure Assessment (SOFA) and HEpatic failure, LactatE, NorepInephrine, medical Condition, and Creatinine (HELENICC) scores. Methods: We extracted routinely collected clinical data for AKI patients requiring RRT from the MIMIC and eICU databases. Models were trained on 80% of the pooled dataset and tested on the remainder. We compared the areas under the receiver operating characteristic curves (AUCs) of four machine learning models (multilayer perceptron [MLP], logistic regression, XGBoost, and random forest [RF]) to those of the SOFA, nonrenal SOFA, and HELENICC scores and assessed calibration, sensitivity, specificity, positive (PPV) and negative (NPV) predictive values, and accuracy. Results: Among the machine learning models, the mortality AUC was highest for XGBoost (0.823; 95% confidence interval [CI], 0.791-0.854) in the testing dataset, and XGBoost also had the highest accuracy (0.758). The XGBoost model showed no evidence of lack of fit on the Hosmer-Lemeshow test (p > 0.05). Conclusion: XGBoost provided the highest mortality prediction performance for patients with AKI requiring RRT compared with previous scoring systems.
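The AUC comparisons reported here reduce to the probability that a model scores a randomly chosen death higher than a randomly chosen survivor; a minimal, library-free AUROC via the rank (Mann-Whitney) formulation, for illustration only:

```python
def auroc(labels, scores):
    """Area under the ROC curve computed as P(score_pos > score_neg),
    counting ties as 1/2 (equivalent to the Mann-Whitney U statistic)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly ranked predictions give AUC 1.0
print(auroc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # prints 1.0
```

Real evaluations would use an optimized implementation (the double loop is O(n^2)) and bootstrap resampling for the confidence intervals quoted in the abstract.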

15.
Shock; 58(4): 260-268, 2022 10 01.
Article in English | MEDLINE | ID: mdl-36018286

ABSTRACT

Objective: To examine the risk factors, resource utilization, and 1-year mortality associated with vasopressor-resistant hypotension (VRH) compared with vasopressor-sensitive hypotension (VSH) among critically ill adults with vasodilatory shock. We also examined whether combination vasopressor therapy and patient phenotype were associated with mortality. Design: Retrospective cohort study. Setting: Eight medical-surgical intensive care units at the University of Pittsburgh Medical Center, Pittsburgh, PA. Patients: Critically ill patients with vasodilatory shock admitted between July 2000 and October 2008. Interventions: None. Measurements and Main Results: VRH was defined as requiring greater than 0.2 µg/kg per minute of norepinephrine-equivalent vasopressor dose consecutively for more than 6 hours, and VSH as requiring ≤0.2 µg/kg per minute, to maintain mean arterial pressure between 55 and 70 mm Hg after adequate fluid resuscitation. Of 5,313 patients with vasodilatory shock, 1,291 (24.3%) developed VRH. Compared with VSH, VRH was associated with increased risk of acute kidney injury (72.7% vs. 65.0%; P < 0.001), use of kidney replacement therapy (26.0% vs. 11.0%; P < 0.001), longer median (interquartile range [IQR]) intensive care unit length of stay (10 [IQR, 4.0-20.0] vs. 6 [IQR, 3.0-13.0] days; P < 0.001), and increased 1-year mortality (64.7% vs. 34.8%; P < 0.001). VRH was associated with increased odds of risk-adjusted mortality (adjusted odds ratio [aOR], 2.93; 95% confidence interval [CI], 2.52-3.40; P < 0.001). When compared with monotherapy, combination vasopressor therapy with two (aOR, 0.91; 95% CI, 0.78-1.06) or three or more vasopressors (aOR, 0.93; 95% CI, 0.68-1.27) was not associated with lower mortality.
Using a finite mixture model, we identified four unique phenotypes of patient clusters that differed with respect to demographics, severity of illness, processes of care, vasopressor use, and outcomes. Conclusions: Among critically ill patients with vasodilatory shock, VRH compared with VSH is associated with increased resource utilization and long-term risk of death. However, combination vasopressor therapy was not associated with lower risk of death. We identified four unique phenotypes of patient clusters that require further validation.
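The VRH definition above is operational enough to express directly in code; a sketch that flags VRH given an hourly norepinephrine-equivalent dose series (the conversion of other vasopressors into norepinephrine equivalents is omitted, and the function name is ours):

```python
def is_vrh(hourly_ne_dose, threshold=0.2, min_hours=6):
    """Return True if the norepinephrine-equivalent dose (ug/kg/min) exceeds
    `threshold` for more than `min_hours` consecutive hourly samples."""
    run = 0
    for dose in hourly_ne_dose:
        run = run + 1 if dose > threshold else 0
        if run > min_hours:
            return True
    return False

# Seven consecutive hours above 0.2 ug/kg/min meets the definition
print(is_vrh([0.1, 0.25, 0.3, 0.3, 0.28, 0.35, 0.3, 0.25]))  # prints True
```

A single-pass run-length counter like this is how such consecutive-duration criteria are typically screened over large retrospective datasets.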


Subject(s)
Hypotension , Shock , Humans , Critical Illness/therapy , Retrospective Studies , Vasoconstrictor Agents/therapeutic use , Hypotension/etiology , Shock/complications , Norepinephrine/therapeutic use , Phenotype
16.
Neurocrit Care; 37(Suppl 2): 276-290, 2022 08.
Article in English | MEDLINE | ID: mdl-35689135

ABSTRACT

BACKGROUND: We evaluated the feasibility and discriminability of recently proposed Clinical Performance Measures for Neurocritical Care (Neurocritical Care Society) and Quality Indicators for Traumatic Brain Injury (Collaborative European NeuroTrauma Effectiveness Research in TBI; CENTER-TBI) extracted from electronic health record (EHR) flowsheet data. METHODS: At three centers within the Collaborative Hospital Repository Uniting Standards (CHoRUS) for Equitable AI consortium, we examined consecutive neurocritical care admissions exceeding 24 h (03/2015-02/2020) and evaluated the feasibility, discriminability, and site-specific variation of five clinical performance measures and quality indicators: (1) intracranial pressure (ICP) monitoring (ICPM) within 24 h when indicated, (2) ICPM latency when initiated within 24 h, (3) frequency of nurse-documented neurologic assessments, (4) intermittent pneumatic compression device (IPCd) initiation within 24 h, and (5) latency to IPCd application. We additionally explored associations between delayed IPCd initiation and venous thromboembolism codes documented using the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-10). Median (interquartile range) statistics are reported. Kruskal-Wallis tests were used to assess differences across centers, and Dunn statistics are reported for between-center differences. RESULTS: A total of 14,985 admissions met inclusion criteria. ICPM was documented in 1514 (10.1%), neurologic assessments in 14,635 (91.1%), and IPCd application in 14,175 (88.5%). ICPM began within 24 h for 1267 (83.7%), with latency differing among sites 1-3, respectively (0.54 h [2.82], 0.58 h [1.68], and 2.36 h [4.60]; p < 0.001). The frequency of nurse-documented neurologic assessments also varied by site (17.4 per day [5.97], 8.4 per day [3.12], and 15.3 per day [8.34]; p < 0.001) and diurnally (6.90 per day during daytime hours vs. 5.67 per day at night, p < 0.001). IPCds were applied within 24 h for 12,863 (90.7%) patients meeting clinical eligibility (excluding those with EHR documentation of limiting injuries, those actively documented as ambulating, and those refusing prophylaxis). In-hospital venous thromboembolism varied by site (1.23%, 1.55%, and 5.18%; p < 0.001) and was associated with increased IPCd latency (overall, 1.02 h [10.4] vs. 0.97 h [5.98], p = 0.479; site 1, 2.25 h [10.27] vs. 1.82 h [7.39], p = 0.713; site 2, 1.38 h [5.90] vs. 0.80 h [0.53], p = 0.216; site 3, 0.40 h [16.3] vs. 0.35 h [11.5], p = 0.036). CONCLUSIONS: EHR-derived reporting of neurocritical care performance measures is feasible and demonstrates site-specific variation. Future efforts should examine whether performance or documentation drives these measures, what outcomes are associated with performance, and whether EHR-derived performance measures and quality indicators are modifiable.
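The between-site comparisons above rely on the Kruskal-Wallis test, whose H statistic is simple to compute from pooled ranks; a sketch with midranks for ties (the tie-correction factor and the chi-square p-value lookup are omitted):

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic: rank all observations jointly (midranks
    for ties), then H = 12/(N(N+1)) * sum(n_i * (Rbar_i - (N+1)/2)^2)."""
    pooled = sorted(v for g in groups for v in g)
    n = len(pooled)
    rank = {}
    i = 0
    while i < n:  # assign each distinct value the average of its tied ranks
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    h = 0.0
    for g in groups:
        rbar = sum(rank[v] for v in g) / len(g)
        h += len(g) * (rbar - (n + 1) / 2) ** 2
    return 12 / (n * (n + 1)) * h
```

Under the null hypothesis, H is approximately chi-square distributed with (number of groups - 1) degrees of freedom, which is where the quoted p-values come from.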


Subject(s)
Brain Injuries, Traumatic , Venous Thromboembolism , Brain Injuries, Traumatic/therapy , Electronic Health Records , Hospitals , Humans , Intermittent Pneumatic Compression Devices , Pilot Projects
17.
Front Med (Lausanne); 9: 794423, 2022.
Article in English | MEDLINE | ID: mdl-35665340

ABSTRACT

Introduction: Targeted therapies for sepsis have failed to show benefit due to high variability among subjects. We sought to demonstrate distinct phenotypes of septic shock based solely on clinical features and to show that these relate to outcome. Methods: A retrospective analysis was performed of a 1,023-subject cohort with early septic shock from the ProCESS trial. Twenty-three clinical variables at baseline were analyzed using hierarchical clustering, with consensus clustering used to identify and validate the ideal number of clusters in a derivation cohort of 642 subjects from 20 hospitals. Clusters were visualized using heatmaps over 0, 6, 24, and 72 h. Clinical outcomes were 14-day all-cause mortality and organ failure pattern. Cluster robustness was confirmed in a validation cohort of 381 subjects from 11 hospitals. Results: Five phenotypes were identified, each with unique organ failure patterns that persisted over time. By enrollment criteria, all patients had shock. The two high-risk phenotypes were characterized by distinct multiorgan failure patterns and cytokine signatures: the highest-mortality group was characterized most notably by liver dysfunction and coagulopathy, while the other exhibited primarily respiratory failure, neurologic dysfunction, and renal dysfunction. The moderate-risk phenotype was characterized by respiratory failure, while the low-risk phenotypes did not have a high degree of additional organ failure. Conclusions: Sepsis phenotypes with distinct biochemical abnormalities may be identified by clinical characteristics alone and likely provide an opportunity for early clinical actionability and prognosis.

18.
Crit Care; 26(1): 75, 2022 03 22.
Article in English | MEDLINE | ID: mdl-35337366

ABSTRACT

This article is one of ten reviews selected from the Annual Update in Intensive Care and Emergency Medicine 2022. Other selected articles can be found online at https://www.biomedcentral.com/collections/annualupdate2022 . Further information about the Annual Update in Intensive Care and Emergency Medicine is available from https://link.springer.com/bookseries/8901 .


Subject(s)
Artificial Intelligence , Emergency Medicine , Critical Care , Humans
19.
Sensors (Basel); 22(4), 2022 Feb 12.
Article in English | MEDLINE | ID: mdl-35214310

ABSTRACT

Early recognition of pathologic cardiorespiratory stress and forecasting of cardiorespiratory decompensation in the critically ill are difficult even in highly monitored patients in the Intensive Care Unit (ICU). Instability can be intuitively defined as the overt manifestation of the failure of the host to adequately respond to cardiorespiratory stress. The enormous volume of patient data available in ICU environments, including both high-frequency numeric and waveform data accessible from bedside monitors and Electronic Health Record (EHR) data, presents a platform ripe for Artificial Intelligence (AI) approaches to the detection and forecasting of instability, and for data-driven intelligent clinical decision support (CDS). Building unbiased, reliable, and usable AI-based systems across health care sites is rapidly becoming a high priority, specifically as these systems relate to diagnostics, forecasting, and bedside clinical decision support. The ICU environment is particularly well positioned to demonstrate the value of AI in saving lives. The goal is to create AI models embedded in a real-time CDS for forecasting and mitigation of critical instability in ICU patients, of sufficient readiness to be deployed at the bedside. Such a system must leverage multi-source patient data, machine learning, systems engineering, and human action expertise, the latter being key to successful CDS implementation in the clinical workflow and to the evaluation of bias. We present one approach to creating an operationally relevant AI-based forecasting CDS system.


Subject(s)
Decision Support Systems, Clinical , Artificial Intelligence , Critical Care , Humans , Intensive Care Units , Machine Learning
20.
Int J Med Inform; 159: 104643, 2022 03.
Article in English | MEDLINE | ID: mdl-34973608

ABSTRACT

BACKGROUND: Artificial Intelligence (AI) is increasingly used to support bedside clinical decisions, but information must be presented in usable ways within the workflow. Graphical User Interfaces (GUIs) are front-facing presentations for communicating AI outputs, but clinicians are not routinely invited to participate in their design, hindering the potential of AI solutions. PURPOSE: To inform early user-engaged design of a GUI prototype aimed at predicting future Cardiorespiratory Insufficiency (CRI) by exploring clinician methods for identifying at-risk patients, previous experience with implementing new technologies into clinical workflow, and user perspectives on GUI screen changes. METHODS: We conducted a qualitative focus group study to elicit iterative design feedback from clinical end users on an early GUI prototype display. Five online focus group sessions were held, each moderated by an expert focus group methodologist. Iterative design changes were made sequentially, and the updated GUI display was presented to the next group of participants. RESULTS: Twenty-three clinicians were recruited (14 nurses, 4 nurse practitioners, 5 physicians; median participant age approximately 35 years; 60% female; median clinical experience 8 years). Five themes emerged from thematic content analysis: trend evolution; context (risk evolution relative to vital signs and interventions); evaluation/interpretation/explanation (subtheme: continuity of evaluation); clinician intuition; and clinical operations. Based on these themes, GUI display changes were made, for example, color and scale adjustments, integration of clinical information, and threshold personalization. CONCLUSIONS: Early user-engaged design was useful in adjusting the GUI presentation of AI output. Next steps involve clinical testing and further design modification of the AI output to optimally facilitate clinician surveillance and decisions. Clinicians should be involved early and often in clinical decision support design to optimize the efficacy of AI tools.


Subject(s)
Decision Support Systems, Clinical , Physicians , Adult , Artificial Intelligence , Delivery of Health Care , Female , Humans , Male , Workflow