Results 1 - 19 of 19
1.
JAMA Netw Open ; 7(7): e2422399, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39012633

ABSTRACT

Importance: Virtual patient-physician communications have increased since 2020 and negatively impacted primary care physician (PCP) well-being. Generative artificial intelligence (GenAI) drafts of patient messages could potentially reduce health care professional (HCP) workload and improve communication quality, but only if the drafts are considered useful. Objectives: To assess PCPs' perceptions of GenAI drafts and to examine linguistic characteristics associated with equity and perceived empathy. Design, Setting, and Participants: This cross-sectional quality improvement study tested the hypothesis that PCPs' ratings of GenAI drafts (created using the electronic health record [EHR] standard prompts) would be equivalent to HCP-generated responses on 3 dimensions. The study was conducted at NYU Langone Health using private patient-HCP communications at 3 internal medicine practices piloting GenAI. Exposures: Randomly assigned patient messages coupled with either an HCP message or the draft GenAI response. Main Outcomes and Measures: PCPs rated responses' information content quality (eg, relevance) and communication quality (eg, verbosity), each using a Likert scale, and whether they would use the draft or start anew (usable vs unusable). Branching logic further probed for empathy, personalization, and professionalism of responses. Computational linguistics methods assessed content differences in HCP vs GenAI responses, focusing on equity and empathy. Results: A total of 16 PCPs (8 [50.0%] female) reviewed 344 messages (175 GenAI drafted; 169 HCP drafted). Both GenAI and HCP responses were rated favorably. GenAI responses were rated higher for communication style than HCP responses (mean [SD], 3.70 [1.15] vs 3.38 [1.20]; P = .01, U = 12 568.5) but were similar to HCPs on information content (mean [SD], 3.53 [1.26] vs 3.41 [1.27]; P = .37; U = 13 981.0) and usable draft proportion (mean [SD], 0.69 [0.48] vs 0.65 [0.47], P = .49, t = -0.6842). 
Usable GenAI responses were considered more empathetic than usable HCP responses (32 of 86 [37.2%] vs 13 of 79 [16.5%]; difference, 125.5%), possibly attributable to more subjective (mean [SD], 0.54 [0.16] vs 0.31 [0.23]; P < .001; difference, 74.2%) and positive (mean [SD] polarity, 0.21 [0.14] vs 0.13 [0.25]; P = .02; difference, 61.5%) language; they were also numerically longer (mean [SD] word count, 90.5 [32.0] vs 65.4 [62.6]; difference, 38.4%; P = .07, not statistically significant) and more linguistically complex (mean [SD] score, 125.2 [47.8] vs 95.4 [58.8]; P = .002; difference, 31.2%). Conclusions: In this cross-sectional study of PCP perceptions of an EHR-integrated GenAI chatbot, GenAI was found to communicate information better and with more empathy than HCPs, highlighting its potential to enhance patient-HCP communication. However, GenAI drafts were less readable than HCPs', a notable concern for patients with low health or English literacy.
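The rating comparison above (reported as U statistics with P values) is a Mann-Whitney U test on Likert scores. A minimal, stdlib-only sketch of the statistic; the ratings below are hypothetical, not the study's data:

```python
# Mann-Whitney U statistic for two samples of 1-5 Likert ratings,
# using midranks for tied values.

def mann_whitney_u(a, b):
    """U statistic for sample `a`, with average (mid) ranks for ties."""
    pooled = sorted(a + b)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # average of 1-based ranks i+1..j
        i = j
    r_a = sum(ranks[x] for x in a)          # rank sum of the first sample
    return r_a - len(a) * (len(a) + 1) / 2

genai = [4, 5, 3, 4, 4, 5, 4, 2]   # hypothetical GenAI ratings
hcp = [3, 4, 3, 2, 4, 3, 3, 2]     # hypothetical HCP ratings
print(mann_whitney_u(genai, hcp))
```

In practice, `scipy.stats.mannwhitneyu` is the usual choice, since it also supplies the P value and continuity correction.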


Subject(s)
Physician-Patient Relations , Humans , Cross-Sectional Studies , Female , Male , Adult , Middle Aged , Communication , Quality Improvement , Artificial Intelligence , Physicians, Primary Care/psychology , Electronic Health Records , Language , Empathy , Attitude of Health Personnel
2.
Article in English | MEDLINE | ID: mdl-38778578

ABSTRACT

OBJECTIVES: To evaluate the proficiency of a HIPAA-compliant version of GPT-4 in identifying actionable, incidental findings from unstructured radiology reports of Emergency Department patients, and to assess the appropriateness of artificial intelligence (AI)-generated, patient-facing summaries of these findings. MATERIALS AND METHODS: Radiology reports extracted from the electronic health record of a large academic medical center were manually reviewed to identify non-emergent, incidental findings with high likelihood of requiring follow-up, further sub-stratified as "definitely actionable" (DA) or "possibly actionable-clinical correlation" (PA-CC). Instruction prompts to GPT-4 were developed and iteratively optimized using a validation set of 50 reports. The optimized prompt was then applied to a test set of 430 unseen reports. GPT-4 performance was primarily graded on accuracy in identifying either DA or PA-CC findings, then secondarily for DA findings alone. Outputs were reviewed for hallucinations. AI-generated patient-facing summaries were assessed for appropriateness via Likert scale. RESULTS: For the primary outcome (DA or PA-CC), GPT-4 achieved 99.3% recall, 73.6% precision, and 84.5% F-1. For the secondary outcome (DA only), GPT-4 demonstrated 95.2% recall, 77.3% precision, and 85.3% F-1. No findings were "hallucinated" outright. However, 2.8% of cases included generated text about recommendations that were inferred without specific reference. The majority of true-positive AI-generated summaries required no or minor revision. CONCLUSION: GPT-4 demonstrates proficiency in detecting actionable, incidental findings after refined instruction prompting. AI-generated patient instructions were most often appropriate, though a small fraction included inferred recommendations. While this technology shows promise to augment diagnostics, active clinician oversight via "human-in-the-loop" workflows remains critical for clinical implementation.
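The recall, precision, and F1 figures above all follow from true-positive, false-positive, and false-negative counts. A quick sketch of the formulas; the counts below are hypothetical, chosen only for illustration:

```python
# Precision, recall, and F1 from confusion counts.

def prf(tp, fp, fn):
    precision = tp / (tp + fp)                       # flagged findings that are real
    recall = tp / (tp + fn)                          # real findings that were flagged
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f = prf(tp=140, fp=50, fn=1)
print(f"precision={p:.1%} recall={r:.1%} F1={f:.1%}")
```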

3.
medRxiv ; 2024 Feb 13.
Article in English | MEDLINE | ID: mdl-38405784

ABSTRACT

Importance: Large language models (LLMs) are crucial for medical tasks. Ensuring their reliability is vital to avoid false results. Our study assesses two state-of-the-art LLMs (ChatGPT and LlaMA-2) for extracting clinical information, focusing on cognitive tests like MMSE and CDR. Objective: Evaluate ChatGPT and LlaMA-2 performance in extracting MMSE and CDR scores, including their associated dates. Methods: Our data consisted of 135,307 clinical notes (Jan 12th, 2010 to May 24th, 2023) mentioning MMSE, CDR, or MoCA. After applying inclusion criteria, 34,465 notes remained, of which 765 were processed with ChatGPT (GPT-4) and LlaMA-2, and 22 experts reviewed the responses. ChatGPT successfully extracted MMSE and CDR instances with dates from 742 notes. We used 20 notes for fine-tuning and training the reviewers. The remaining 722 were assigned to reviewers, with 309 each assigned to two reviewers simultaneously. Inter-rater agreement (Fleiss' kappa), precision, recall, true/false negative rates, and accuracy were calculated. Our study follows TRIPOD reporting guidelines for model validation. Results: For MMSE information extraction, ChatGPT (vs. LlaMA-2) achieved accuracy of 83% (vs. 66.4%), sensitivity of 89.7% (vs. 69.9%), true-negative rates of 96% (vs. 60.0%), and precision of 82.7% (vs. 62.2%). For CDR, results were lower overall, with accuracy of 87.1% (vs. 74.5%), sensitivity of 84.3% (vs. 39.7%), true-negative rates of 99.8% (vs. 98.4%), and precision of 48.3% (vs. 16.1%). We qualitatively evaluated the MMSE errors of ChatGPT and LlaMA-2 on double-reviewed notes. LlaMA-2 errors included 27 cases of total hallucination, 19 cases of reporting other scores instead of MMSE, 25 missed scores, and 23 cases of reporting only the wrong date. In comparison, ChatGPT's errors included only 3 cases of total hallucination, 17 cases of wrong test reported instead of MMSE, and 19 cases of reporting a wrong date. 
Conclusions: In this diagnostic/prognostic study of ChatGPT and LlaMA-2 for extracting cognitive exam dates and scores from clinical notes, ChatGPT exhibited high accuracy, with better performance compared to LlaMA-2. The use of LLMs could benefit dementia research and clinical care by identifying patients eligible for treatment initiation or clinical trial enrollment. Rigorous evaluation of LLMs is crucial to understanding their capabilities and limitations.
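The accuracy, sensitivity, true-negative rate, and precision figures compared above all derive from one 2x2 confusion matrix. A stdlib-only sketch with hypothetical counts (not the study's tallies):

```python
# Standard extraction metrics from a 2x2 confusion matrix.

def confusion_metrics(tp, tn, fp, fn):
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # a.k.a. recall
        "tnr": tn / (tn + fp),           # true-negative rate (specificity)
        "precision": tp / (tp + fp),
    }

m = confusion_metrics(tp=87, tn=48, fp=18, fn=10)
print({k: round(v, 3) for k, v in m.items()})
```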

4.
Infect Control Hosp Epidemiol ; 45(6): 717-725, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38288606

ABSTRACT

BACKGROUND: There is a paucity of data guiding treatment duration of oral vancomycin for Clostridioides difficile infection (CDI) in patients requiring concomitant systemic antibiotics. OBJECTIVES: To evaluate prescribing practices of vancomycin for CDI in patients who required concurrent systemic antibiotics and to determine whether a prolonged duration of vancomycin (>14 days), compared to a standard duration (10-14 days), decreased CDI recurrence. METHODS: In this retrospective cohort study, we evaluated adult hospitalized patients with an initial episode of CDI who were treated with vancomycin and who received overlapping systemic antibiotics for >72 hours. Outcomes of interest included CDI recurrence and isolation of vancomycin-resistant Enterococcus (VRE). RESULTS: Among the 218 patients included, 36% received a standard duration and 64% received a prolonged duration of treatment for a median of 13 days (11-14) and 20 days (16-26), respectively. Patients who received a prolonged duration had a longer median duration of systemic antibiotic overlap with vancomycin (11 vs 8 days; P < .001) and significantly more carbapenem use and infectious disease consultation. Recurrence at 8 weeks (12% standard duration vs 8% prolonged duration; P = .367), recurrence at 6 months (15% standard duration vs 10% prolonged duration; P = .240), and VRE isolation (3% standard duration vs 9% prolonged duration; P = .083) were not significantly different between groups. Discontinuation of vancomycin prior to completion of antibiotics was an independent predictor of 8-week recurrence on multivariable logistic regression (OR, 4.8; 95% CI, 1.3-18.1). CONCLUSIONS: Oral vancomycin prescribing relative to the systemic antibiotic end date may affect CDI recurrence to a greater extent than total vancomycin duration alone. Further studies are needed to confirm these findings.
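An odds ratio with a 95% CI like the one reported above comes from a logistic-regression coefficient: OR = exp(beta) and CI = exp(beta +/- 1.96 * SE). A sketch with a hypothetical coefficient and standard error, not the study's actual fit:

```python
# Odds ratio and Wald 95% CI from a logistic-regression coefficient.
import math

def odds_ratio_ci(beta, se, z=1.96):
    """OR and 95% CI from a logit coefficient and its standard error."""
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))

# Hypothetical beta and SE for illustration only.
or_, (lo, hi) = odds_ratio_ci(beta=1.57, se=0.67)
print(f"OR {or_:.1f} (95% CI, {lo:.1f}-{hi:.1f})")
```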


Subject(s)
Anti-Bacterial Agents , Clostridioides difficile , Clostridium Infections , Recurrence , Vancomycin , Humans , Vancomycin/administration & dosage , Vancomycin/therapeutic use , Retrospective Studies , Male , Female , Anti-Bacterial Agents/therapeutic use , Anti-Bacterial Agents/administration & dosage , Middle Aged , Clostridium Infections/drug therapy , Aged , Administration, Oral , Aged, 80 and over , Drug Administration Schedule , Vancomycin-Resistant Enterococci , Adult
5.
Res Sq ; 2023 Jul 03.
Article in English | MEDLINE | ID: mdl-37461545

ABSTRACT

Pathology reports are considered the gold standard in medical research due to their comprehensive and accurate diagnostic information. Natural language processing (NLP) techniques have been developed to automate information extraction from pathology reports. However, existing studies suffer from two significant limitations. First, they typically frame their tasks as report classification, which restricts the granularity of extracted information. Second, they often fail to generalize to unseen reports due to variations in language, negation, and human error. To overcome these challenges, we propose a BERT (bidirectional encoder representations from transformers) named entity recognition (NER) system to extract key diagnostic elements from pathology reports. We also introduce four data augmentation methods to improve the robustness of our model. Trained and evaluated on 1438 annotated breast pathology reports acquired from a large medical center in the United States, our BERT model with data augmentation achieves an entity F1-score of 0.916 on an internal test set, surpassing the BERT baseline (0.843). We further assessed the model's generalizability using an external validation dataset from the United Arab Emirates, where our model maintained satisfactory performance (F1-score 0.860). Our findings demonstrate that our NER system can effectively extract fine-grained information from widely diverse medical reports, offering the potential for large-scale information extraction in a wide range of medical and AI research. We publish our code at https://github.com/nyukat/pathology_extraction.
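The entity F1-score used above typically counts an entity correct only when both its span and its label match the annotation exactly. A stdlib-only sketch; the (start, end, label) entities below are invented for illustration and are not the paper's label set:

```python
# Exact-match entity-level F1 over (start, end, label) tuples.

def entity_f1(gold, pred):
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)                       # exact span+label matches
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = {(0, 4, "HISTOLOGY"), (10, 13, "GRADE"), (20, 24, "MARGIN")}
pred = {(0, 4, "HISTOLOGY"), (10, 13, "GRADE"), (30, 33, "MARGIN")}
print(round(entity_f1(gold, pred), 3))   # 2 of 3 entities match exactly
```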

6.
Appl Clin Inform ; 13(3): 632-640, 2022 05.
Article in English | MEDLINE | ID: mdl-35896506

ABSTRACT

BACKGROUND: We previously developed and validated a predictive model to help clinicians identify hospitalized adults with coronavirus disease 2019 (COVID-19) who may be ready for discharge given their low risk of adverse events. Whether this algorithm can prompt more timely discharge for stable patients in practice is unknown. OBJECTIVES: The aim of the study is to estimate the effect of displaying risk scores on length of stay (LOS). METHODS: We integrated model output into the electronic health record (EHR) at four hospitals in one health system by displaying a green/orange/red score indicating low/moderate/high-risk in a patient list column and a larger COVID-19 summary report visible for each patient. Display of the score was pseudo-randomized 1:1 into intervention and control arms using a patient identifier passed to the model execution code. Intervention effect was assessed by comparing LOS between intervention and control groups. Adverse safety outcomes of death, hospice, and re-presentation were tested separately and as a composite indicator. We tracked adoption and sustained use through daily counts of score displays. RESULTS: Enrolling 1,010 patients from May 15, 2020 to December 7, 2020, the trial found no detectable difference in LOS. The intervention had no impact on safety indicators of death, hospice or re-presentation after discharge. The scores were displayed consistently throughout the study period but the study lacks a causally linked process measure of provider actions based on the score. Secondary analysis revealed complex dynamics in LOS temporally, by primary symptom, and hospital location. CONCLUSION: An AI-based COVID-19 risk score displayed passively to clinicians during routine care of hospitalized adults with COVID-19 was safe but had no detectable impact on LOS. 
Health technology challenges such as insufficient adoption, nonuniform use, and limited provider trust, compounded by temporal factors of the COVID-19 pandemic, may have contributed to the null result. TRIAL REGISTRATION: ClinicalTrials.gov identifier: NCT04570488.
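Pseudo-randomizing on a stable patient identifier, as described above, can be done by hashing the identifier so that a patient always lands in the same arm across visits. A sketch of the general idea; the identifiers and hashing scheme are assumptions for illustration, not the trial's exact implementation:

```python
# Deterministic 1:1 arm assignment from a patient identifier.
import hashlib

def assign_arm(patient_id: str) -> str:
    """Hash the identifier and use its parity to pick an arm."""
    digest = hashlib.sha256(patient_id.encode()).hexdigest()
    return "intervention" if int(digest, 16) % 2 == 0 else "control"

# Hypothetical identifiers; the same ID always yields the same arm.
for pid in ["MRN-1001", "MRN-1002", "MRN-1003"]:
    print(pid, assign_arm(pid))
```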


Subject(s)
COVID-19 , Adult , COVID-19/epidemiology , Hospitalization , Humans , Pandemics , Patient Discharge , SARS-CoV-2 , Treatment Outcome
9.
NPJ Digit Med ; 3: 130, 2020.
Article in English | MEDLINE | ID: mdl-33083565

ABSTRACT

The COVID-19 pandemic has challenged front-line clinical decision-making, leading to numerous published prognostic tools. However, few models have been prospectively validated and none report implementation in practice. Here, we use 3345 retrospective and 474 prospective hospitalizations to develop and validate a parsimonious model to identify patients with favorable outcomes within 96 h of a prediction, based on real-time lab values, vital signs, and oxygen support variables. In retrospective and prospective validation, the model achieves high average precision (88.6% [95% CI: 88.4-88.7] and 90.8% [90.8-90.8]) and discrimination (95.1% [95.1-95.2] and 86.8% [86.8-86.9]), respectively. We implemented and integrated the model into the EHR, achieving a positive predictive value of 93.3% with 41% sensitivity. Preliminary results suggest clinicians are adopting these scores into their clinical workflows.
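An operating point like the 93.3% PPV at 41% sensitivity above comes from thresholding the model's score. A stdlib-only sketch with hypothetical scores and labels (1 = favorable outcome), not the deployed model's data:

```python
# PPV and sensitivity of a score threshold over labeled examples.

def ppv_sensitivity(scores, labels, threshold):
    flagged = [(s >= threshold, y) for s, y in zip(scores, labels)]
    tp = sum(1 for f, y in flagged if f and y == 1)
    fp = sum(1 for f, y in flagged if f and y == 0)
    fn = sum(1 for f, y in flagged if not f and y == 1)
    ppv = tp / (tp + fp) if tp + fp else 0.0
    sens = tp / (tp + fn) if tp + fn else 0.0
    return ppv, sens

scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.40, 0.30, 0.20]
labels = [1,    1,    0,    1,    1,    0,    0,    1]
print(ppv_sensitivity(scores, labels, threshold=0.85))
```

Raising the threshold trades sensitivity for PPV, which is exactly the trade captured in the abstract's operating point.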

10.
BMC Med Inform Decis Mak ; 20(1): 214, 2020 09 07.
Article in English | MEDLINE | ID: mdl-32894128

ABSTRACT

BACKGROUND: Automated systems that use machine learning to estimate a patient's risk of death are being developed to influence care. Transparent reporting of model generalizability across subpopulations remains sparse, especially for implemented systems. METHODS: A prognostic study included adult admissions at a multi-site, academic medical center between 2015 and 2017. A predictive model for all-cause mortality (including initiation of hospice care) within 60 days of admission was developed. Model generalizability is assessed in temporal validation in the context of potential demographic bias. A subsequent prospective cohort study was conducted at the same sites between October 2018 and June 2019. Model performance during prospective validation was quantified with areas under the receiver operating characteristic and precision recall curves stratified by site. Prospective results include timeliness, positive predictive value, and the number of actionable predictions. RESULTS: Three years of development data included 128,941 inpatient admissions (94,733 unique patients) across sites where patients are mostly white (61%) and female (60%) and 4.2% of admissions led to death within 60 days. A random forest model incorporating 9614 predictors produced areas under the receiver operating characteristic and precision recall curves of 87.2 (95% CI, 86.1-88.2) and 28.0 (95% CI, 25.0-31.0) in temporal validation. Performance marginally diverges within sites as the patient mix shifts from development to validation (the proportion of patients from one site increased from 10% to 38%). Applied prospectively for nine months, 41,728 predictions were generated in real-time (median [IQR], 1.3 [0.9, 32] minutes). An operating criterion of 75% positive predictive value identified 104 predictions at very high risk (0.25%) where 65% (50 of 77 well-timed predictions) led to death within 60 days. CONCLUSION: Temporal validation demonstrates good model discrimination for 60-day mortality. 
Slight performance variations are observed across demographic subpopulations. The model was implemented prospectively and successfully produced meaningful estimates of risk within minutes of admission.
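The "operating criterion of 75% positive predictive value" above corresponds to choosing a score threshold whose flagged set reaches the target PPV on validation data. A stdlib-only sketch with hypothetical scores and labels (1 = death within 60 days):

```python
# Lowest observed-score threshold whose flagged set reaches a target PPV.

def threshold_for_ppv(scores, labels, target_ppv):
    """Return the lowest threshold with PPV >= target, or None if unreachable."""
    for t in sorted(set(scores)):
        flagged = [y for s, y in zip(scores, labels) if s >= t]
        if flagged and sum(flagged) / len(flagged) >= target_ppv:
            return t          # thresholds ascend, so the first hit is the lowest
    return None

scores = [0.1, 0.2, 0.3, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   1,   0,   1,   1,   1]
print(threshold_for_ppv(scores, labels, target_ppv=0.75))
```

The lowest qualifying threshold maximizes how many high-risk patients are flagged while still meeting the precision target.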


Subject(s)
Electronic Health Records , Hospitalization , Machine Learning , Patient Admission , Adolescent , Adult , Aged , Aged, 80 and over , Female , Humans , Male , Middle Aged , Mortality , Prognosis , Prospective Studies , Young Adult
11.
JAMIA Open ; 3(2): 243-251, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32734165

ABSTRACT

OBJECTIVE: One primary consideration when developing predictive models is downstream effects on future model performance. We conduct experiments to quantify the effects of experimental design choices, namely cohort selection and internal validation methods, on (estimated) real-world model performance. MATERIALS AND METHODS: Four years of hospitalizations are used to develop a 1-year mortality prediction model (composite of death or initiation of hospice care). Two common methods to select appropriate patient visits from their encounter history (backwards-from-outcome and forwards-from-admission) are combined with 2 testing cohorts (random and temporal validation). Two models are trained under otherwise identical conditions, and their performances compared. Operating thresholds are selected in each test set and applied to a "real-world" cohort of labeled admissions from another, unused year. RESULTS: Backwards-from-outcome cohort selection retains 25% of candidate admissions (n = 23 579), whereas forwards-from-admission selection includes many more (n = 92 148). Both selection methods produce similar performances when applied to a random test set. However, when applied to the temporally defined "real-world" set, forwards-from-admission yields higher areas under the ROC and precision recall curves (88.3% and 56.5% vs. 83.2% and 41.6%). DISCUSSION: A backwards-from-outcome experiment manipulates raw training data, simplifying the experiment. This manipulated data no longer resembles real-world data, resulting in optimistic estimates of test set performance, especially at high precision. In contrast, a forwards-from-admission experiment with a temporally separated test set consistently and conservatively estimates real-world performance. CONCLUSION: Experimental design choices impose bias upon selected cohorts. A forwards-from-admission experiment, validated temporally, can conservatively estimate real-world performance. 
LAY SUMMARY: The routine care of patients stands to benefit greatly from assistive technologies, including data-driven risk assessment. Already, many different machine learning and artificial intelligence applications are being developed from complex electronic health record data. To overcome challenges that arise from such data, researchers often start with simple experimental approaches to test their work. One key component is how patients (and their healthcare visits) are selected for the study from the pool of all patients seen. Another is how the group of patients used to create the risk estimator differs from the group used to evaluate how well it works. These choices complicate how the experimental setting compares to the real-world application to patients. For example, different selection approaches that depend on each patient's future outcome can simplify the experiment but are impractical upon implementation as these data are unavailable. We show that this kind of "backwards" experiment optimistically estimates how well the model performs. Instead, our results advocate for experiments that select patients in a "forwards" manner and "temporal" validation that approximates training on past data and implementing on future data. More robust results help gauge the clinical utility of recent works and aid decision-making before implementation into practice.
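The core contrast above is whether the outcome label is allowed to influence which admissions enter the cohort. A deliberately simplified, stdlib-only sketch over hypothetical admission records; the paper's actual selection logic is more involved:

```python
# Backwards-from-outcome vs. forwards-from-admission cohort selection.

admissions = [
    {"patient": "A", "visit": 1, "died_within_1y": False},
    {"patient": "A", "visit": 2, "died_within_1y": True},
    {"patient": "B", "visit": 1, "died_within_1y": False},
]

def backwards_from_outcome(rows):
    """Keep one visit per patient, preferring the one tied to the outcome.
    Note the selection reads the label, which is unavailable at deployment."""
    keep = {}
    for r in rows:
        prev = keep.get(r["patient"])
        if prev is None or (r["died_within_1y"], r["visit"]) > (prev["died_within_1y"], prev["visit"]):
            keep[r["patient"]] = r
    return list(keep.values())

def forwards_from_admission(rows):
    """Keep every admission; labels never influence selection."""
    return list(rows)

print(len(backwards_from_outcome(admissions)), len(forwards_from_admission(admissions)))
```

Because the backwards variant peeks at the label, the resulting cohort no longer resembles the admissions a deployed model would actually score, which is the source of the optimism the abstract describes.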

13.
Comput Methods Programs Biomed ; 171: 67-79, 2019 Apr.
Article in English | MEDLINE | ID: mdl-27697371

ABSTRACT

Monitoring of respiratory mechanics is required for guiding patient-specific mechanical ventilation settings in critical care. Many models of respiratory mechanics perform poorly in the presence of variable patient effort. Typical modelling approaches either attempt to mitigate the effect of the patient effort on the airway pressure waveforms, or attempt to capture the size and shape of the patient effort. This work analyses a range of methods to identify respiratory mechanics in volume-controlled ventilation modes when there is patient effort. The models are compared using 4 datasets, each with a sample of 30 breaths before, and 2-3 minutes after, sedation was administered. Sedation reduces patient effort, but the underlying pulmonary mechanical properties are unlikely to change over this short time. Model-identified parameters from breathing cycles with patient effort are compared to breathing cycles without patient effort. All models have advantages and disadvantages, so model selection may be specific to the respiratory mechanics application. However, in general, the combined method of iterative interpolative pressure reconstruction and stacking multiple consecutive breaths together has the best performance across the datasets. The variability of identified elastance when there is patient effort is lowest with this method, and there is little systematic offset in identified mechanics when sedation is administered.
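A common baseline behind this kind of identification is the single-compartment model Paw(t) = E·V(t) + R·Q(t) + P0, fit per breath by linear least squares. A stdlib-only sketch on synthetic, effort-free waveforms; the paper's effort-handling methods (pressure reconstruction, breath stacking) are not reproduced here:

```python
# Least-squares identification of elastance E, resistance R, and offset P0
# from airway pressure, volume, and flow of one synthetic breath.
import math

def solve3(M, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(3):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [A[i][3] / A[i][i] for i in range(3)]

def fit_single_compartment(V, Q, Paw):
    """Least-squares E, R, P0 via the normal equations of Paw = E*V + R*Q + P0."""
    rows = [[v, q, 1.0] for v, q in zip(V, Q)]
    M = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * p for r, p in zip(rows, Paw)) for i in range(3)]
    return solve3(M, b)

# Synthetic breath from known mechanics (cmH2O/L, cmH2O.s/L, cmH2O).
E_true, R_true, P0_true = 25.0, 8.0, 5.0
t = [i / 49 for i in range(50)]
Q = [0.5 * math.sin(math.pi * x) for x in t]        # flow (L/s)
V = [0.0]
for i in range(1, 50):                              # trapezoidal volume (L)
    V.append(V[-1] + (Q[i] + Q[i - 1]) / 2 * (t[i] - t[i - 1]))
Paw = [E_true * v + R_true * q + P0_true for v, q in zip(V, Q)]

E_hat, R_hat, P0_hat = fit_single_compartment(V, Q, Paw)
print(round(E_hat, 3), round(R_hat, 3), round(P0_hat, 3))
```

On clean data the fit recovers the true parameters exactly; the paper's contribution is precisely what to do when patient effort corrupts Paw.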


Subject(s)
Biostatistics , Models, Statistical , Respiratory Function Tests/standards , Respiratory Mechanics/physiology , Critical Care , Databases, Factual , Humans , Respiratory Insufficiency/physiopathology
14.
Biomed Eng Online ; 17(1): 169, 2018 Nov 12.
Article in English | MEDLINE | ID: mdl-30419903

ABSTRACT

BACKGROUND: Mechanical ventilation is an essential therapy to support critically ill respiratory failure patients. Current standards of care consist of generalised approaches, such as the use of positive end expiratory pressure to inspired oxygen fraction (PEEP-FiO2) tables, which fail to account for inter- and intra-patient variability. The benefits of higher or lower tidal volume, PEEP, and other settings are highly debated and no consensus has been reached. Moreover, clinicians implicitly account for patient-specific factors such as disease condition and progression as they manually titrate ventilator settings. Hence, care is highly variable and potentially often non-optimal. These conditions create a situation that could benefit greatly from an engineered approach. The overall goal is a review of ventilation that is accessible to both clinicians and engineers, to bridge the divide between the two fields and enable collaboration to improve patient care and outcomes. This review does not take the form of a typical systematic review. Instead, it defines the standard terminology and introduces key clinical and biomedical measurements before introducing the key clinical studies and their influence on clinical practice, which in turn leads into the needs and requirements for biomedical engineering research to play a role in improving care. Given the significant clinical research to date and its impact on this complex area of care, this review thus provides a tutorial introduction to the state of the art relevant to a biomedical engineering perspective. DISCUSSION: This review presents the significant clinical aspects and variables of ventilation management, the potential risks associated with suboptimal ventilation management, and a review of the major recent attempts to improve ventilation in the context of these variables. 
The unique aspect of this review is its focus on the key elements relevant to engineering new approaches. In particular, it demonstrates the need for ventilation strategies that consider, and directly account for, the significant differences in patient condition, disease etiology, and progression, along with the subsequent requirement for optimal ventilation strategies to titrate to patient- and time-specific conditions. CONCLUSION: Engineered, protective lung strategies that can directly account for and manage inter- and intra-patient variability thus offer great potential to improve both individual care and cohort clinical outcomes.


Subject(s)
Biomedical Engineering , Critical Care , Positive-Pressure Respiration/instrumentation , Respiration, Artificial/instrumentation , Animals , Critical Illness , Humans , Lung , Lung Injury/etiology , Oscillometry , Oxygen/blood , Oxygen/chemistry , Positive-Pressure Respiration/methods , Pressure , Respiration, Artificial/methods , Respiratory Distress Syndrome/therapy , Risk , Tidal Volume , Ventilators, Mechanical
15.
AMIA Jt Summits Transl Sci Proc ; 2017: 104-112, 2018.
Article in English | MEDLINE | ID: mdl-29888051

ABSTRACT

Natural Language Processing (NLP) holds potential for patient care and clinical research, but a gap exists between promise and reality. While some studies have demonstrated portability of NLP systems across multiple sites, challenges remain. Strategies to mitigate these challenges can tackle complex NLP problems using advanced methods (hard-to-reach fruit) or focus on simple NLP problems using practical methods (low-hanging fruit). This paper investigates a practical strategy for NLP portability using extraction of left ventricular ejection fraction (LVEF) as a use case. We used a tool developed at the Department of Veterans Affairs (VA) to extract LVEF values from free-text echocardiograms in the MIMIC-III database. The approach showed an accuracy of 98.4%, a sensitivity of 99.4%, a positive predictive value of 98.7%, and an F-score of 99.0%. This experience, in which a simple NLP solution proved highly portable with excellent performance, illustrates that simple NLP applications may be easier to disseminate and adapt, and in the short term may prove more useful, than complex applications.
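The "low-hanging fruit" flavor of extraction above is often little more than pattern matching. A minimal, regex-based sketch of pulling LVEF values from free-text echo reports; the pattern and example sentences are invented for illustration and are far simpler than the VA tool:

```python
# Toy LVEF extractor: finds single values ("45%") and ranges ("55-60%").
import re

LVEF_RE = re.compile(
    r"(?:LVEF|ejection fraction)\D{0,20}?(\d{1,2})\s*(?:-|to)?\s*(\d{1,2})?\s*%",
    re.IGNORECASE,
)

def extract_lvef(text):
    """Return (low, high) percent tuples for each LVEF mention found."""
    out = []
    for m in LVEF_RE.finditer(text):
        lo = int(m.group(1))
        hi = int(m.group(2)) if m.group(2) else lo
        out.append((lo, hi))
    return out

report = "The LVEF is estimated at 55-60%. Prior ejection fraction: 45 %."
print(extract_lvef(report))
```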

16.
AMIA Annu Symp Proc ; 2018: 1405-1414, 2018.
Article in English | MEDLINE | ID: mdl-30815185

ABSTRACT

Conventional text classification models make a bag-of-words assumption, reducing text to word occurrence counts per document. Recent algorithms such as word2vec are capable of learning semantic meaning and similarity between words in an entirely unsupervised manner using a contextual window, and do so much faster than previous methods. Each word is projected into vector space such that words with similar meanings, such as "strong" and "powerful", are projected into the same general Euclidean space. Open questions about these embeddings include their utility across classification tasks and the optimal properties and source of documents used to construct broadly functional embeddings. In this work, we demonstrate the usefulness of pre-trained embeddings for classification in our task and show that custom word embeddings, built on in-domain data for the task, can improve performance over word embeddings learnt on more general data, including news articles or Wikipedia.
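The similarity notion above is usually cosine similarity between embedding vectors. A stdlib-only sketch with tiny hand-made 3-dimensional vectors (real word2vec embeddings have hundreds of dimensions, and these toy values are invented):

```python
# Cosine similarity between word-embedding vectors.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

emb = {
    "strong":   [0.9, 0.8, 0.1],
    "powerful": [0.8, 0.9, 0.2],
    "weak":     [-0.7, -0.6, 0.0],
}
print(round(cosine(emb["strong"], emb["powerful"]), 3),
      round(cosine(emb["strong"], emb["weak"]), 3))
```

Similar-meaning words score near 1, unrelated or opposed words near 0 or below, which is what lets a classifier exploit the geometry of the embedding space.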


Subject(s)
Algorithms , Natural Language Processing , Bayes Theorem , Decision Trees , Humans , Logistic Models , Semantics , Support Vector Machine
17.
AMIA Annu Symp Proc ; 2016: 844-853, 2016.
Article in English | MEDLINE | ID: mdl-28269881

ABSTRACT

Complex medical data sometimes require significant preprocessing before analysis. The complexity can lead non-domain experts to apply simple filters to the available data or to not use the data at all. Preprocessing choices can also seriously affect a study's results if incorrect decisions or missteps are made. In this work, we present open-source data filters for an analysis motivated by understanding mortality in the context of sepsis-associated cardiomyopathy in the ICU. We report specific ICU filters and validations through chart review and graphs. These published filters reduce the complexity of using the data in analysis by (1) encapsulating the domain expertise and feature engineering applied to the filter, (2) providing debugged, ready-to-use code, and (3) providing sensible validations. We intend these filters to evolve through pull requests and forks and to serve as common starting points for specific analyses.
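A reusable cohort filter of the kind described above is just a small, testable function that encapsulates one inclusion decision. A sketch; the field names and thresholds are hypothetical, not the published filters:

```python
# Example cohort filter: each patient's first ICU stay, adults only,
# with a minimum length of stay.

def first_icu_stay_adults(stays, min_age=18, min_hours=24):
    seen = set()
    out = []
    for stay in sorted(stays, key=lambda s: s["intime"]):   # chronological order
        if (stay["age"] >= min_age
                and stay["los_hours"] >= min_hours
                and stay["patient"] not in seen):
            seen.add(stay["patient"])
            out.append(stay)
    return out

stays = [
    {"patient": 1, "intime": "2015-01-02", "age": 64, "los_hours": 72},
    {"patient": 1, "intime": "2015-03-09", "age": 64, "los_hours": 48},  # readmission
    {"patient": 2, "intime": "2015-02-01", "age": 17, "los_hours": 30},  # minor
]
print([s["patient"] for s in first_icu_stay_adults(stays)])
```

Packaging the decision as one function makes the inclusion logic reviewable and easy to validate against chart review, which is the point the abstract makes.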


Subject(s)
Cardiomyopathies/etiology , Databases, Factual , Information Storage and Retrieval/methods , Intensive Care Units/organization & administration , Sepsis/complications , Software , Adult , Aged , Aged, 80 and over , Cardiomyopathies/mortality , Cardiomyopathies/therapy , Echocardiography , Female , Hospital Mortality , Humans , Logistic Models , Male , Medical Records Systems, Computerized , Middle Aged , Organizational Case Studies
18.
Article in English | MEDLINE | ID: mdl-26737491

ABSTRACT

Asynchronous Events (AEs) during mechanical ventilation (MV) result in increased work of breathing and potentially poor patient outcomes, so it is important to automate AE detection. In this study, an AE detection method, Automated Logging of Inspiratory and Expiratory Non-synchronized breathing (ALIEN), was developed and compared against standard manual detection in 11 MV patients. A total of 5701 breaths were analyzed (median [IQR]: 500 [469-573] per patient). The Asynchrony Index (AI) was 51% [28-78%]. AE detection yielded a sensitivity of 90.3% and a specificity of 88.3%. Automated AE detection methods can potentially provide clinicians with real-time information on patient-ventilator interaction.
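The Asynchrony Index above is the percentage of breaths flagged as asynchronous, and the sensitivity/specificity compare automated flags against manual labels. A stdlib-only sketch over a hypothetical per-breath labelling (1 = asynchronous event, 0 = normal breath):

```python
# Asynchrony Index plus sensitivity/specificity of automated detection.

def asynchrony_index(breaths):
    """Percentage of breaths flagged as asynchronous."""
    return 100 * sum(breaths) / len(breaths)

def sens_spec(pred, truth):
    """Sensitivity and specificity of detector output vs. manual labels."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    return tp / (tp + fn), tn / (tn + fp)

auto = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]    # hypothetical detector output
manual = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]  # hypothetical manual annotation
print(f"AI = {asynchrony_index(auto):.0f}%", sens_spec(auto, manual))
```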


Subject(s)
Respiration, Artificial/methods , Automation , Exhalation , Humans , Respiration
19.
Biomed Eng Online ; 13: 140, 2014 Sep 30.
Article in English | MEDLINE | ID: mdl-25270094

ABSTRACT

BACKGROUND: Real-time estimation of patient respiratory mechanics can be used to guide mechanical ventilation settings, particularly positive end-expiratory pressure (PEEP). This work presents software, Clinical Utilisation of Respiratory Elastance (CURE Soft), that uses a time-varying respiratory elastance model to offer this ability and aid mechanical ventilation treatment. IMPLEMENTATION: CURE Soft is a desktop application developed in Java. It has two modes of operation: 1) online, for real-time monitoring and decision support, and 2) offline, for user education, auditing, or reviewing patient care. CURE Soft has been tested in mechanically ventilated patients with respiratory failure. The clinical protocol, software testing and use of the data were approved by the New Zealand Southern Regional Ethics Committee. RESULTS AND DISCUSSION: Using CURE Soft, patients' respiratory mechanics responses to treatment and the clinical protocol were monitored. Results showed that patients' respiratory elastance (stiffness) changed with the use of muscle relaxants and responded differently to ventilator settings. This information can be used to guide mechanical ventilation therapy and titrate optimal ventilator PEEP. CONCLUSION: CURE Soft enables real-time calculation of model-based respiratory mechanics for mechanically ventilated patients. Results showed that the system is able to provide detailed, previously unavailable information on patient-specific respiratory mechanics and response to therapy in real time. The additional insight available to clinicians provides the potential for improved decision-making, and thus improved patient care and outcomes.
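One simple way a monitor like this can surface a response to treatment is by comparing the median breath-by-breath elastance before and after an intervention. A stdlib-only sketch with synthetic per-breath values (cmH2O/L); the change point and magnitudes are invented and this is not CURE Soft's actual algorithm:

```python
# Median elastance before/after an intervention from a per-breath series.
from statistics import median

def elastance_shift(series, event_index):
    """Medians before and after the intervention, and the change."""
    before = median(series[:event_index])
    after = median(series[event_index:])
    return before, after, after - before

E_per_breath = [32, 31, 33, 32, 34, 33,      # before muscle relaxant
                26, 25, 27, 26, 25, 26]      # after
before, after, delta = elastance_shift(E_per_breath, event_index=6)
print(f"median E: {before} -> {after} (change {delta:+})")
```

Medians keep the comparison robust to the occasional noisy breath, which matters for a bedside display updating in real time.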


Subject(s)
Respiratory Mechanics/physiology , Software , Humans , Positive-Pressure Respiration/methods , Respiration, Artificial/methods , Ventilators, Mechanical