1.
PLoS One ; 17(1): e0262193, 2022.
Article in English | MEDLINE | ID: covidwho-1606289

ABSTRACT

OBJECTIVE: To prospectively evaluate a logistic regression-based machine learning (ML) prognostic algorithm implemented in real time as a clinical decision support (CDS) system for symptomatic persons under investigation (PUI) for coronavirus disease 2019 (COVID-19) in the emergency department (ED). METHODS: Within a 12-hospital system, we developed a model using training and validation cohorts, followed by a real-time assessment. LASSO-guided feature selection included demographics, comorbidities, home medications, and vital signs. We constructed a logistic regression-based ML algorithm to predict "severe" COVID-19, defined as intensive care unit (ICU) admission, invasive mechanical ventilation, or death in or out of hospital. Training data included 1,469 adult patients who tested positive for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) within 14 days of acute care. We performed: 1) temporal validation in 414 SARS-CoV-2-positive patients; 2) validation in a PUI set of 13,271 patients with a symptomatic SARS-CoV-2 test during an acute care visit; and 3) real-time validation in 2,174 ED patients with a PUI test or positive SARS-CoV-2 result. Subgroup analysis was conducted across race and gender to ensure equity in performance. RESULTS: The algorithm performed well in pre-implementation validations for predicting COVID-19 severity: 1) the temporal validation had an area under the receiver operating characteristic curve (AUROC) of 0.87 (95% CI: 0.83, 0.91); 2) validation in the PUI population had an AUROC of 0.82 (95% CI: 0.81, 0.83). The ED CDS system performed well in real time, with an AUROC of 0.85 (95% CI: 0.83, 0.87). No patients in the lowest quintile developed "severe" COVID-19, whereas 33.2% of patients in the highest quintile did. The models performed without significant differences between genders and among races/ethnicities (all p-values > 0.05). 
CONCLUSION: A logistic regression-based, ML-enabled CDS can be developed, validated, and implemented with high performance across multiple hospitals while remaining equitable and maintaining performance in real-time validation.
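As a sketch of how a logistic-regression severity model like the one described above is applied at the bedside, the snippet below scores a patient and bins the result into risk quintiles. All feature names, coefficients, the intercept, and the quintile cutpoints are hypothetical placeholders; the abstract does not publish the fitted model.

```python
import math

# Hypothetical coefficients for illustration only -- the study's actual
# LASSO-selected features and fitted weights are not given in the abstract.
COEFS = {"age": 0.04, "spo2": -0.12, "resp_rate": 0.08, "diabetes": 0.45}
INTERCEPT = 6.0  # assumed value, chosen only so the example yields sane probabilities

def severity_probability(features):
    """P("severe" COVID-19) = sigmoid(intercept + sum of coefficient * feature)."""
    z = INTERCEPT + sum(COEFS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def risk_quintile(prob, cutpoints=(0.05, 0.12, 0.25, 0.45)):
    """Bin a predicted probability into risk quintiles 1..5 (cutpoints assumed)."""
    return 1 + sum(prob > c for c in cutpoints)
```

In a deployment like the one described, the quintile (rather than the raw probability) would drive the CDS display shown to ED clinicians.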


Subject(s)
COVID-19/diagnosis , Decision Support Systems, Clinical , Logistic Models , Machine Learning , Triage/methods , COVID-19/physiopathology , Emergency Service, Hospital , Humans , ROC Curve , Severity of Illness Index
2.
PLoS One ; 16(3): e0247773, 2021.
Article in English | MEDLINE | ID: covidwho-1575465

ABSTRACT

BACKGROUND: The coronavirus disease 2019 (COVID-19) pandemic has resulted in significant morbidity, severe acute respiratory failure and, subsequently, overcrowding of emergency departments (EDs) in a context of insufficient laboratory testing capacity. The development of decision support tools for real-time clinical diagnosis of COVID-19 is of prime importance to assist patient triage and to allocate resources for patients at risk. METHODS AND PRINCIPAL FINDINGS: From March 2 to June 15, 2020, during the first COVID-19 pandemic wave, the clinical patterns of suspected COVID-19 patients at admission to the EDs of Liège University Hospital were investigated, consisting of the recording of eleven symptoms (i.e. dyspnoea, chest pain, rhinorrhoea, sore throat, dry cough, wet cough, diarrhoea, headache, myalgia, fever and anosmia) plus age and gender. In total, 573 SARS-CoV-2 cases confirmed by qRT-PCR before mid-June 2020 and 1579 suspected cases that were subsequently determined to be qRT-PCR negative for SARS-CoV-2 were enrolled in this study. Using multivariate binary logistic regression, the two symptoms most relevant to COVID-19 were identified in addition to patient age: fever (odds ratio [OR] = 3.66; 95% CI: 2.97-4.50) and dry cough (OR = 1.71; 95% CI: 1.39-2.12), along with age older than 56.5 y (OR = 2.07; 95% CI: 1.67-2.58). Two additional symptoms (chest pain and sore throat) were significantly less associated with confirmed COVID-19 cases, each with OR = 0.73 (95% CI: 0.56-0.94). An overall OR-weighted score (OPS) was calculated using all significant predictors. A receiver operating characteristic (ROC) curve was generated, and the area under the ROC curve was 0.71 (95% CI: 0.68-0.73), supporting the use of the OPS to discriminate between COVID-19-confirmed and unconfirmed patients. The main predictors were confirmed using both sensitivity analysis and classification tree analysis. 
Interestingly, a significant negative correlation was observed between the OPS and the cycle threshold (Ct) values of the qRT-PCR. CONCLUSION AND MAIN SIGNIFICANCE: The proposed approach allows for the use of an interactive and adaptive clinical decision support tool. Using the clinical algorithm developed, a web-based user interface was created to help nurses and clinicians in EDs with the triage of patients during the second COVID-19 wave.
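The abstract reports the odds ratios but not the exact aggregation formula for the overall OR-weighted score (OPS); one plausible reading, sketched below, is to sum the odds ratios of the predictors present for a given patient. Treat the aggregation rule as an assumption; the ORs themselves are from the abstract.

```python
# Odds ratios from the abstract. The exact OPS formula is not published,
# so summing the ORs of the predictors present is an assumption.
ODDS_RATIOS = {
    "fever": 3.66,
    "dry_cough": 1.71,
    "age_over_56.5": 2.07,
    "chest_pain": 0.73,   # OR < 1: presence contributes less to the score
    "sore_throat": 0.73,
}

def ops(present):
    """Overall OR-weighted score: sum the odds ratio of each predictor present."""
    return sum(weight for name, weight in ODDS_RATIOS.items() if name in present)
```

A triage interface like the web tool described could then compare the OPS against a cutoff chosen from the ROC curve.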


Subject(s)
COVID-19 Testing , COVID-19/diagnosis , Decision Support Systems, Clinical , Adult , Aged , Cough/diagnosis , Dyspnea/diagnosis , Female , Fever/diagnosis , Headache/diagnosis , Hospitals , Humans , Male , Middle Aged , Pharyngitis/diagnosis , SARS-CoV-2/isolation & purification
4.
Stud Health Technol Inform ; 285: 31-38, 2021 Oct 27.
Article in English | MEDLINE | ID: covidwho-1502261

ABSTRACT

The COVID-19 pandemic has only accelerated the need and desire to deal more openly with mortality, because the effect on survival is central to the comprehensive assessment of harms and benefits needed to meet a 'reasonable patient' legal standard. Taking the view that this requirement is best met through a multi-criterial decision support tool, we offer our preferred answers to the questions of What should be communicated about mortality in the tool, and How, given preferred answers to Who for, Who by, Why, When, and Where. Summary measures, including unrestricted Life Expectancy and Restricted Mean Survival Time, are found to be reductionist and relative, and not as easy to understand and communicate as often asserted. Full lifetime absolute survival curves should be presented, even if they cannot be 'evidence-based' beyond trial follow-up limits, along with equivalent measures for the other criteria in the (necessarily) multi-criterial decision. A decision support tool should relieve the reasonable person of the resulting calculation burden.


Subject(s)
Advance Care Planning , Decision Support Systems, Clinical , COVID-19 , Humans , Pandemics
5.
West J Emerg Med ; 21(5): 1201-1210, 2020 Aug 24.
Article in English | MEDLINE | ID: covidwho-1456475

ABSTRACT

INTRODUCTION: For early detection of sepsis, automated systems within the electronic health record have evolved to alert emergency department (ED) personnel to the possibility of sepsis and, in some cases, link them to suggested care pathways. We conducted a systematic review of automated sepsis-alert detection systems in the ED. METHODS: We searched multiple health literature databases from the earliest available dates to August 2018. Articles were screened first by abstract, then by full manuscript, and further narrowed with set inclusion criteria: 1) adult patients in the ED diagnosed with sepsis, severe sepsis, or septic shock; 2) an electronic system that alerts a healthcare provider of sepsis in real or near-real time; and 3) measures of diagnostic accuracy or quality of sepsis alerts. The final, detailed review was guided by QUADAS-2 and GRADE criteria. We tracked all articles using an online tool (Covidence), and the review was registered with the PROSPERO registry of reviews. A two-author consensus was reached at the article selection stage and the final review stage. Because of the variation in alert criteria and in methods of confirming the sepsis diagnosis, the data were not combined for meta-analysis. RESULTS: We screened 693 articles by title and abstract and 20 by full text; we then selected 10 for the study. The articles were published between 2009 and 2018. Two studies had algorithm-based alert systems, while eight had rule-based alert systems. All systems used different criteria based on systemic inflammatory response syndrome (SIRS) to define sepsis. Sensitivities ranged from 10-100%, specificities from 78-99%, and positive predictive values from 5.8-54%. Negative predictive values were consistently high at 99-100%. Studies showed some evidence for improved process-of-care markers, including improved time to antibiotics. Length of stay improved in two studies. One low-quality study showed improved mortality. 
CONCLUSION: The limited evidence available suggests that sepsis alerts in the ED setting can be set to high sensitivity. No high-quality studies showed a difference in mortality, but evidence exists for improvements in process of care. Significant further work is needed to understand the consequences of alert fatigue and sensitivity set points.


Subject(s)
Decision Support Systems, Clinical/standards , Early Diagnosis , Emergency Service, Hospital/organization & administration , Sepsis/diagnosis , Critical Pathways , Humans , Quality Improvement
6.
Int J Med Inform ; 135: 104066, 2020 03.
Article in English | MEDLINE | ID: covidwho-1454190

ABSTRACT

IMPORTANCE: Anticoagulants are high-risk medications with the potential to cause significant patient harm or death. Digital transformation is occurring in hospital practice, and it is essential to implement effective, evidence-based strategies for these medications in an electronic medical record (EMR). OBJECTIVE: To systematically appraise the literature to determine which EMR interventions have improved the safety and quality of therapeutic anticoagulation in an inpatient hospital setting. METHODS: PubMed, Embase, CINAHL, and the International Pharmaceutical Database were searched for suitable publications. Articles that met eligibility criteria up to September 2018 were included. The review was registered with PROSPERO (CRD42018104899). The web-based software platform Covidence® was used for screening and data extraction. Studies were grouped according to the type of intervention and the outcomes measured. Where relevant, a bias assessment was performed. RESULTS: We found 2624 candidate articles, of which 27 met inclusion criteria. They included 3 randomised controlled trials, 4 cohort studies and 20 pre/post observational studies. There were four major interventions: computerised physician order entry (CPOE) (n = 4 studies), clinical decision support system (CDSS) methods (n = 21), dashboard utilisation (n = 1) and EMR implementation in general (n = 1). Seven outcomes were used to summarise the study results. Most research focused on prescribing or documentation compliance (n = 18). The remaining study outcome measures were: medication errors (n = 9), adverse drug events (n = 5), patient outcomes (morbidity/mortality/length of hospital stay/re-hospitalisation) (n = 5), quality use of anticoagulants (n = 4), end-user acceptance (n = 4), and cost effectiveness (n = 1). CONCLUSION: Despite the research cited, limited benefits have been demonstrated to date. It appears healthcare organisations have yet to determine optimal, evidence-based methods to improve EMR utilisation. 
Further evaluation, collaboration and work are necessary to measure and leverage the potential benefits of digital health systems. Most research evaluating therapeutic anticoagulation management within an EMR focused on prescribing or documentation compliance, with less focus on clinical impact on the patient or cost effectiveness. Evidence suggests that CPOE in conjunction with CDSS is needed to manage therapeutic anticoagulation effectively. Targets for robust research include the integration of 'stealth' alerts and nomograms into digital systems, and the use of dashboards within clinical practice.


Subject(s)
Anticoagulants/therapeutic use , Electronic Health Records , Anticoagulants/adverse effects , Decision Support Systems, Clinical , Humans , Inpatients , Medical Order Entry Systems , Medication Errors/prevention & control
7.
Sci Rep ; 11(1): 18464, 2021 09 16.
Article in English | MEDLINE | ID: covidwho-1415958

ABSTRACT

With the outbreak of COVID-19 exerting strong pressure on hospitals and health facilities, clinical decision support systems based on predictive models can help to effectively improve the management of the pandemic. We present a method for predicting mortality in COVID-19 patients. Starting from a large number of clinical variables, we select the six with the largest predictive power, using a feature selection method based on genetic algorithms applied to a set of COVID-19 patients from the first wave. The algorithm is designed to reduce the impact of missing values in the measured variables and to consider only variables that show good accuracy on validation data. The final predictive model achieves accuracy greater than 85% on test data, including a new patient cohort from the second COVID-19 wave and patients with imputed missing values. The selected clinical variables are confirmed to be relevant by recent literature on COVID-19.
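A genetic algorithm for feature selection of the kind described above can be sketched as follows. The fitness function here is a toy stand-in (in the study it would be a model's accuracy on held-out COVID-19 validation data), and the population size, generation count, and mutation rate are illustrative assumptions.

```python
import random

random.seed(0)

# Toy stand-in for validation accuracy: pretend features 1, 3 and 5 are
# "informative". In the paper, fitness would be model accuracy on held-out data.
INFORMATIVE = {1, 3, 5}

def fitness(mask):
    hits = sum(1 for i, bit in enumerate(mask) if bit and i in INFORMATIVE)
    noise = sum(1 for i, bit in enumerate(mask) if bit and i not in INFORMATIVE)
    return hits - 0.5 * noise  # reward informative features, penalise extras

def evolve(n_features=8, pop_size=20, generations=40, mut_rate=0.1):
    """Minimal genetic algorithm over binary feature-inclusion masks."""
    pop = [[random.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitism: keep the top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_features)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mut_rate) for bit in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

The bits of the winning mask indicate which clinical variables the search would retain.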


Subject(s)
COVID-19/mortality , Algorithms , Cohort Studies , Decision Support Systems, Clinical , Humans , Machine Learning , Models, Theoretical , Mortality
8.
Sensors (Basel) ; 21(18)2021 Sep 16.
Article in English | MEDLINE | ID: covidwho-1410904

ABSTRACT

Edge computing is a fast-growing and much-needed technology in healthcare. The problem with implementing artificial intelligence on edge devices is the complexity and high resource intensity of the best-known neural network methods and algorithms for data analysis. The difficulty of implementing these methods on low-power microcontrollers with small memory sizes calls for the development of new, effective algorithms for neural networks. This study presents a new method for analyzing medical data based on the LogNNet neural network, which uses chaotic mappings to transform input information. The method effectively solves classification problems and calculates risk factors for the presence of a disease in a patient according to a set of medical health indicators. The efficiency of LogNNet in assessing perinatal risk is illustrated on cardiotocogram data obtained from the UC Irvine machine learning repository. The classification accuracy reaches ~91%, with ~3-10 kB of RAM used on the Arduino microcontroller. Using the LogNNet network trained on a publicly available database of the Israeli Ministry of Health, a service concept for COVID-19 express testing is provided. A classification accuracy of ~95% is achieved, and ~0.6 kB of RAM is used. In all examples, the model is tested using standard classification quality metrics: precision, recall, and F1-measure. The LogNNet architecture allows the implementation of artificial intelligence on medical peripherals of the Internet of Things with low RAM resources and can be used in clinical decision support systems.
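The memory saving in a chaotic-mapping network comes from generating the first-layer weights from a deterministic map, so the matrix never needs to be stored, only regenerated on the device. A minimal sketch of that idea is below; the logistic-map parameters, rescaling, and tanh readout are assumptions for illustration, not the published LogNNet architecture.

```python
import math

def logistic_map_weights(n_inputs, n_reservoir, r=3.9, x0=0.1):
    """Fill a weight matrix from the chaotic logistic map x -> r*x*(1-x).
    Because the sequence is deterministic, the matrix is reproducible from
    (r, x0) alone -- nothing needs to be stored in RAM (illustrative params)."""
    x, weights = x0, []
    for _ in range(n_reservoir):
        row = []
        for _ in range(n_inputs):
            x = r * x * (1.0 - x)
            row.append(2.0 * x - 1.0)  # rescale map output to (-1, 1)
        weights.append(row)
    return weights

def transform(inputs, weights):
    """Project the input vector through the chaotic layer (tanh activation)."""
    return [math.tanh(sum(w * v for w, v in zip(row, inputs))) for row in weights]
```

A small trained output layer on top of `transform` would then perform the actual classification on the microcontroller.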


Subject(s)
COVID-19 , Decision Support Systems, Clinical , Artificial Intelligence , Data Analysis , Delivery of Health Care , Humans , SARS-CoV-2
9.
Yearb Med Inform ; 30(1): 172-175, 2021 Aug.
Article in English | MEDLINE | ID: covidwho-1392952

ABSTRACT

OBJECTIVES: To summarize research contributions published in 2020 in the field of clinical decision support systems (CDSS) and computerized provider order entry (CPOE), and to select the best papers for the Decision Support section of the International Medical Informatics Association (IMIA) Yearbook 2021. METHODS: Two bibliographic databases were searched for papers referring to clinical decision support systems. From the search results, section editors established a list of candidate best papers, which were then peer-reviewed by seven external reviewers. The IMIA Yearbook editorial committee finally selected the best papers on the basis of all reviews, including the section editors' evaluation. RESULTS: A total of 1,919 articles were retrieved. Fifteen best paper candidates were selected, and their reviews resulted in the selection of two best papers. One paper reports on the use of electronic health records to support a public health response to the COVID-19 pandemic in the United States. The second paper proposes a combination of CDSS and telemedicine as a technology-based intervention to improve the outcomes of depression as part of a cluster trial. CONCLUSIONS: As shown by the number and variety of works related to clinical decision support, research in the field is very active. This year's selection highlighted the application of CDSS to fight COVID-19 and a combined technology-based strategy to improve the treatment of depression.


Subject(s)
Decision Support Systems, Clinical , Medical Order Entry Systems , Telemedicine , COVID-19 , Depression/therapy , Humans
10.
Yearb Med Inform ; 30(1): 105-125, 2021 Aug.
Article in English | MEDLINE | ID: covidwho-1392946

ABSTRACT

OBJECTIVE: The year 2020 was dominated by the coronavirus disease 2019 (COVID-19) pandemic. The objective of this article is to review the areas in which clinical information systems (CIS) can be and have been utilized to support and enhance the response of healthcare systems to pandemics, focusing on COVID-19. METHODS: PubMed/MEDLINE, Google Scholar, the tables of contents of major informatics journals, and the bibliographies of articles were searched for studies pertaining to CIS, pandemics, and COVID-19 through October 2020. The most informative and detailed studies were highlighted, while many others were referenced. RESULTS: CIS were heavily relied upon by health systems and governmental agencies worldwide in response to COVID-19. Technology-based screening tools were developed to assist rapid case identification and appropriate triaging. Clinical care was supported by utilizing the electronic health record (EHR) to onboard frontline providers to new protocols, offer clinical decision support, and improve systems for diagnostic testing. Telehealth became the most rapidly adopted medical trend in recent history and an essential strategy for allowing safe and effective access to medical care. Artificial intelligence and machine learning algorithms were developed to enhance screening, diagnostic imaging, and predictive analytics, though evidence of improved outcomes remains limited. Geographic information systems and big data enabled real-time dashboards vital for epidemic monitoring, hospital preparedness strategies, and health policy decision making. Digital contact tracing systems were implemented to assist a labor-intensive task with the aim of curbing transmission. Large-scale data sharing, effective health information exchange, and interoperability of EHRs remain challenges for the informatics community with immense clinical and academic potential. 
CIS must be used in combination with engaged stakeholders and operational change management in order to meaningfully improve patient outcomes. CONCLUSION: Managing a pandemic requires widespread, timely, and effective distribution of reliable information. In the past year, CIS and informaticists made prominent and influential contributions in the global response to the COVID-19 pandemic.


Subject(s)
COVID-19 , Information Systems , Medical Informatics , Telemedicine , Artificial Intelligence , COVID-19/diagnosis , COVID-19 Testing , Contact Tracing , Decision Support Systems, Clinical , Electronic Health Records , Epidemics , Health Information Exchange , Health Information Interoperability , Humans , Information Dissemination
11.
J Mol Diagn ; 23(9): 1085-1096, 2021 09.
Article in English | MEDLINE | ID: covidwho-1370607

ABSTRACT

Widespread high-throughput testing for identification of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection by RT-PCR has been a foundation in the response to the coronavirus disease 2019 (COVID-19) pandemic. Quality assurance metrics for these RT-PCR tests are still evolving as testing is widely implemented. As testing increases, it is important to understand performance characteristics and the errors associated with these tests. Herein, we investigate a high-throughput, laboratory-developed SARS-CoV-2 RT-PCR assay to determine whether modeling can generate quality control metrics that identify false-positive (FP) results due to contamination. This study reviewed repeated clinical samples focusing on positive samples that test negative on re-extraction and PCR, likely representing false positives. To identify and predict false-positive samples, we constructed machine learning-derived models based on the extraction method used. These models identified variables associated with false-positive results across all methods, with sensitivities for predicting FP results ranging between 67% and 100%. Application of the models to all results predicted a total FP rate of 0.08% across all samples, or 2.3% of positive results, similar to reports for other RT-PCR tests for RNA viruses. These models can predict quality control parameters, enabling laboratories to generate decision trees that reduce interpretation errors, allow for automated reflex testing of samples with a high FP probability, improve workflow efficiency, and increase diagnostic accuracy for patient care.


Subject(s)
COVID-19 Nucleic Acid Testing/methods , RNA, Viral/isolation & purification , Reverse Transcriptase Polymerase Chain Reaction/methods , Automation, Laboratory , Carrier State/virology , Decision Support Systems, Clinical , False Positive Reactions , High-Throughput Nucleotide Sequencing/methods , Humans , Machine Learning , SARS-CoV-2/genetics , Viral Load , Workflow
12.
PLoS One ; 16(8): e0255383, 2021.
Article in English | MEDLINE | ID: covidwho-1357430

ABSTRACT

BACKGROUND: In 2019, a majority of runners participating in running events were female, and 49% were of childbearing age. Studies have reported that women are initiating or returning to running after childbirth, with up to 35% reporting pain. There are no studies exploring running-related pain or its risk factors in runners after childbirth. Postpartum runners have a variety of biomechanical, musculoskeletal, and physiologic impairments to recover from when returning to high-impact sports like running, which could influence initiating or returning to running. Therefore, the purpose of this study was to identify risk factors associated with running-related pain in postpartum runners with and without pain. This study also aimed to understand the compounding effects of multiple associative risk factors by developing a clinical decision tool to identify postpartum runners at higher risk for pain. METHODS: Postpartum runners with at least one child ≤36 months old who ran once a week, and postpartum runners unable to run because of pain but who identified as runners, were surveyed. Running variables (mileage, time to first postpartum run), postpartum variables (delivery type, breastfeeding, incontinence, sleep, fatigue, depression), and demographic information were collected. Risk factors for running-related pain were analyzed in bivariate regression models. Variables meeting criteria (P<0.15) were entered into a multivariate logistic regression model to create a clinical decision tool. The tool identified compounding factors that increased the probability of having running-related pain after childbirth. RESULTS: Analyses included 538 postpartum runners, 176 (32.7%) of whom reported running-related pain. 
Eleven variables were included in the multivariate model, with six retained in the clinical decision tool: novice runner type (OR 3.51; 95% CI 1.65, 7.48), postpartum accumulated fatigue score >19 (OR 2.48; 95% CI 1.44, 4.28), previous running injury (OR 1.95; 95% CI 1.31, 2.91), vaginal delivery (OR 1.63; 95% CI 1.06, 2.50), incontinence (OR 1.95; 95% CI 1.31, 2.84), and <6.8 hours of sleep on average per night (OR 1.89; 95% CI 1.28, 2.78). Having ≥4 risk factors increased the probability of having running-related pain to 61.2%. CONCLUSION: The results of this study provide a deeper understanding of the risk factors for running-related pain in postpartum runners. With this information, clinicians can monitor and educate postpartum runners initiating or returning to running. Education could include details of the risk factors, combinations of factors associated with pain, and strategies to mitigate risk. Coaches can adapt running workloads, accounting for fatigue and sleep fluctuations, to optimize recovery and performance. Future longitudinal studies that follow asymptomatic postpartum women returning to running after childbirth over time should be performed to validate these findings.
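A clinical decision tool of the kind described reduces, at the point of care, to counting which retained risk factors are present. In the sketch below the odds ratios come from the abstract, while the ≥4-factor counting rule is a simplification of how such a tool might be applied, not the study's exact scoring procedure.

```python
# Odds ratios reported in the abstract for the six retained risk factors.
RISK_FACTORS = {
    "novice_runner": 3.51,
    "fatigue_score_over_19": 2.48,
    "previous_running_injury": 1.95,
    "vaginal_delivery": 1.63,
    "incontinence": 1.95,
    "sleep_under_6.8h": 1.89,
}

def count_risk_factors(profile):
    """Count how many of the six retained risk factors are present."""
    return sum(1 for factor in RISK_FACTORS if profile.get(factor, False))

def flag_higher_risk(profile, threshold=4):
    """The abstract reports that >=4 factors raised the probability of
    running-related pain to 61.2%; the threshold rule itself is a sketch."""
    return count_risk_factors(profile) >= threshold
```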


Subject(s)
Pain/epidemiology , Postpartum Period/psychology , Running/physiology , Adult , Cross-Sectional Studies , Decision Support Systems, Clinical , Female , Humans , Logistic Models , Pain/etiology , Postpartum Period/physiology , Regression Analysis , Risk Factors , Running/psychology
13.
JAMA Netw Open ; 4(7): e2117809, 2021 07 01.
Article in English | MEDLINE | ID: covidwho-1320051

ABSTRACT

Importance: Hospitalized children are at increased risk of influenza-related complications, yet influenza vaccine coverage remains low among this group. Evidence-based strategies about vaccination of vulnerable children during all health care visits are especially important during the COVID-19 pandemic. Objective: To design and evaluate a clinical decision support (CDS) strategy to increase the proportion of eligible hospitalized children who receive a seasonal influenza vaccine prior to inpatient discharge. Design, Setting, and Participants: This quality improvement study was conducted among children eligible for the seasonal influenza vaccine who were hospitalized in a tertiary pediatric health system providing care to more than half a million patients annually in 3 hospitals. The study used a sequential crossover design from control to intervention and compared hospitalizations in the intervention group (2019-2020 season with the use of an intervention order set) with concurrent controls (2019-2020 season without use of an intervention order set) and historical controls (2018-2019 season with use of an order set that underwent intervention during the 2019-2020 season). Interventions: A CDS intervention was developed through a user-centered design process, including (1) placing a default influenza vaccine order into admission order sets for eligible patients, (2) a script to offer the vaccine using a presumptive strategy, and (3) just-in-time education for clinicians addressing vaccine eligibility in the influenza order group with links to further reference material. The intervention was rolled out in a stepwise fashion during the 2019-2020 influenza season. Main Outcomes and Measures: Proportion of eligible hospitalizations in which 1 or more influenza vaccines were administered prior to discharge. 
Results: Among 17 740 hospitalizations (9295 boys [52%]), the mean (SD) age was 8.0 (6.0) years, and the patients were predominantly Black (n = 8943 [50%]) or White (n = 7559 [43%]) and mostly had public insurance (n = 11 274 [64%]). There were 10 997 hospitalizations eligible for the influenza vaccine in the 2019-2020 season. Of these, 5449 (50%) were in the intervention group, and 5548 (50%) were concurrent controls. There were 6743 eligible hospitalizations in 2018-2019 that served as historical controls. Vaccine administration rates were 31% (n = 1676) in the intervention group, 19% (n = 1051) in concurrent controls, and 14% (n = 912) in historical controls (P < .001). In adjusted analyses, the odds of receiving the influenza vaccine were 3.25 (95% CI, 2.94-3.59) times higher in the intervention group and 1.28 (95% CI, 1.15-1.42) times higher in concurrent controls than in historical controls. Conclusions and Relevance: This quality improvement study suggests that user-centered CDS may be associated with significantly improved influenza vaccination rates among hospitalized children. Stepwise implementation of CDS interventions was a practical method that was used to increase quality improvement rigor through comparison with historical and concurrent controls.


Subject(s)
Child, Hospitalized , Decision Support Systems, Clinical , Influenza Vaccines , Influenza, Human/prevention & control , Patient Discharge , Vaccination Coverage , Adolescent , COVID-19 , Child , Child, Preschool , Cross-Over Studies , Humans , Pandemics , Patient Selection , Pediatrics , SARS-CoV-2 , Seasons , Vaccination
14.
Nutrients ; 13(6)2021 Jun 20.
Article in English | MEDLINE | ID: covidwho-1273495

ABSTRACT

Clinical decision support systems (CDSS) are computer-based data aggregation tools that assist clinicians in promoting healthy weight management and the prevention of cardiovascular diseases. We carried out a randomised controlled 3-month trial to implement lifestyle modifications in breast cancer (BC) patients by means of a CDSS during the COVID-19 pandemic. In total, 55 BC women at stages I-IIIA were enrolled. They were randomly assigned either to the Control group, receiving general lifestyle advice (n = 28), or to the CDSS group (n = 27), for whom the CDSS provided personalised dietary plans based on the Mediterranean diet (MD) together with physical activity guidelines. Food data, anthropometry, blood markers and quality of life were evaluated. At 3 months, higher adherence to the MD was recorded in the CDSS group, accompanied by lower body weight and body fat mass percentage compared to control (p < 0.001). In the CDSS arm, global health/quality of life was significantly improved at the trial endpoint (p < 0.05). Fasting blood glucose and lipid levels (i.e., cholesterol, LDL, triacylglycerols) in the CDSS arm remained unchanged (p > 0.05) but were elevated in the control arm at 3 months (p < 0.05). In conclusion, a CDSS could be a promising tool to assist BC patients with lifestyle modifications during the COVID-19 pandemic.


Subject(s)
Breast Neoplasms , COVID-19 , Decision Support Systems, Clinical , Diet, Mediterranean , Life Style , Obesity/prevention & control , Pandemics , Adipose Tissue/metabolism , Adult , Behavior Therapy , Blood Glucose/metabolism , Body Mass Index , Body Weight , Cholesterol, LDL/blood , Female , Health Status , Humans , Middle Aged , Obesity/etiology , Patient Compliance , Quality of Life , SARS-CoV-2 , Triglycerides/blood
15.
Chest ; 160(4): 1222-1231, 2021 10.
Article in English | MEDLINE | ID: covidwho-1248852

ABSTRACT

BACKGROUND: The Hospitalization or Outpatient Management of Patients With SARS-CoV-2 Infection (HOME-CoV) rule is a checklist of eligibility criteria for home treatment of patients with COVID-19, defined using a Delphi method. RESEARCH QUESTION: Is the HOME-CoV rule reliable for identifying a subgroup of COVID-19 patients with a low risk of adverse outcomes who can be treated at home safely? STUDY DESIGN AND METHODS: We aimed to validate the HOME-CoV rule in a prospective, multicenter, before-and-after study of patients with probable or confirmed COVID-19 who sought treatment at the EDs of 34 hospitals. The main outcome was an adverse evolution, that is, invasive ventilation or death, occurring within 7 days of patient admission. The performance of the rule was assessed by its false-negative rate. The impact of implementing the rule was assessed by the absolute differences, between an observational period and an interventional period after implementation of the HOME-CoV rule, in the rate of patients who required invasive ventilation or died and in the rate of patients treated at home, with propensity score adjustment. RESULTS: Among 3,000 prospectively enrolled patients, 1,239 (41.3%) demonstrated a negative HOME-CoV rule finding. The false-negative rate of the HOME-CoV rule was 4 in 1,239 (0.32%; 95% CI, 0.13%-0.84%), and its area under the receiver operating characteristic curve was 80.9 (95% CI, 76.5-85.2). In the adjusted populations, 25 of 1,274 patients (1.95%) experienced an adverse evolution during the observational period vs 12 of 1,274 patients (0.95%) during the interventional period: -1.00 (95% CI, -1.86 to -0.15). During the observational period, 858 patients (67.35%) were treated at home vs 871 patients (68.37%) during the interventional period: -1.02 (95% CI, -4.46 to 2.26). 
INTERPRETATION: A large proportion of patients treated in the ED with probable or confirmed COVID-19 have a negative HOME-CoV rule finding and can be treated safely at home with a very low risk of complications. TRIAL REGISTRY: ClinicalTrials.gov; No.: NCT04338841; URL: www.clinicaltrials.gov.
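The headline safety figure in the abstract above, a false-negative rate of 4 in 1,239 with a 95% CI of 0.13%-0.84%, can be approximated with a standard Wilson score interval for a binomial proportion. This is a sketch, not the authors' code, and the interval method they actually used is not stated in the abstract, so the upper bound may differ slightly.

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# False-negative rate reported in the HOME-CoV validation: 4 of 1,239.
fn_rate = 4 / 1239
lo, hi = wilson_ci(4, 1239)
print(f"FN rate {fn_rate:.2%}, 95% CI {lo:.2%}-{hi:.2%}")
```

With these inputs the point estimate reproduces the reported 0.32%, and the Wilson bounds land very close to the published 0.13%-0.84%.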


Subject(s)
Ambulatory Care/methods , COVID-19/therapy , Decision Support Systems, Clinical , Disease Management , Hospitalization/trends , Outpatients , SARS-CoV-2 , Female , Humans , Male , Middle Aged , Patient Discharge/trends
16.
Front Public Health ; 9: 626697, 2021.
Article in English | MEDLINE | ID: covidwho-1247939

ABSTRACT

The coronavirus disease 2019 (COVID-19), caused by the virus SARS-CoV-2, is an acute respiratory disease that has been classified as a pandemic by the World Health Organization (WHO). The sudden spike in the number of infections and high mortality rates have put immense pressure on public healthcare systems. Hence, it is crucial to identify the key factors for mortality prediction to optimize patient treatment strategy. Routine blood test results are widely available compared with other forms of data, such as X-rays, CT scans, and ultrasounds, for mortality prediction. This study proposes machine learning (ML) methods based on blood test data to predict COVID-19 mortality risk. A combination of five features (neutrophils, lymphocytes, lactate dehydrogenase (LDH), high-sensitivity C-reactive protein (hs-CRP), and age) predicts mortality with 96% accuracy. Various ML models (neural networks, logistic regression, XGBoost, random forests, SVM, and decision trees) were trained and their performance compared to determine the model that achieves consistently high accuracy across the days that span the disease. The best-performing method, using XGBoost feature importance and neural network classification, predicts with an accuracy of 90% as early as 16 days before the outcome. Robust testing with three cases based on days to outcome confirms the strong predictive performance and practicality of the proposed model. A detailed analysis and identification of trends was performed using these key biomarkers to provide useful insights for intuitive application. This study provides solutions that would help accelerate decision-making in healthcare systems for focused medical treatment in an accurate, early, and reliable manner.
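The two-stage pipeline described above, tree-based feature importance to rank the five biomarkers followed by a neural network classifier, can be sketched as follows. Everything here is an illustrative assumption rather than the authors' code: the data are synthetic, and scikit-learn's GradientBoostingClassifier and MLPClassifier stand in for XGBoost and the paper's network.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = ["neutrophils", "lymphocytes", "LDH", "hs-CRP", "age"]
n = 1000
X = rng.normal(size=(n, len(features)))
# Synthetic outcome: mortality risk loads mainly on LDH, hs-CRP, and age.
logit = 1.5 * X[:, 2] + 1.2 * X[:, 3] + 0.8 * X[:, 4]
y = (logit + rng.normal(scale=0.5, size=n)) > 1.0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: rank features by tree-based importance (XGBoost stand-in).
gbm = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
ranked = sorted(zip(features, gbm.feature_importances_), key=lambda t: -t[1])
print("importance ranking:", [name for name, _ in ranked])

# Stage 2: train a small neural network classifier on the features.
nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
nn.fit(X_tr, y_tr)
acc = nn.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

Because the synthetic signal is planted on LDH, hs-CRP, and age, the importance ranking recovers those features at the top, mirroring how the study narrows a full blood panel down to a handful of predictive biomarkers.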


Subject(s)
COVID-19 , Decision Support Systems, Clinical , Humans , Machine Learning , Neural Networks, Computer , SARS-CoV-2
17.
Am J Health Syst Pharm ; 78(21): 1968-1976, 2021 10 25.
Article in English | MEDLINE | ID: covidwho-1246684

ABSTRACT

PURPOSE: The purpose of this manuscript is to describe our experience developing an antimicrobial stewardship (AS) module as a clinical decision support tool in the Epic electronic health record (EHR). SUMMARY: Clinical decision support systems within the EHR can be used to decrease use of broad-spectrum antibiotics, improve antibiotic selection and dosing, decrease adverse effects, reduce antibiotic costs, and reduce the development of antibiotic resistance. The Johns Hopkins Hospital constructed an AS module within Epic. Customized stewardship alerts and scoring systems were developed to triage patients requiring stewardship intervention. This required a multidisciplinary approach with a team comprising AS physicians and pharmacists and Epic information technology personnel, with assistance from clinical microbiology and infection control when necessary. In addition, an intervention database was enhanced with stewardship-specific interventions, and workbench reports were developed specific to AS needs. We herein review the process, advantages, and challenges associated with the development of the Epic AS module. CONCLUSION: Customizing an AS module in an EHR requires significant time and expertise in antimicrobials; however, AS modules have the potential to improve the efficiency of AS personnel in performing daily stewardship activities and reporting through a single system.


Subject(s)
Anti-Infective Agents , Antimicrobial Stewardship , Decision Support Systems, Clinical , Anti-Bacterial Agents/therapeutic use , Electronic Health Records , Humans
18.
Am J Health Syst Pharm ; 78(Supplement_3): S88-S94, 2021 Aug 30.
Article in English | MEDLINE | ID: covidwho-1238180

ABSTRACT

PURPOSE: Automatic therapeutic substitution (ATS) protocols are formulary tools that allow for provider-selected interchange from a nonformulary preadmission medication to a formulary equivalent. Previous studies have demonstrated that the application of clinical decision support (CDS) tools to ATS can decrease ATS errors at admission, but there are limited data describing the impact of CDS on discharge errors. The objective of this study was to describe the impact of CDS-supported interchanges on discharge prescription duplications or omissions. METHODS: This was a single-center, retrospective cohort study conducted at an academic medical center. Patients admitted between June 2017 and August 2019 were included if they were 18 years or older at admission, underwent an ATS protocol-approved interchange for 1 of the 9 included medication classes, and had a completed discharge medication reconciliation. The primary outcome was difference in incidence of therapeutic duplication or omission at discharge between the periods before and after CDS implementation. RESULTS: A total of 737 preimplementation encounters and 733 postimplementation encounters were included. CDS did not significantly decrease the incidence of discharge duplications or omissions (12.1% vs 11.2%; 95% confidence interval [CI], -2.3% to 4.2%) nor the incidence of admission duplication or inappropriate reconciliation (21.4% vs 20.7%; 95% CI, -3.4% to 4.8%) when comparing the pre- and postimplementation periods. Inappropriate reconciliation was the primary cause of discharge medication errors for both groups. CONCLUSION: CDS implementation was not associated with a decrease in discharge omissions, duplications, or inappropriate reconciliation. Findings highlight the need for thoughtful medication reconciliation at the point of discharge.


Subject(s)
Decision Support Systems, Clinical , Patient Discharge , Hospitals , Humans , Medication Reconciliation , Retrospective Studies
19.
Recenti Prog Med ; 112(5): 387-391, 2021 05.
Article in Italian | MEDLINE | ID: covidwho-1232492

ABSTRACT

INTRODUCTION: The unprecedented COVID-19 pandemic has exposed the weaknesses of health systems and opened new spaces for e-health and telemedicine. Recent literature suggests that chatbots, if implemented effectively, could be useful tools for quickly sharing information, promoting healthy behaviors, and helping reduce the psychological burden of isolation. The aim of this project was to develop and test a secure and reliable computerized decision support system (CDSS) as a web app and to evaluate its use, usability, and outputs in a pre-specified way. METHODS: A multidisciplinary team was recruited to plan and design, based on the SMASS medical CDSS, the scenarios of the COVID-Guide web app, a self-triage system for patients with suspected COVID-19. Output data for the period May-September 2020 from Germany were analyzed. RESULTS: During the period under review, the total number of consultations in Germany was 96,012. Of these, 3,415 (3.56%) indicated the need for immediate evaluation, either by activating the emergency service (calling an ambulance; 1,942 consultations, 2.02%) or by advising the patient to go to the hospital (1,743 consultations, 1.54%). CONCLUSIONS: The data seem to show good usability and a consistent number of consultations carried out. Regular use of COVID-Guide could help collect epidemiological data on the spread of (suspected) COVID-19 cases, easily and quickly available in all countries where the tool is used. Using the CDSS could help reduce the load on healthcare operators. Furthermore, the use of anonymous, geolocatable clinical data together with the alerts and indicators generated by COVID-Guide could make it a useful tool for epidemiological surveillance in future phases of the pandemic (telemedical syndromic surveillance).


Subject(s)
COVID-19/therapy , Decision Support Systems, Clinical , Mobile Applications , Triage/methods , Ambulances/statistics & numerical data , COVID-19/epidemiology , Germany/epidemiology , Hospitalization/statistics & numerical data , Humans , Pandemics
20.
Sci Rep ; 11(1): 9626, 2021 05 05.
Article in English | MEDLINE | ID: covidwho-1217712

ABSTRACT

Early classification and risk assessment of COVID-19 patients are critical for improving their prognosis and preventing them from deteriorating into a severe or critical condition. We performed a retrospective study of 222 COVID-19 patients in Wuhan treated between January 23 and February 28, 2020. A decision tree algorithm was established, and multivariable logistic regression and cluster analyses were performed to assess the predictive value of the presumptive clinical diagnosis and of features including characteristic signs and symptoms of COVID-19 patients. Therapeutic efficacy was evaluated with Kaplan-Meier survival curve analysis and Cox regression. The 222 patients clustered into two groups: cluster I (common type) and cluster II (high-risk type). High-risk cases can be identified from their clinical characteristics, including age > 50 years and chest CT images with multiple ground-glass or exudative opacities. Based on the classification and risk factor analyses, a decision tree algorithm and a management flow chart were established, which can help recognize individuals who need hospitalization and improve the clinical prognosis of COVID-19 patients. Our risk factor analysis and suggested management process are useful for improving overall clinical prognosis and optimizing the use of public health resources during treatment of COVID-19 patients.
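The triage idea in the abstract above, a shallow decision tree separating common-type from high-risk patients on characteristics such as age > 50 years and a CT finding, can be sketched as below. The feature set, labels, and data are illustrative assumptions on synthetic patients, not the study's actual algorithm or cohort.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 500
age = rng.integers(20, 90, size=n)
multiple_gg_opacities = rng.integers(0, 2, size=n)  # CT finding, 0/1
X = np.column_stack([age, multiple_gg_opacities])
# Synthetic label mirroring the reported high-risk pattern:
# age > 50 years combined with the CT finding.
y = ((age > 50) & (multiple_gg_opacities == 1)).astype(int)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# Print the learned splits as a human-readable flow chart.
print(export_text(tree, feature_names=["age", "multiple_GG_opacities"]))
```

A depth-2 tree suffices here because the synthetic rule is a conjunction of two thresholds; the printed tree doubles as the kind of management flow chart the study derives, with each leaf mapping to a disposition (home monitoring vs hospitalization).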


Subject(s)
COVID-19/drug therapy , Aged , Antiviral Agents/therapeutic use , COVID-19/epidemiology , COVID-19/etiology , COVID-19/therapy , China/epidemiology , Cluster Analysis , Comorbidity , Decision Support Systems, Clinical , Female , Humans , Kaplan-Meier Estimate , Male , Middle Aged , Prognosis , Retrospective Studies , Treatment Outcome