Results 1 - 20 of 156
1.
BMC Med Inform Decis Mak ; 24(1): 188, 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38965569

ABSTRACT

BACKGROUND: Medication errors and associated adverse drug events (ADE) are a major cause of morbidity and mortality worldwide. In recent years, the prevention of medication errors has become a high priority in healthcare systems. In order to improve medication safety, computerized Clinical Decision Support Systems (CDSS) are increasingly being integrated into the medication process. Accordingly, a growing number of studies have investigated the medication safety-related effectiveness of CDSS. However, the outcome measures used are heterogeneous, leading to unclear evidence. The primary aim of this study is to summarize and categorize the outcomes used in interventional studies evaluating the effects of CDSS on medication safety in primary and long-term care. METHODS: We systematically searched PubMed, Embase, CINAHL, and Cochrane Library for interventional studies evaluating the effects of CDSS targeting medication safety and patient-related outcomes. We extracted methodological characteristics, outcomes and empirical findings from the included studies. Outcomes were assigned to three main categories: process-related, harm-related, and cost-related. Risk of bias was assessed using the Evidence Project risk of bias tool. RESULTS: Thirty-two studies met the inclusion criteria. Almost all studies (n = 31) used process-related outcomes, followed by harm-related outcomes (n = 11). Only three studies used cost-related outcomes. Most studies used outcomes from only one category and no study used outcomes from all three categories. The definition and operationalization of outcomes varied widely between the included studies, even within outcome categories. Overall, evidence on CDSS effectiveness was mixed. A significant intervention effect was demonstrated by nine of fifteen studies with process-related primary outcomes (60%) but only one out of five studies with harm-related primary outcomes (20%). 
The included studies faced a number of methodological problems that limit the comparability and generalizability of their results. CONCLUSIONS: Evidence on the effectiveness of CDSS is currently inconclusive due in part to inconsistent outcome definitions and methodological problems in the literature. Additional high-quality studies are therefore needed to provide a comprehensive account of CDSS effectiveness. These studies should follow established methodological guidelines and recommendations and use a comprehensive set of harm-, process- and cost-related outcomes with agreed-upon and consistent definitions. PROSPERO REGISTRATION: CRD42023464746.


Subject(s)
Decision Support Systems, Clinical , Long-Term Care , Medication Errors , Primary Health Care , Humans , Decision Support Systems, Clinical/standards , Medication Errors/prevention & control , Long-Term Care/standards , Primary Health Care/standards , Patient Safety/standards , Drug-Related Side Effects and Adverse Reactions/prevention & control , Outcome Assessment, Health Care
2.
BMJ Health Care Inform ; 31(1)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38955390

ABSTRACT

BACKGROUND: The detrimental repercussions of the COVID-19 pandemic on the quality of care and clinical outcomes for patients with acute coronary syndrome (ACS) necessitate a rigorous re-evaluation of prognostic prediction models in the context of the pandemic environment. This study aimed to elucidate the adaptability of prediction models for 30-day mortality in patients with ACS during the pandemic periods. METHODS: A total of 2041 consecutive patients with ACS were included from 32 institutions between December 2020 and April 2023. The dataset comprised patients who were admitted for ACS and underwent coronary angiography for the diagnosis during hospitalisation. The prediction accuracy of the Global Registry of Acute Coronary Events (GRACE) and a machine learning model, KOTOMI, was evaluated for 30-day mortality in patients with ST-elevation acute myocardial infarction (STEMI) and non-ST-elevation acute coronary syndrome (NSTE-ACS). RESULTS: The area under the receiver operating characteristics curve (AUROC) was 0.85 (95% CI 0.81 to 0.89) for the GRACE and 0.87 (95% CI 0.82 to 0.91) for the KOTOMI in STEMI. The difference of 0.020 (95% CI -0.098 to 0.13) was not significant. For NSTE-ACS, the respective AUROCs were 0.82 (95% CI 0.73 to 0.91) for the GRACE and 0.83 (95% CI 0.74 to 0.91) for the KOTOMI, also a nonsignificant difference of 0.010 (95% CI -0.023 to 0.25). The prediction accuracy of both models was consistent in patients with STEMI and varied nonsignificantly in patients with NSTE-ACS across the pandemic periods. CONCLUSIONS: The prediction models maintained high accuracy for 30-day mortality in patients with ACS even during the pandemic periods, despite the marginal variation observed.
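The AUROC values reported above have a simple rank-based interpretation: the probability that a randomly chosen patient who died within 30 days receives a higher risk score than a randomly chosen survivor. A minimal sketch of that computation, using toy scores (not study data):

```python
def auroc(labels, scores):
    """Rank-based AUROC: probability that a random positive case
    scores above a random negative case (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one case from each class")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy example with made-up risk scores (1 = died within 30 days):
y = [0, 0, 0, 1, 1]
s = [0.1, 0.4, 0.35, 0.8, 0.7]
print(auroc(y, s))  # -> 1.0 (perfect separation on this toy data)
```

The O(P×N) pairwise loop is fine for a sketch; production code would use a rank-sum formulation or a library routine.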


Subject(s)
Acute Coronary Syndrome , COVID-19 , Humans , Acute Coronary Syndrome/mortality , COVID-19/epidemiology , COVID-19/mortality , Female , Male , Prognosis , Aged , Middle Aged , Machine Learning , SARS-CoV-2 , ST Elevation Myocardial Infarction/mortality , Coronary Angiography , ROC Curve , Registries , Pandemics
3.
BMJ Health Care Inform ; 31(1)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38901863

ABSTRACT

OBJECTIVES: Risk stratification tools that predict healthcare utilisation are extensively integrated into primary care systems worldwide, forming a key component of anticipatory care pathways, where high-risk individuals are targeted by preventative interventions. Existing work broadly focuses on comparing model performance in retrospective cohorts with little attention paid to efficacy in reducing morbidity when deployed in different global contexts. We review the evidence supporting the use of such tools in real-world settings, from retrospective dataset performance to pathway evaluation. METHODS: A systematic search was undertaken to identify studies reporting the development, validation and deployment of models that predict healthcare utilisation in unselected primary care cohorts, comparable to their current real-world application. RESULTS: Among 3897 articles screened, 51 studies were identified evaluating 28 risk prediction models. Half underwent external validation yet only two were validated internationally. No association between validation context and model discrimination was observed. The majority of real-world evaluation studies reported no change, or indeed significant increases, in healthcare utilisation within targeted groups, with only one-third of reports demonstrating some benefit. DISCUSSION: While model discrimination appears satisfactorily robust to application context there is little evidence to suggest that accurate identification of high-risk individuals can be reliably translated to improvements in service delivery or morbidity. CONCLUSIONS: The evidence does not support further integration of care pathways with costly population-level interventions based on risk prediction in unselected primary care cohorts. There is an urgent need to independently appraise the safety, efficacy and cost-effectiveness of risk prediction systems that are already widely deployed within primary care.


Subject(s)
Algorithms , Patient Acceptance of Health Care , Primary Health Care , Humans , Risk Assessment , Patient Acceptance of Health Care/statistics & numerical data
4.
Circ Cardiovasc Qual Outcomes ; : e010359, 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38318703

ABSTRACT

BACKGROUND: There are multiple risk assessment models (RAMs) for venous thromboembolism prophylaxis, but it is unknown whether they increase appropriate prophylaxis. METHODS: To determine the impact of a RAM embedded in the electronic health record, we conducted a stepped-wedge hospital-level cluster-randomized trial from October 1, 2017 to February 28, 2019 at 10 Cleveland Clinic hospitals. We included consecutive general medical patients aged 18 years or older. Patients were excluded if they had a contraindication to prophylaxis, including anticoagulation for another condition, acute bleeding, or comfort-only care. A RAM was embedded in the general admission order set and physicians were encouraged to use it. The decisions to use the RAM and act on the results were reserved to the treating physician. The primary outcome was the percentage of patients receiving appropriate prophylaxis (high-risk patients with pharmacological thromboprophylaxis plus low-risk patients without prophylaxis) within 48 hours of hospitalization. Secondary outcomes included total patients receiving prophylaxis, venous thromboembolism among high-risk patients at 14 and 45 days, major bleeding, heparin-induced thrombocytopenia, and length of stay. Mixed-effects models were used to analyze the study outcomes. RESULTS: A total of 26 506 patients (mean age, 61; 52% female; 73% White) were analyzed, including 11 134 before and 15 406 after implementation of the RAM. After implementation, the RAM was used for 24% of patients, and the percentage of patients receiving appropriate prophylaxis increased from 43.1% to 48.8% (adjusted odds ratio, 1.11 [1.00-1.23]), while overall prophylaxis use decreased from 73.5% to 65.2% (adjusted odds ratio, 0.87 [0.78-0.97]). Rates of venous thromboembolism among high-risk patients (adjusted odds ratio, 0.72 [0.38-1.36]), rates of bleeding and heparin-induced thrombocytopenia (adjusted odds ratio, 0.19 [0.02-1.47]), and length of stay were unchanged.
CONCLUSIONS: Implementation of a RAM for venous thromboembolism increased appropriate prophylaxis use, but the RAM was used for a minority of patients. REGISTRATION: URL: https://www.clinicaltrials.gov/study/NCT03243708?term=nct03243708&rank=1; Unique identifier: NCT03243708.

5.
Br J Clin Pharmacol ; 90(4): 1152-1161, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38294057

ABSTRACT

AIMS: We aim to examine and understand the work processes of antimicrobial stewardship (AMS) teams across 2 hospitals that use the same digital intervention, and to identify the barriers and enablers to effective AMS in each setting. METHODS: Employing a contextual inquiry approach informed by the Systems Engineering Initiative for Patient Safety (SEIPS) model, observations and semistructured interviews were conducted with AMS team members (n = 15) in 2 Australian hospitals. Qualitative data analysis was conducted, mapping themes to the SEIPS framework. RESULTS: Both hospitals utilized similar systems, however, they displayed variations in AMS processes, particularly in postprescription review, interdepartmental AMS meetings and the utilization of digital tools. An antimicrobial dashboard was available at both hospitals but was utilized more at the hospital where the AMS team members were involved in the dashboard's development, and there were user champions. At the hospital where the dashboard was utilized less, participants were unaware of key features, and interoperability issues were observed. Establishing strong relationships between the AMS team and prescribers emerged as key to effective AMS at both hospitals. However, organizational and cultural differences were found, with 1 hospital reporting insufficient support from executive leadership, increased prescriber autonomy and resource constraints. CONCLUSION: Organizational and cultural elements, such as executive support, resource allocation and interdepartmental relationships, played a crucial role in achieving AMS goals. System interoperability and user champions further promoted the adoption of digital tools, potentially improving AMS outcomes through increased user engagement and acceptance.


Subject(s)
Anti-Infective Agents , Antimicrobial Stewardship , Humans , Australia , Hospitals , Qualitative Research
6.
Einstein (São Paulo) ; 22: eAO0328, 2024. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1534330

ABSTRACT

Objective: To develop and validate predictive models to estimate the number of COVID-19 patients hospitalized in the intensive care units and general wards of a private not-for-profit hospital in São Paulo, Brazil. Methods: Two main models were developed. The first model calculated hospital occupation as the difference between predicted COVID-19 patient admissions, transfers between departments, and discharges, estimating admissions based on their weekly moving averages, segmented by general wards and intensive care units. Patient discharge predictions were based on a length of stay predictive model, assessing the clinical characteristics of patients hospitalized with COVID-19, including age group and usage of mechanical ventilation devices. The second model estimated hospital occupation based on the correlation with the number of telemedicine visits by patients diagnosed with COVID-19, utilizing correlational analysis to define the lag that maximized the correlation between the studied series. Both models were monitored for 365 days, from May 20th, 2021, to May 20th, 2022. Results: The first model predicted the number of hospitalized patients by department within an interval of up to 14 days. The second model estimated the total number of hospitalized patients for the following 8 days, considering calls attended by Hospital Israelita Albert Einstein's telemedicine department. Considering the average daily predicted values for the intensive care unit and general ward across a forecast horizon of 8 days, as limited by the second model, the first and second models obtained R² values of 0.900 and 0.996, respectively, and mean absolute errors of 8.885 and 2.524 beds, respectively. The performances of both models were monitored using the mean error, mean absolute error, and root mean squared error as a function of the forecast horizon in days.
Conclusion: The model based on telemedicine use was the most accurate in the current analysis and was used to estimate COVID-19 hospital occupancy 8 days in advance, validating predictions of this nature in similar clinical contexts. The results encourage the expansion of this method to other pathologies, aiming to guarantee the standards of hospital care and conscious consumption of resources.
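The three monitoring metrics named above (mean error, mean absolute error, root mean squared error) are each a one-line computation over paired actual/predicted series. A sketch with hypothetical occupancy numbers (not the hospital's data):

```python
import math

def error_metrics(actual, predicted):
    """Mean error (bias), mean absolute error and root mean squared
    error for paired actual/predicted series of equal length."""
    errs = [p - a for a, p in zip(actual, predicted)]
    n = len(errs)
    me = sum(errs) / n
    mae = sum(abs(e) for e in errs) / n
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    return me, mae, rmse

# Hypothetical daily bed occupancy vs an 8-day-ahead forecast:
actual = [100, 104, 110, 108, 112, 115, 113, 118]
predicted = [98, 105, 108, 110, 111, 117, 112, 120]
me, mae, rmse = error_metrics(actual, predicted)
print(me, mae)  # -> 0.125 1.625
```

Tracking these per forecast horizon, as the study describes, simply means grouping the (actual, predicted) pairs by how many days ahead each prediction was made and computing the metrics within each group.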

8.
Mult Scler Relat Disord ; 80: 105092, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37931489

ABSTRACT

BACKGROUND: Disease modifying therapies (DMTs) offer opportunities to improve the course of multiple sclerosis (MS), but decisions about treatment are difficult. People with multiple sclerosis (pwMS) want more involvement in decisions about DMTs, but new approaches are needed to support shared decision-making (SDM) because of the number of treatment options and the range of outcomes affected by treatment. We designed a patient-centered tool, MS-SUPPORT, to facilitate SDM for pwMS. We sought to evaluate the feasibility and impact of MS-SUPPORT on decisions about disease modifying treatments (DMTs), SDM processes, and quality-of-life. METHODS: This multisite randomized controlled trial compared the SDM intervention (MS-SUPPORT) to control (usual care) over a 12-month period. English-speaking adults with relapsing MS were eligible if they had an upcoming MS appointment and an email address. To evaluate clinician perspectives, participants' MS clinicians were invited to participate. Patients were referred between November 11, 2019 and October 23, 2020 by their MS clinician or a patient advocacy organization (the Multiple Sclerosis Association of America). MS-SUPPORT is an online, interactive, evidence-based decision aid that was co-created with pwMS. It clarifies patient treatment goals and values and provides tailored information about MS, DMTs, and adherence. Viewed by patients before their clinic appointment, MS-SUPPORT generates a personalized summary of the patient's treatment goals and preferences, adherence, DMT use, and clinical situation to share with their MS clinician. Outcomes (DMT utilization, adherence, quality-of-life, and SDM) were assessed at enrollment, post-MS-SUPPORT, post-appointment, and quarterly for 1 year. RESULTS: Participants included 501 adults with MS from across the USA (84.6% female, 83% white) and 34 of their MS clinicians (47% neurologists, 41% Nurse Practitioners, 12% Physician Assistants). 
Among the 203 patients who completed MS-SUPPORT, most (88.2%) reported they would recommend it to others and that it helped them talk to their doctor (85.2%), understand their options (82.3%) and the importance of taking DMTs as prescribed (82.3%). Among non-users of DMTs at baseline, the probability ratio of current DMT use consistently trended higher over one-year follow-up in the MS-SUPPORT group (1.30 [0.86-1.96]), as did the cumulative probability of starting a DMT within 6-months, with shorter time-to-start (46 vs 90 days, p=0.24). Among the 222 responses from 34 participating clinicians, more clinicians in the MS-SUPPORT group (vs control) trended towards recommending their patient start a DMT (9 of 108 (8%) vs 5 of 109 (5%), respectively, p=0.26). Adherence (no missed doses) to daily-dosed DMTs was higher in the MS-SUPPORT group (81.25% vs 56.41%, p=.026). Fewer patients forgot their doses (p=.046). The MS-SUPPORT group (vs control) reported 1.7 fewer days/month of poor mental health (p=0.02). CONCLUSIONS: MS-SUPPORT was strongly endorsed by patients and is feasible to use in clinical settings. MS-SUPPORT increased the short-term probability of taking and adhering to a DMT, and improved long-term mental health. Study limitations include selection bias, response bias, social desirability bias, and recall bias. Exploring approaches to reinforcement and monitoring its implementation in real-world settings should provide further insights into the value and utility of this new SDM tool.


Subject(s)
Multiple Sclerosis , Physicians , Adult , Humans , Female , Male , Multiple Sclerosis/drug therapy , Decision Making, Shared , Quality of Life
9.
BMJ Health Care Inform ; 30(1)2023 Sep.
Article in English | MEDLINE | ID: mdl-37709302

ABSTRACT

OBJECTIVE: To identify the risk of acute respiratory distress syndrome (ARDS) and in-hospital mortality using a long short-term memory (LSTM) framework in a mechanically ventilated (MV) non-COVID-19 cohort and a COVID-19 cohort. METHODS: We included MV ICU patients between 2017 and 2018 and reviewed patient records for ARDS and death. Using active learning, we enriched this cohort with MV patients from 2016 to 2019 (MV non-COVID-19, n=3905). We collected a second validation cohort of hospitalised patients with COVID-19 in 2020 (COVID-19+, n=5672). We trained an LSTM model using 132 structured features on the MV non-COVID-19 training cohort and validated it on the MV non-COVID-19 validation and COVID-19 cohorts. RESULTS: Applying the LSTM model (model score 0.9) to the MV non-COVID-19 validation cohort yielded a sensitivity of 86% and specificity of 57%. The model identified the risk of ARDS 10 hours before ARDS onset and 9.4 days before death. The sensitivity (70%) and specificity (84%) of the model on the COVID-19 cohort were lower than in the MV non-COVID-19 cohort. For the COVID-19+ cohort and MV COVID-19+ patients, the model identified the risk of in-hospital mortality 2.4 days and 1.54 days before death, respectively. DISCUSSION: Our LSTM algorithm identified the risk of ARDS or death accurately and in a timely manner in MV non-COVID-19 and COVID-19+ patients. By flagging the risk of ARDS or death, it can support the implementation of evidence-based ARDS management and facilitate goals-of-care discussions in high-risk patients. CONCLUSION: Using the LSTM algorithm in hospitalised patients identifies the risk of ARDS or death.
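The sensitivity and specificity figures above come straight from confusion-matrix counts. A minimal sketch, with illustrative counts chosen to match the reported 86%/57% (the abstract does not give the actual denominators):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts, not study data: 86 of 100 true ARDS/death cases
# flagged, 57 of 100 negative cases correctly not flagged.
sens, spec = sens_spec(tp=86, fn=14, tn=57, fp=43)
print(sens, spec)  # -> 0.86 0.57
```

Raising the model-score cut-off would trade sensitivity for specificity, which is why the cut-off (0.9 here) must be reported alongside both figures.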


Subject(s)
COVID-19 , Respiratory Distress Syndrome , Humans , Hospital Mortality , Memory, Short-Term , Algorithms
10.
Trials ; 24(1): 577, 2023 Sep 09.
Article in English | MEDLINE | ID: mdl-37684688

ABSTRACT

INTRODUCTION: Multidisciplinary team meetings (MDMs), also known as tumor conferences, are a cornerstone of cancer treatments. However, barriers such as incomplete patient information or logistical challenges can postpone tumor board decisions and delay patient treatment, potentially affecting clinical outcomes. Therapeutic Assistance and Decision algorithms for hepatobiliary tumor Boards (ADBoard) aims to reduce this delay by providing automated data extraction and high-quality, evidence-based treatment recommendations. METHODS AND ANALYSIS: With the help of natural language processing, relevant patient information will be automatically extracted from electronic medical records and used to complete a classic tumor conference protocol. A machine learning model is trained on retrospective MDM data and clinical guidelines to recommend treatment options for patients in our inclusion criteria. Study participants will be randomized to either MDM with ADBoard (Arm A: MDM-AB) or conventional MDM (Arm B: MDM-C). The concordance of recommendations of both groups will be compared using interrater reliability. We hypothesize that the therapy recommendations of ADBoard would be in high agreement with those of the MDM-C, with a Cohen's kappa value of ≥ 0.75. Furthermore, our secondary hypotheses state that the completeness of patient information presented in MDM is higher when using ADBoard than without, and the explainability of tumor board protocols in MDM-AB is higher compared to MDM-C as measured by the System Causability Scale. DISCUSSION: The implementation of ADBoard aims to improve the quality and completeness of the data required for MDM decision-making and to propose therapeutic recommendations that consider current medical evidence and guidelines in a transparent and reproducible manner. ETHICS AND DISSEMINATION: The project was approved by the Ethics Committee of the Charité - Universitätsmedizin Berlin. 
REGISTRATION DETAILS: The study was registered on ClinicalTrials.gov (trial identifying number: NCT05681949; https://clinicaltrials.gov/study/NCT05681949 ) on 12 January 2023.
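The primary hypothesis above is stated as a Cohen's kappa threshold (≥ 0.75), i.e. agreement between the two boards' recommendations beyond what chance alone would produce. A sketch of the computation with hypothetical board recommendations (not trial data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed - expected agreement) / (1 - expected),
    where expected agreement comes from each rater's marginal frequencies."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical recommendations from the AI-assisted vs conventional board:
mdm_ab = ["resect", "resect", "ablate", "systemic", "resect", "ablate"]
mdm_c = ["resect", "resect", "ablate", "systemic", "ablate", "ablate"]
print(round(cohens_kappa(mdm_ab, mdm_c), 3))  # -> 0.739
```

On this toy sample kappa is 17/23 ≈ 0.74, just under the trial's ≥ 0.75 target, even though raw agreement is 5/6: kappa discounts the agreement the marginal frequencies would produce by chance.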


Subject(s)
Liver Neoplasms , Humans , Reproducibility of Results , Retrospective Studies , Liver Neoplasms/diagnosis , Liver Neoplasms/therapy , Algorithms , Patient Care Team , Randomized Controlled Trials as Topic
11.
BMJ Health Care Inform ; 30(1)2023 Sep.
Article in English | MEDLINE | ID: mdl-37730251

ABSTRACT

OBJECTIVE: The study aimed to measure the validity of International Classification of Diseases, 10th Edition (ICD-10) code F44.5 for functional seizure disorder (FSD) in the Veterans Affairs Connecticut Healthcare System electronic health record (VA EHR). METHODS: The study used an informatics search tool, a natural language processing algorithm and a chart review to validate FSD coding. RESULTS: The positive predictive value (PPV) for code F44.5 was calculated to be 44%. DISCUSSION: ICD-10 introduced a specific code for FSD to improve coding validity. However, results revealed a meager (44%) PPV for code F44.5. Evaluation of the low diagnostic precision of FSD identified inconsistencies in the ICD-10 and VA EHR systems. CONCLUSION: Information system improvements may increase the precision of diagnostic coding by clinicians. Specifically, the EHR problem list should include commonly used diagnostic codes and an appropriately curated ICD-10 term list for 'seizure disorder,' and a single ICD code for FSD should be classified under neurology and psychiatry.
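The reported PPV follows directly from chart-review counts: confirmed cases divided by all code-positive charts. A one-line sketch with illustrative counts (the abstract does not give the denominator):

```python
def ppv(true_positives, false_positives):
    """Positive predictive value: fraction of code-positive charts
    confirmed as true cases on manual review."""
    return true_positives / (true_positives + false_positives)

# Illustrative, hypothetical counts: of 100 charts coded F44.5,
# 44 confirmed as functional seizure disorder on review.
print(ppv(44, 56))  # -> 0.44
```

Note that PPV depends on the prevalence of the condition in the coded population, so a code's PPV measured in one health system does not automatically transfer to another.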


Subject(s)
Epilepsy , International Classification of Diseases , Humans , Algorithms , Electronic Health Records , Epilepsy/diagnosis , Natural Language Processing
12.
BMJ Health Care Inform ; 30(1)2023 Sep.
Article in English | MEDLINE | ID: mdl-37751942

ABSTRACT

BACKGROUND: Treat-to-target (T2T) is a therapeutic strategy currently being studied for its application in systemic lupus erythematosus (SLE). Patients and rheumatologists have little support in making the best treatment decision in the context of a T2T strategy; thus, the use of information technology to systematically process data and support information and knowledge may improve routine decision-making practices, helping to deliver value-based care. OBJECTIVE: To design and develop an online clinical decision support system (CDSS) tool, "SLE-T2T", and test its usability for the implementation of a T2T strategy in the management of patients with SLE. METHODS: A prototype CDSS was conceived as a web-based application tasked with generating appropriate treatment advice based on entered patient data. Once developed, a System Usability Scale (SUS) questionnaire was administered to test whether the eHealth tool was user-friendly, comprehensible, easy to deliver and workflow-oriented. Data from the participants' comments were synthesised, and the elements in need of improvement were identified. RESULTS: The beta version of the web-based system was developed based on the interim usability and acceptance evaluation. Seven participants completed the SUS survey. The median SUS score of SLE-T2T was 79 (scale 0 to 100), categorising the application as 'good' and indicating the need for minor improvements to the design. CONCLUSIONS: SLE-T2T is the first eHealth tool designed for the management of SLE patients in a T2T context. The SUS score and unstructured feedback showed high acceptance of this digital instrument for its future use in a clinical trial.
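The SUS score cited above follows the standard scoring rule for the 10-item questionnaire: odd items contribute (rating − 1), even items contribute (5 − rating), and the sum is multiplied by 2.5 to yield a 0-100 score. A sketch with one hypothetical respondent:

```python
def sus_score(responses):
    """Standard SUS scoring for 10 items rated 1-5: odd items score
    (rating - 1), even items (5 - rating); the sum is scaled by 2.5."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical respondent's ratings (odd items positively worded,
# even items negatively worded, per the standard questionnaire):
print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))  # -> 90.0
```

Because the scale alternates positively and negatively worded items, the sign flip on even items is essential; summing raw ratings would be wrong.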


Subject(s)
Decision Support Systems, Clinical , Lupus Erythematosus, Systemic , Mobile Applications , Telemedicine , Humans , Lupus Erythematosus, Systemic/drug therapy , Internet
14.
BMJ Health Care Inform ; 30(1)2023 Aug.
Article in English | MEDLINE | ID: mdl-37558245

ABSTRACT

BACKGROUND: Predictive models have been used in clinical care for decades. They can determine the risk of a patient developing a particular condition or complication and inform the shared decision-making process. Developing artificial intelligence (AI) predictive models for use in clinical practice is challenging; even if they have good predictive performance, this does not guarantee that they will be used or enhance decision-making. We describe nine stages of developing and evaluating a predictive AI model, recognising the challenges that clinicians might face at each stage and providing practical tips to help manage them. FINDINGS: The nine stages included clarifying the clinical question or outcome(s) of interest (output), identifying appropriate predictors (features selection), choosing relevant datasets, developing the AI predictive model, validating and testing the developed model, presenting and interpreting the model prediction(s), licensing and maintaining the AI predictive model and evaluating the impact of the AI predictive model. The introduction of an AI prediction model into clinical practice usually consists of multiple interacting components, including the accuracy of the model predictions, physician and patient understanding and use of these probabilities, expected effectiveness of subsequent actions or interventions and adherence to these. Much of the difference in whether benefits are realised relates to whether the predictions are given to clinicians in a timely way that enables them to take an appropriate action. CONCLUSION: The downstream effects on processes and outcomes of AI prediction models vary widely, and it is essential to evaluate the use in clinical practice using an appropriate study design.


Subject(s)
Artificial Intelligence , Clinical Decision-Making , Humans , Research Design
15.
BMC Med Ethics ; 24(1): 48, 2023 07 06.
Article in English | MEDLINE | ID: mdl-37415172

ABSTRACT

BACKGROUND: Healthcare providers have to make ethically complex clinical decisions which may be a source of stress. Researchers have recently introduced Artificial Intelligence (AI)-based applications to assist in clinical ethical decision-making. However, the use of such tools is controversial. This review aims to provide a comprehensive overview of the reasons given in the academic literature for and against their use. METHODS: PubMed, Web of Science, Philpapers.org and Google Scholar were searched for all relevant publications. The resulting set of publications was title and abstract screened according to defined inclusion and exclusion criteria, resulting in 44 papers whose full texts were analysed using the Kuckartz method of qualitative text analysis. RESULTS: Artificial Intelligence might increase patient autonomy by improving the accuracy of predictions and allowing patients to receive their preferred treatment. It is thought to increase beneficence by providing reliable information, thereby, supporting surrogate decision-making. Some authors fear that reducing ethical decision-making to statistical correlations may limit autonomy. Others argue that AI may not be able to replicate the process of ethical deliberation because it lacks human characteristics. Concerns have been raised about issues of justice, as AI may replicate existing biases in the decision-making process. CONCLUSIONS: The prospective benefits of using AI in clinical ethical decision-making are manifold, but its development and use should be undertaken carefully to avoid ethical pitfalls. Several issues that are central to the discussion of Clinical Decision Support Systems, such as justice, explicability or human-machine interaction, have been neglected in the debate on AI for clinical ethics so far. TRIAL REGISTRATION: This review is registered at Open Science Framework ( https://osf.io/wvcs9 ).


Subject(s)
Artificial Intelligence , Clinical Decision-Making , Humans , Beneficence
16.
Br J Haematol ; 202(5): 1011-1017, 2023 09.
Article in English | MEDLINE | ID: mdl-37271143

ABSTRACT

Appropriate evaluation of heparin-induced thrombocytopenia (HIT) is imperative because of the potentially life-threatening complications. However, overtesting and overdiagnosis of HIT are common. Our goal was to evaluate the impact of clinical decision support (CDS) based on the HIT computerized-risk (HIT-CR) score, designed to reduce unnecessary diagnostic testing. This retrospective observational study evaluated CDS that presented a platelet count versus time graph and 4Ts score calculator to clinicians who initiated a HIT immunoassay order in patients with predicted low risk (HIT-CR score 0-2). The primary outcome was the proportion of immunoassay orders initiated but cancelled after firing of the CDS advisory. Chart reviews were conducted to assess anticoagulation usage, 4Ts scores and the proportion of patients who had HIT. In a 20-week period, 319 CDS advisories were presented to users who initiated potentially unnecessary HIT diagnostic testing. The diagnostic test order was discontinued in 80 (25%) patients. Heparin products were continued in 139 (44%) patients, and alternative anticoagulation was not given to 264 (83%). The negative predictive value of the advisory was 98.8% (95% CI: 97.2-99.5). HIT-CR score-based CDS can reduce unnecessary diagnostic testing for HIT in patients with a low pretest probability of HIT.


Subject(s)
Decision Support Systems, Clinical , Thrombocytopenia , Humans , Thrombocytopenia/chemically induced , Thrombocytopenia/diagnosis , Heparin/adverse effects , Platelet Count , Predictive Value of Tests , Anticoagulants/adverse effects
17.
BMJ Health Care Inform ; 30(1)2023 May.
Article in English | MEDLINE | ID: mdl-37130626

ABSTRACT

OBJECTIVE: Clinical decision support systems (CDSSs) can reduce medical errors by increasing drug prescription appropriateness. Deepening knowledge of existing CDSSs could increase their use by healthcare professionals in different settings (ie, hospitals, pharmacies, health research centres) of clinical practice. This review aims to identify the characteristics common to effective studies conducted with CDSSs. MATERIALS AND METHODS: The article sources were Scopus, PubMed, Ovid MEDLINE and Web of Science, queried between January 2017 and January 2022. Inclusion criteria were prospective and retrospective studies that reported original research on CDSSs for clinical practice support; studies should describe a measurable comparison of the intervention or observation conducted with and without the CDSS; article language Italian or English. Reviews and studies with CDSSs used exclusively by patients were excluded. A Microsoft Excel spreadsheet was prepared to extract and summarise data from the included articles. RESULTS: The search resulted in the identification of 2424 articles. After title and abstract screening, 136 studies remained, 42 of which were included for final evaluation. Most of the studies included rule-based CDSSs that are integrated into existing databases with the main purpose of managing disease-related problems. The majority of the selected studies (25 studies; 59.5%) were successful in supporting clinical practice, with most being pre-post intervention studies and involving the presence of a pharmacist. DISCUSSION AND CONCLUSION: A number of characteristics have been identified that may help the design of studies feasible to demonstrate the effectiveness of CDSSs. Further studies are needed to encourage CDSS use.


Subject(s)
Decision Support Systems, Clinical, Humans, Prospective Studies, Retrospective Studies, Drug Prescriptions
18.
Int J Med Inform ; 175: 105091, 2023 07.
Article in English | MEDLINE | ID: mdl-37182411

ABSTRACT

OBJECTIVE: Two tools are currently available in the literature to evaluate the usability of medication alert systems: the instrument for evaluating human factors principles in medication-related decision support alerts (I-MeDeSA) and the tool for evaluating medication alerting systems (TEMAS). This study aimed to compare their convergent validity, perceived usability, usefulness, strengths and weaknesses, as well as users' preferences. METHOD: To evaluate convergent validity, two experts mapped TEMAS items against I-MeDeSA items with respect to the usability dimensions they target. To assess the perceived usability, usefulness, strengths and weaknesses of both tools, staff with expertise in their medication alerting system were asked to use French versions of TEMAS and I-MeDeSA. After using each tool, participants completed the System Usability Scale (SUS) and answered questions about the understandability and usefulness of that tool. Finally, participants were asked to name their preferred tool. Numeric scores were compared statistically; free-text responses were analysed using an inductive approach. RESULTS: Forty-five participants from 10 hospitals took part in the study. In terms of convergent validity, I-MeDeSA focuses more on the usability of the graphical user interface, while TEMAS covers a wider range of usability principles. Both tools showed a fair level of perceived usability (SUS scores: I-MeDeSA 61.85, TEMAS 62.87), but the results indicate that revisions to both tools are needed to improve their usability. Participants found TEMAS more useful than I-MeDeSA (t = -3.63, p = .005) and had a clear preference for TEMAS for identifying problems in formative evaluation (39 of 45; 0.867, p < .001) and for comparing the usability of alert systems during the procurement process (36 of 45; 0.8, p < .001). CONCLUSIONS: TEMAS is perceived as more useful and is preferred by participants. I-MeDeSA seems more relevant for quick evaluations focused on the graphical user interface, whereas TEMAS seems better suited to in-depth usability evaluations of alert systems. Although both tools are perceived to be equally usable, they suffer from wording, instructional and organisational problems that hinder their use. The results of this study will be used to improve the design of I-MeDeSA and TEMAS.
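The preference results reported above (39 of 45, proportion 0.867; 36 of 45, proportion 0.8; both p < .001) are consistent with an exact two-sided binomial test against a 50% null hypothesis. As a minimal stdlib-only sketch (not the authors' code; the test choice is an assumption inferred from the reported figures):

```python
from math import comb

def binom_two_sided_p(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test: sum the probabilities of all
    outcomes no more likely than the observed count k."""
    probs = [comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(n + 1)]
    cutoff = probs[k] * (1 + 1e-9)  # small tolerance for float comparison
    return sum(q for q in probs if q <= cutoff)

# 39 of 45 participants preferred TEMAS for formative evaluation
print(round(39 / 45, 3))                   # 0.867
print(binom_two_sided_p(39, 45) < 0.001)   # True

# 36 of 45 preferred TEMAS for procurement comparisons
print(round(36 / 45, 1))                   # 0.8
print(binom_two_sided_p(36, 45) < 0.001)   # True
```

Both splits are far enough from an even 22.5/22.5 division that the exact p-values fall well below .001, matching the reported significance.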


Subject(s)
Decision Support Systems, Clinical, Medical Order Entry Systems, Humans, User-Computer Interface
19.
BMJ Health Care Inform ; 30(1)2023 May.
Article in English | MEDLINE | ID: mdl-37169397

ABSTRACT

Sepsis is a worldwide public health problem. Rapid identification is associated with improved patient outcomes, if followed by timely and appropriate treatment. OBJECTIVES: To describe digital sepsis alerts (DSAs) in use in English National Health Service (NHS) acute hospitals. METHODS: A Freedom of Information request surveyed acute NHS Trusts on their adoption of electronic patient records (EPRs) and DSAs. RESULTS: Of the 99 Trusts that responded, 84 had an EPR. Over 20 different EPR system providers were identified as operational in England. The most common provider was Cerner (21%); System C, Dedalus and Allscripts Sunrise were also relatively common (13%, 10% and 7%, respectively). 70% of NHS Trusts with an EPR reported having a DSA; most of these use the National Early Warning Score (NEWS2). There was evidence that the EPR provider was related to the DSA algorithm. We found no evidence that Trusts were using EPRs to introduce data-driven algorithms or DSAs able to include, for example, pre-existing conditions that may be known to increase risk. Not all Trusts were willing or able to provide details of their EPR or the underlying algorithm. DISCUSSION: The majority of NHS Trusts use an EPR of some kind; many use a NEWS2-based DSA in keeping with national guidelines. CONCLUSION: Many English NHS Trusts use DSAs; even those using similar triggers vary, and many simply recreate paper systems. Despite the proliferation of machine learning algorithms being developed to support early detection of sepsis, there is little evidence that these are being used to improve personalised sepsis detection.


Subject(s)
Sepsis, State Medicine, Humans, Prevalence, England, Hospitals, Sepsis/diagnosis, Sepsis/epidemiology
20.
BMJ Health Care Inform ; 30(1)2023 May.
Article in English | MEDLINE | ID: mdl-37217249

ABSTRACT

OBJECTIVES: Artificial intelligence (AI) is increasingly being tested and integrated into breast cancer screening. Still, there are unresolved issues regarding its possible ethical, social and legal impacts, and the perspectives of different actors are lacking. This study investigates the views of breast radiologists on AI-supported mammography screening, focusing on attitudes, perceived benefits and risks, accountability of AI use, and potential impact on the profession. METHODS: We conducted an online survey of Swedish breast radiologists. As an early adopter of both breast cancer screening and digital technologies, Sweden is a particularly interesting case to study. The survey covered several themes, including attitudes and responsibilities pertaining to AI, and AI's impact on the profession. Responses were analysed using descriptive statistics and correlation analyses; free texts and comments were analysed using an inductive approach. RESULTS: Overall, respondents (47/105, response rate 44.8%) were highly experienced in breast imaging and had mixed knowledge of AI. A majority (n=38, 80.8%) were positive or somewhat positive towards integrating AI into mammography screening. Still, many considered there to be potential risks to a high or somewhat high degree (n=16, 34.1%) or were uncertain (n=16, 34.0%). Several important uncertainties were identified, such as defining the liable actor(s) when AI is integrated into medical decision-making. CONCLUSIONS: Swedish breast radiologists are largely positive towards integrating AI into mammography screening, but significant uncertainties need to be addressed, especially regarding risks and responsibilities. The results stress the importance of understanding actor-specific and context-specific challenges to the responsible implementation of AI in healthcare.


Subject(s)
Artificial Intelligence, Breast Neoplasms, Humans, Female, Sweden, Mammography/methods, Breast Neoplasms/diagnostic imaging, Radiologists