1.
BMC Med Inform Decis Mak ; 23(1): 207, 2023 Oct 9.
Article in English | MEDLINE | ID: mdl-37814311

ABSTRACT

BACKGROUND: There are many Machine Learning (ML) models that predict acute kidney injury (AKI) in hospitalised patients. While a primary goal of these models is to support clinical decision-making, the use of inconsistent methods for estimating baseline serum creatinine (sCr) may obscure how effective these models would be in clinical practice. Until now, the performance of such models with different baselines has not been compared on a single dataset. Additionally, AKI prediction models are known to have a high rate of false positive (FP) events regardless of the baseline method, which warrants further exploration of FP events to provide insight into their potential underlying causes. OBJECTIVE: The first aim of this study was to assess the variation in performance of ML models using three methods of baseline sCr estimation on a retrospective dataset. The second aim was to conduct an error analysis to gain insight into the factors contributing to FP events. MATERIALS AND METHODS: Intensive Care Unit (ICU) patients from the Medical Information Mart for Intensive Care (MIMIC)-IV dataset were used, with the KDIGO (Kidney Disease: Improving Global Outcomes) definition applied to identify AKI episodes. Three methods of estimating baseline sCr were defined: (1) the minimum sCr, (2) the Modification of Diet in Renal Disease (MDRD) equation together with the minimum sCr, and (3) the MDRD equation together with the mean of preadmission sCr. For the first aim, a suite of ML models was developed for each baseline and their performance was assessed. An analysis of variance was performed to test for significant differences between eXtreme Gradient Boosting (XGB) models across the baselines. To address the second aim, Explainable AI (XAI) methods were used to analyse the errors of the XGB model trained with Baseline 3. RESULTS: For the first aim, we observed differences in the discriminative metrics and calibration errors of the ML models when different baseline methods were adopted. Using Baseline 1 resulted in a 14% reduction in the F1 score relative to both Baseline 2 and Baseline 3, while no significant difference was observed between Baseline 2 and Baseline 3. For the second aim, analysis of the FP cohort with XAI methods led to relabelling the data using the mean sCr over the 180 days before ICU admission as the preferred baseline method. The XGB model trained on the relabelled data achieved an AUC of 0.85, recall of 0.63, precision of 0.54 and F1 score of 0.58. The cohort comprised 31,586 admissions, of which 5,473 (17.32%) had AKI. CONCLUSION: In the absence of a widely accepted method for determining baseline sCr, AKI prediction studies need to consider the impact of different baseline methods on the effectiveness of ML models and the implications for real-world implementations. XAI methods can be effective in providing insight into why prediction errors occur, which can potentially improve the success rate of ML implementations in routine care.
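The three baseline definitions hinge on the MDRD back-calculation of creatinine. As a rough illustration of how such baselines can be computed and fed into the KDIGO creatinine criteria, the sketch below assumes an eGFR of 75 mL/min/1.73 m² for the back-calculation and treats the MDRD estimate as a fallback when no pre-admission measurement exists; the function names, the fallback rule and the assumed eGFR are illustrative assumptions, not details reported in the abstract.

```python
def mdrd_baseline_scr(age_years, female, black, assumed_egfr=75.0):
    """Back-calculate a baseline creatinine (mg/dL) from the MDRD study equation,
    assuming a 'normal' eGFR of 75 mL/min/1.73 m^2 (an illustrative convention)."""
    denom = 175.0 * age_years ** -0.203 * (0.742 if female else 1.0) * (1.212 if black else 1.0)
    return (assumed_egfr / denom) ** (-1.0 / 1.154)

def baseline_scr(preadmission_scr, age_years, female, black, method):
    """Illustrative versions of the three baseline definitions compared in the study.
    preadmission_scr: list of pre-admission sCr values (mg/dL); may be empty for methods 2 and 3."""
    mdrd = mdrd_baseline_scr(age_years, female, black)
    if method == 1:   # Baseline 1: minimum observed sCr
        return min(preadmission_scr)
    if method == 2:   # Baseline 2: minimum pre-admission sCr, MDRD estimate as fallback
        return min(preadmission_scr) if preadmission_scr else mdrd
    if method == 3:   # Baseline 3: mean pre-admission sCr, MDRD estimate as fallback
        return sum(preadmission_scr) / len(preadmission_scr) if preadmission_scr else mdrd
    raise ValueError("method must be 1, 2 or 3")

def kdigo_aki_by_creatinine(current_scr, baseline, rise_within_48h):
    """KDIGO creatinine criteria: sCr rise >= 0.3 mg/dL within 48 h,
    or sCr >= 1.5 x baseline (urine-output criteria omitted here)."""
    return rise_within_48h >= 0.3 or current_scr >= 1.5 * baseline

# Example: a 60-year-old woman with pre-admission sCr values of 0.9 and 1.1 mg/dL
print(round(baseline_scr([0.9, 1.1], 60, female=True, black=False, method=3), 2))  # 1.0
```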


Subjects
Acute Kidney Injury, Statistical Models, Humans, Creatinine, Retrospective Studies, Prognosis
2.
Ochsner J ; 23(3): 222-231, 2023.
Article in English | MEDLINE | ID: mdl-37711478

ABSTRACT

Background: Sepsis is the leading cause of mortality among hospitalized patients in our health care system and has been the target of major international initiatives such as the Surviving Sepsis Campaign championed by the Society of Critical Care Medicine and Get Ahead of Sepsis led by the Centers for Disease Control and Prevention. Methods: Our institution has strived to improve outcomes for patients by implementing a novel suite of integrated clinical decision support tools driven by a predictive learning algorithm in the electronic health record. The tools focus on sepsis multidisciplinary care using industry-standard heuristics of interface design to enhance usability and interaction. Results: Our novel clinical decision support tools demonstrated a higher level of interaction with a higher alert-to-action ratio compared to the average of all best practice alerts used at Ochsner Health (16.46% vs 8.4% to 12.1%). Conclusion: By using intuitive design strategies that encouraged users to complete best practice alerts and team-wide visualization of clinical decisions via a checklist, our clinical decision support tools for the detection and management of sepsis represent an improvement over legacy tools, and the results of this pilot may have implications beyond sepsis alerting.

3.
Int J Med Inform ; 162: 104758, 2022 Apr 02.
Article in English | MEDLINE | ID: mdl-35398812

ABSTRACT

BACKGROUND: Machine learning (ML) is a subset of Artificial Intelligence (AI) used to predict and potentially prevent adverse patient outcomes. There is increasing interest in applying these models in digital hospitals to improve clinical decision-making and chronic disease management, particularly for patients with diabetes. The potential of ML models using electronic medical record (EMR) data to improve the clinical care of hospitalised patients with diabetes is currently unknown. OBJECTIVE: The aim was to systematically identify and critically review the published literature on the development and validation of ML models using EMR data for improving the care of hospitalised adult patients with diabetes. METHODS: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed. Four databases (Embase, PubMed, IEEE and Web of Science) were searched for studies published between January 2010 and January 2022, and the reference lists of eligible articles were searched manually. Articles that examined adults and both developed and validated ML models using EMR data were included; studies conducted in primary care and community care settings were excluded. Studies were screened independently and data were extracted using Covidence® systematic review software. Data extraction and critical appraisal followed the Checklist for Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modelling Studies (CHARMS). Risk of bias was assessed using the Prediction model Risk Of Bias Assessment Tool (PROBAST), and quality of reporting was assessed by adherence to the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) guideline. The IJMEDI checklist was used to assess the quality of the ML models and the reproducibility of their outcomes, and the external validation methodology of the studies was appraised. RESULTS: Of the 1,317 studies screened, twelve met the inclusion criteria. Eight studies developed ML models to predict dysglycaemic episodes in hospitalised patients with diabetes, one developed an ML model to predict total insulin dosage, two predicted the risk of readmission, and one improved the prediction of hospital readmission for inpatients with diabetes. The included studies were heterogeneous with regard to ML model types, cohorts, input predictors, sample sizes, performance and validation metrics, and clinical outcomes. Two studies adhered to the TRIPOD guideline. The methodological reporting of all studies was evaluated as being at high risk of bias, and the quality of the ML models in all studies was assessed as poor. Robust external validation was not performed in any of the studies, and no models were implemented or evaluated in routine clinical care. CONCLUSIONS: This review identified a limited number of ML models developed to improve the inpatient management of diabetes, none of which were implemented in real hospital settings. Future research needs to strengthen the development, reporting and validation steps so that ML models can be integrated into routine clinical care.

4.
Mhealth ; 7: 15, 2021.
Article in English | MEDLINE | ID: mdl-33634198

ABSTRACT

BACKGROUND: It is imperative that coordinated and systematic action is undertaken, at all levels, to minimize the consequences of the growing global burden of non-communicable diseases (NCDs). An integrated multi-disciplinary primary care-based preventive program has the potential to reduce lifestyle-related risk factors contributing to NCDs. Accredited Social Health Activists (ASHAs), who are community health workers (CHWs), may be employed to screen populations for NCDs in rural India. To enable ASHAs to be supported when they are on their own in the community, we have developed a clinical decision support system (CDSS) "Arogya Sahyog" (a Hindi term meaning 'health assistant') to guide them through the process. Herein, we describe the protocol for testing this CDSS and the associated community-based management program for people with NCDs. METHODS: This mixed-method study involving both qualitative and quantitative approaches will be conducted in two phases to test: (I) feasibility of the CDSS itself, and (II) feasibility of utilizing the app to develop capacity within the ASHA workforce. First, we will use a semi-structured questionnaire to determine details about the acceptance of using the app, satisfaction with the CDSS, perceived barriers, ideas for improvement, and willingness to use the CDSS. We will also test the usability of this CDSS for the identification of people with hypertension, with or without co-morbidities, by ASHAs and their supervisors. The CDSS will be installed on a tablet and is designed to help ASHAs to screen, provide lifestyle advice, and refer critical patients to primary care physicians. Second, to develop capacity within the ASHA workforce, ASHAs will be taught about NCDs, so they can motivate people to adhere to healthy activities and self-manage their NCDs. We will also test whether this training program improves ASHAs' knowledge about NCDs. We will further evaluate ASHAs' capacity to provide health promotional interventions to patients with, or at risk of, NCDs using the tablet device. DISCUSSION: The study will enable us to test a CDSS and an educational training program. Specifically, we will test whether the program is user-friendly, easy-to-comprehend, easy-to-deliver, workflow-oriented, and comprehensive. We will determine whether mobilizing this ASHA workforce with the support of a CDSS could result in better management of hypertension and co-morbidities than usual care.

5.
São Paulo; s.n; 2020. 135 p
Thesis in Portuguese | LILACS, BDENF - Nursing | ID: biblio-1398665

ABSTRACT

Introduction: Most countries are implementing the Electronic Health Record as one of the most important initiatives of their health care policy, with the aim of providing safe, high-quality care. However, usability problems can affect effectiveness, efficiency and user satisfaction. Objective: To evaluate the usability of a Clinical Decision Support System used to document the Nursing Process. Methods: Quantitative study developed in three stages. The first stage used a quasi-experimental, before-and-after design comparing the quality of 81 nursing records produced with version I of the system (pre-intervention) with 58 records produced with version II (post-intervention). The instrument used was the Quality of Diagnoses, Interventions and Outcomes (Q-DIO), Brazilian version, which has four domains and a maximum score of 58 points. The interventions consisted of planning and pilot implementation of version II of the system, together with user training and follow-up. The second stage assessed efficiency by measuring the time nurses spent documenting the Nursing Assessment and its correlation with the assessment items. The third stage measured user satisfaction with version II of the system using the Software Usability Measurement Inventory (SUMI) questionnaire, whose scales have standardized scores referenced to a population mean of 50 points. SUMI scale scores were computed and analysed in the SUMISCO software; the remaining data were analysed in R using descriptive and inferential statistics. Data collection for the three stages took place between January 2019 and January 2020. Results: The mean Q-DIO score was 38.24 points for version I and 46.35 points for version II. There was evidence of a statistical difference between the means of the pre- and post-intervention groups (p < 0.001) and a decrease in undocumented items across the four domains evaluated. The mean time nurses spent documenting the Nursing Assessment was 12.5 minutes (standard deviation 11.2 minutes; median 8.9 minutes). Nurses' mean SUMI scores were: Efficiency 59.58, Affect 56.83, Helpfulness 55.92, Control 44.80, Learnability 55.75 and Global Usability 56.00. Nursing technicians' mean scores were: Efficiency 60.42, Affect 62.58, Helpfulness 60.84, Control 54.47, Learnability 65.79 and Global Usability 60.68. Conclusions: The quality of documentation in version II of the system was superior to that in version I. The efficacy of the system for documenting the Nursing Process and the effectiveness of the interventions were demonstrated. The efficiency assessment identified the time spent documenting the Nursing Assessment. Means on most SUMI scales were above those of the international reference database, except for the Control scale, which fell below the reference mean in the nurses' evaluation. Usability problems were identified that may negatively affect the user experience. This study contributes to clinical practice, documentation quality audit, the visibility of nursing as a science of care, and the development and implementation of functional, interactive and user-friendly clinical decision support systems.


Subjects
Biomedical Technology Assessment, Clinical Decision Support Systems, Nursing Process, Nursing Informatics, Electronic Health Records, Standardized Nursing Terminology
6.
Ochsner J ; 18(1): 30-35, 2018.
Article in English | MEDLINE | ID: mdl-29559866

ABSTRACT

BACKGROUND: Opioid prescription drug abuse is a major public health concern. Healthcare provider prescribing patterns, especially among non-pain management specialists, are a major factor. Practice guidelines recommend what to do for safe opioid prescribing but do not provide guidance on how to implement best practices. METHODS: We describe the implementation of electronic medical record clinical decision support (EMR CDS) for opioid management of chronic noncancer pain in an integrated delivery system. This prospective cohort study will examine relationships between primary care physician compliance with EMR CDS-guided care (vs usual care), delivery of guideline-concordant care, and changes in the morphine equivalent of prescribed opioids. We report baseline characteristics of patients receiving chronic opioid therapy and organizational prescribing trends. RESULTS: Between August and October 2016, we identified 2,759 primary care patients who received chronic opioid therapy. Of these patients, approximately 71% had chronic noncancer pain, and 62% had diagnoses of depression/anxiety. Six of 36 primary care clinics each had >100 patients receiving chronic opioid therapy. When the EMR CDS launched in October 2017, we identified 54,200 patients who had received opioid therapy for at least 14 days from various specialty and primary care providers during the prior 24 months. Of these patients, 36% had a benzodiazepine coprescription, and 13% had substance abuse diagnoses. CONCLUSION: Health system research that examines workflow-focused strategies to improve physician knowledge and skills for safely managing opioid therapy is needed. If EMR CDS proves to be effective in increasing adherence to practice guidelines, this EMR strategy can potentially be replicated and scaled up nationwide to improve population health management.
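The "morphine equivalent of prescribed opioids" tracked in this study is conventionally obtained by converting each drug's daily dose into morphine milligram equivalents (MME) with published conversion factors. The sketch below is a minimal illustration of that arithmetic; the factor table and the daily_mme helper are illustrative assumptions, not values or code taken from the study.

```python
# Illustrative CDC-style conversion factors (mg of oral morphine per mg of drug).
# The drug list and values are examples, not figures reported by the study.
MME_FACTORS = {
    "morphine": 1.0,
    "hydrocodone": 1.0,
    "oxycodone": 1.5,
    "hydromorphone": 4.0,
    "codeine": 0.15,
}

def daily_mme(prescriptions):
    """Sum daily morphine milligram equivalents over (drug, dose_mg, doses_per_day) tuples."""
    return sum(dose_mg * per_day * MME_FACTORS[drug]
               for drug, dose_mg, per_day in prescriptions)

# e.g. oxycodone 10 mg three times daily plus hydrocodone 5 mg twice daily
print(daily_mme([("oxycodone", 10, 3), ("hydrocodone", 5, 2)]))  # 55.0
```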

7.
J Am Med Inform Assoc ; 20(e2): e306-10, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23956016

ABSTRACT

BACKGROUND AND AIM: Celiac disease (CD) is a lifelong immune-mediated disease with excess mortality. Early diagnosis is important to minimize disease symptoms, complications, and consumption of healthcare resources, yet most patients remain undiagnosed. We developed two electronic medical record (EMR)-based algorithms to identify patients at high risk of CD and in need of CD screening. METHODS: (I) Using natural language processing (NLP), we searched EMRs for 16 free-text (and related) terms in 216 CD patients and 280 controls. (II) EMRs were also searched for ICD9 (International Classification of Diseases, Ninth Revision) codes suggesting an increased risk of CD in 202 patients with CD and 524 controls. For each approach, we determined the optimal number of hits required to classify a patient as a CD case, and sensitivity and specificity were calculated to assess the performance of the algorithms. RESULTS: Using two hits as the cut-off, the NLP algorithm identified 72.9% of all celiac patients (sensitivity) and ruled out CD in 89.9% of the controls (specificity). In a representative US population of individuals without a prior celiac diagnosis (assuming that 0.6% had undiagnosed CD), this NLP algorithm could identify a group of individuals in which 4.2% would have CD (positive predictive value). The ICD9 code search, using three hits as the cut-off, had a sensitivity of 17.1% and a specificity of 88.5% (positive predictive value 0.9%). DISCUSSION AND CONCLUSIONS: This study shows that computerized EMR-based algorithms can help identify patients at high risk of CD, and that NLP-based techniques demonstrate higher sensitivity and positive predictive values than algorithms based on ICD9 code searches.
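The reported positive predictive values follow from the stated sensitivity, specificity and the assumed 0.6% prevalence of undiagnosed CD via Bayes' rule; the short check below reproduces the 4.2% and 0.9% figures (the helper function is ours, not part of the paper).

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value of a screening test from Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# NLP algorithm, two-hit cut-off: ~4.2% of flagged individuals would have CD
print(round(ppv(0.729, 0.899, 0.006), 3))   # 0.042

# ICD9 code search, three-hit cut-off: ~0.9%
print(round(ppv(0.171, 0.885, 0.006), 3))   # 0.009
```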


Subjects
Algorithms, Celiac Disease/diagnosis, Electronic Health Records, Natural Language Processing, Adult, Age Distribution, Child, Female, Humans, International Classification of Diseases, Male, Phenotype, Risk, Sex Distribution