Results 1 - 20 of 82
1.
J Clin Transl Sci ; 8(1): e92, 2024.
Article in English | MEDLINE | ID: mdl-38836249

ABSTRACT

The Stanford Population Health Sciences Data Ecosystem was created to facilitate the use of large datasets containing health records from hundreds of millions of individuals. This necessitated technical solutions optimized for an academic medical center to manage and share high-risk data at scale. Through collaboration with internal and external partners, we have built a Data Ecosystem to host, curate, and share data with hundreds of users in a secure and compliant manner. This platform has enabled us to host unique data assets and serve the needs of researchers across Stanford University, and the technology and approach were designed to be replicable and portable to other institutions. We have found, however, that though these technological advances are necessary, they are not sufficient. Challenges around making data Findable, Accessible, Interoperable, and Reusable remain. Our experience has demonstrated that there is a high demand for access to real-world data, and that if the appropriate tools and structures are in place, translational research can be advanced considerably. Together, technological solutions, management structures, and education to support researcher, data science, and community collaborations offer more impactful processes over the long-term for supporting translational research with real-world data.

2.
J Clin Hypertens (Greenwich) ; 26(7): 797-805, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38850400

ABSTRACT

Hypertension disparities persist and remain high among racial and ethnic minority populations in the United States (US). Data-driven approaches based on electronic health records (EHRs) in primary care are seen as a strong opportunity to address this situation. This qualitative study evaluated the development, sustainability, and usability of an EHR-integrated hypertension disparities dashboard for health care professionals in primary care. Ten semi-structured interviews exploring the approach and sustainability, as well as eight usability interviews using the think-aloud protocol, were conducted with quality improvement managers, data analysts, program managers, evaluators, and primary care providers. Dashboard development steps included having clear goals, defining a target audience, compiling data, and building multidisciplinary teams. For sustainability, the dashboard can enhance understanding of the social determinants of health or inform quality improvement (QI) projects. In terms of dashboard usability, positive aspects included summary pages, patient detail pages, and the hover-over interface. Important design considerations were refining sorting functions, gender inclusivity, and increasing dashboard visibility. In sum, an EHR-driven dashboard can be a novel tool for addressing hypertension disparities in primary care. It offers a platform where clinicians can identify patients for culturally tailored interventions. Factors such as physician time constraints, data definitions, comprehensive patient demographic information, end users, and long-term sustainability should be considered before implementing a dashboard. Additional research is needed to identify practices for integrating a dashboard into the clinical workflow for hypertension.


Subject(s)
Electronic Health Records , Hypertension , Primary Health Care , Qualitative Research , Humans , Primary Health Care/organization & administration , Hypertension/therapy , Hypertension/ethnology , Male , Female , United States/epidemiology , Quality Improvement , Healthcare Disparities , Middle Aged , Adult , Interviews as Topic , Ethnicity
3.
Addiction ; 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38923168

ABSTRACT

BACKGROUND AND AIMS: Opioid use disorder (OUD) and opioid dependence lead to significant morbidity and mortality, yet treatment retention, crucial for the effectiveness of medications like buprenorphine-naloxone, remains unpredictable. Our objective was to determine the predictability of 6-month retention in buprenorphine-naloxone treatment using electronic health record (EHR) data from diverse clinical settings and to identify key predictors. DESIGN: This retrospective observational study developed and validated machine learning-based clinical risk prediction models using EHR data. SETTING AND CASES: Data were sourced from Stanford University's healthcare system and Holmusk's NeuroBlu database, reflecting a wide range of healthcare settings. The study analyzed 1800 Stanford and 7957 NeuroBlu treatment encounters from 2008 to 2023 and from 2003 to 2023, respectively. MEASUREMENTS: The outcome was continuous prescription of buprenorphine-naloxone for at least 6 months, with no gap of more than 30 days. The performance of machine learning prediction models was assessed by area under the receiver operating characteristic curve (ROC-AUC) analysis as well as precision, recall, and calibration. To further validate our approach's clinical applicability, we conducted two secondary analyses: a time-to-event analysis on a single site to estimate the duration of buprenorphine-naloxone treatment continuity, evaluated by the C-index, and a comparative evaluation against predictions made by three human clinical experts. FINDINGS: Attrition rates at 6 months were 58% (NeuroBlu) and 61% (Stanford). Prediction models trained and internally validated on NeuroBlu data achieved ROC-AUCs up to 75.8 (95% confidence interval [CI] = 73.6-78.0). Addiction medicine specialists' predictions showed a ROC-AUC of 67.8 (95% CI = 50.4-85.2). Time-to-event analysis on Stanford data indicated a median treatment retention time of 65 days, with a random survival forest model achieving an average C-index of 65.9.
The top predictor of treatment retention was a diagnosis of opioid dependence. CONCLUSIONS: US patients with opioid use disorder or opioid dependence treated with buprenorphine-naloxone prescriptions appear to have high (∼60%) treatment attrition by 6 months. Machine learning models trained on diverse electronic health record datasets appear to be able to predict treatment continuity with accuracy comparable to that of clinical experts.
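The retention outcome above (continuous prescription for at least 6 months with no gap over 30 days) can be computed from prescription fill dates. A minimal Python sketch follows; the fixed `days_supply` is an assumption for illustration, since a real EHR extract records supply per fill:

```python
from datetime import date, timedelta

def retained_6_months(fill_dates, days_supply=30, max_gap=30):
    """Label an episode as 6-month retention: continuous coverage from the
    first fill with no gap longer than `max_gap` days between the end of one
    fill's supply and the next fill."""
    if not fill_dates:
        return False
    fills = sorted(fill_dates)
    covered_until = fills[0] + timedelta(days=days_supply)
    for f in fills[1:]:
        if (f - covered_until).days > max_gap:
            return False  # a gap of more than 30 days breaks the episode
        covered_until = max(covered_until, f + timedelta(days=days_supply))
    # retained if coverage spans at least ~6 months (183 days) from first fill
    return (covered_until - fills[0]).days >= 183
```

Seven roughly monthly fills would count as retained; a single fill followed by a long gap would not.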

4.
J Am Med Inform Assoc ; 31(7): 1540-1550, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38804963

ABSTRACT

OBJECTIVE: Predicting mortality after acute myocardial infarction (AMI) is crucial for timely prescription and treatment of AMI patients, but there are no appropriate AI systems for clinicians. Our primary goal is to develop a reliable and interpretable AI system and to provide valuable insights regarding short- and long-term mortality. MATERIALS AND METHODS: We propose the RIAS framework, an end-to-end framework that is designed with reliability and interpretability at its core and automatically optimizes the given model. Using RIAS, clinicians get accurate and reliable predictions which can be used as likelihoods, together with global and local explanations and "what if" scenarios for achieving desired outcomes. RESULTS: We apply RIAS to AMI prognosis prediction data from the Korean Acute Myocardial Infarction Registry. We compared FT-Transformer with XGBoost and MLP and found that FT-Transformer outperforms XGBoost in sensitivity, with comparable AUROC and F1 score. Furthermore, RIAS reveals the significance of statin-based medications, beta-blockers, and age on mortality regardless of time period. Lastly, we showcase reliable and interpretable results of RIAS with local explanations and counterfactual examples for several realistic scenarios. DISCUSSION: RIAS addresses the "black-box" issue in AI by providing both global and local explanations based on SHAP values and reliable predictions, interpretable as actual likelihoods. The system's "what if" counterfactual explanations enable clinicians to simulate patient-specific scenarios under various conditions, enhancing its practical utility. CONCLUSION: The proposed framework provides reliable and interpretable predictions along with counterfactual examples.
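The "what if" idea, re-scoring a patient with one feature changed, can be sketched in a few lines of Python. The toy logistic score below is purely illustrative (RIAS's actual model and coefficients are not described here); only the general counterfactual-query pattern is shown:

```python
import math

def toy_mortality_risk(p):
    # Illustrative logistic score using age and beta-blocker use, two of the
    # predictors the study highlights; the coefficients are made up.
    z = 0.04 * p["age"] - 0.8 * p["beta_blocker"] - 3.0
    return 1 / (1 + math.exp(-z))

def what_if(model, patient, feature, new_value):
    """Counterfactual query: return (baseline risk, risk with one feature
    changed). `model` is any callable mapping a feature dict to a score."""
    altered = dict(patient, **{feature: new_value})
    return model(patient), model(altered)
```

For example, asking "what if this 70-year-old were on a beta-blocker?" compares the two scores directly.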


Subject(s)
Artificial Intelligence , Myocardial Infarction , Humans , Myocardial Infarction/mortality , Myocardial Infarction/diagnosis , Prognosis , Male , Registries , Female , Republic of Korea , Reproducibility of Results , Aged , Middle Aged
5.
medRxiv ; 2024 May 27.
Article in English | MEDLINE | ID: mdl-38585743

ABSTRACT

Background: Electronic health records (EHR) are increasingly used for studying multimorbidities. However, concerns about accuracy, completeness, and EHRs being primarily designed for billing and administrative purposes raise questions about the consistency and reproducibility of EHR-based multimorbidity research. Methods: Utilizing phecodes to represent the disease phenome, we analyzed pairwise comorbidity strengths using a dual logistic regression approach and constructed multimorbidity as an undirected weighted graph. We assessed the consistency of the multimorbidity networks within and between two major EHR systems at local (nodes and edges), meso (neighboring patterns), and global (network statistics) scales. We present case studies to identify disease clusters and uncover clinically interpretable disease relationships. We provide an interactive web tool and a knowledge base combining data from multiple sources for online multimorbidity analysis. Findings: Analyzing data from 500,000 patients across Vanderbilt University Medical Center and Mass General Brigham health systems, we observed a strong correlation in disease frequencies (Kendall's τ = 0.643) and comorbidity strengths (Pearson ρ = 0.79). Consistent network statistics across EHRs suggest similar structures of multimorbidity networks at various scales. Comorbidity strengths and similarities of multimorbidity connection patterns align with the disease genetic correlations. Graph-theoretic analyses revealed a consistent core-periphery structure, implying efficient network clustering through threshold graph construction. Using hydronephrosis as a case study, we demonstrated the network's ability to uncover clinically relevant disease relationships and provide novel insights. Interpretation: Our findings demonstrate the robustness of large-scale EHR data for studying phenome-wide multimorbidities. 
The alignment of multimorbidity patterns with genetic data suggests the potential utility for uncovering shared biology of diseases. The consistent core-periphery structure offers analytical insights to discover complex disease interactions. This work also sets the stage for advanced disease modeling, with implications for precision medicine. Funding: VUMC Biostatistics Development Award, the National Institutes of Health, and the VA CSRD.
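A weighted multimorbidity graph of the kind described can be sketched from per-patient phecode sets. The study fits a dual logistic regression per disease pair; as a stand-in, this illustration weights edges by a simple observed/expected co-occurrence ratio:

```python
from itertools import combinations
from collections import Counter

def comorbidity_edges(patient_codes, min_strength=1.0):
    """Build an undirected weighted graph from per-patient phecode sets.
    Edge weight is observed co-occurrences divided by the count expected
    under independence (a simplification of the paper's regression-based
    comorbidity strength)."""
    n = len(patient_codes)
    freq = Counter(c for codes in patient_codes for c in set(codes))
    pair_counts = Counter()
    for codes in patient_codes:
        for a, b in combinations(sorted(set(codes)), 2):
            pair_counts[(a, b)] += 1
    edges = {}
    for (a, b), obs in pair_counts.items():
        expected = freq[a] * freq[b] / n  # co-occurrences expected by chance
        strength = obs / expected
        if strength >= min_strength:
            edges[(a, b)] = strength
    return edges
```

Thresholding on `min_strength` is what produces the threshold-graph construction the abstract mentions.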

6.
Technol Health Care ; 32(4): 2711-2731, 2024.
Article in English | MEDLINE | ID: mdl-38607777

ABSTRACT

BACKGROUND: In recent times, there has been widespread deployment of Internet of Things (IoT) applications, particularly in the healthcare sector, where computations involving user-specific data are carried out on cloud servers. However, the network nodes in IoT healthcare are vulnerable to an increased level of security threats. OBJECTIVE: This paper introduces a secure Electronic Health Record (EHR) framework with a focus on IoT. METHODS: Initially, the IoT sensor nodes are designated as registered patients and undergo initialization. Subsequently, a trust evaluation is conducted, and the clustering of trusted nodes is achieved through the application of Tasmanian Devil Optimization (STD-TDO) utilizing the Student's T-Distribution. Utilizing the Transposition Cipher-Squared random number generator-based-Elliptic Curve Cryptography (TCS-ECC), the clustered nodes encrypt four types of sensed patient data. The resulting encrypted data undergoes hashing and is subsequently added to the blockchain. This configuration functions as a network, actively monitored to detect any external attacks. To accomplish this, a feature reputation score is calculated for the network's features. This score is then input into the Swish Beta activated-Recurrent Neural Network (SB-RNN) model to classify potential attacks. The latest transactions on the blockchain are scrutinized using the Neutrosophic Vague Set Fuzzy (NVS-Fu) algorithm to identify any double-spending attacks on non-compromised nodes. Finally, genuine nodes are granted permission to decrypt medical records. RESULTS: In the experimental analysis, the performance of the proposed methods was compared to existing models. The results demonstrated that the suggested approach significantly increased the security level to 98%, reduced attack detection time to 1300 ms, and maximized accuracy to 98%. Furthermore, a comprehensive comparative analysis affirmed the reliability of the proposed model across all metrics. 
CONCLUSION: The experimental evaluation demonstrates the efficiency of the proposed healthcare framework.
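The step in which encrypted data is hashed and appended to a blockchain can be illustrated with Python's standard library. This sketch treats the encrypted record as opaque bytes (the paper's TCS-ECC encryption scheme is out of scope) and shows only the hash-chaining and tamper-check idea:

```python
import hashlib
import json

def add_block(chain, encrypted_record):
    """Append a block whose hash covers the encrypted payload and the
    previous block's hash, so tampering anywhere breaks the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = hashlib.sha256(encrypted_record).hexdigest()
    block = {"prev": prev_hash, "payload": payload}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)
    return block

def verify(chain):
    """Recompute every hash; True only if no block was altered."""
    prev = "0" * 64
    for b in chain:
        body = {"prev": b["prev"], "payload": b["payload"]}
        if b["prev"] != prev or b["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = b["hash"]
    return True
```

Altering any stored payload after the fact makes `verify` fail, which is the property the framework's attack monitoring relies on.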


Subject(s)
Blockchain , Computer Security , Electronic Health Records , Internet of Things , Neural Networks, Computer , Humans , Electronic Health Records/organization & administration , Algorithms
7.
J Nephrol ; 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38564072

ABSTRACT

BACKGROUND: There is limited evidence to support definite clinical outcomes of direct oral anticoagulant (DOAC) therapy in chronic kidney disease (CKD). By identifying the important variables associated with clinical outcomes following DOAC administration in patients in different stages of CKD, this study aims to address this evidence gap. METHODS: An anonymised dataset comprising 97,413 patients receiving DOAC therapy in a tertiary health setting was systematically extracted from multidimensional electronic health records and prepared for analysis. Machine learning classifiers were applied to the prepared dataset to select the important features, which informed covariate selection in multivariate logistic regression analysis. RESULTS: For both CKD and non-CKD DOAC users, features such as length of stay, treatment days, and age were ranked highest for relevance to adverse outcomes like death and stroke. Patients with Stage 3a CKD had significantly higher odds of ischaemic stroke (OR 2.45, 95% CI: 2.10-2.86; p = 0.001) and lower odds of all-cause mortality (OR 0.87, 95% CI: 0.79-0.95; p = 0.001) on apixaban therapy. In patients with Stage 5 CKD receiving apixaban, the odds of death were significantly lowered (OR 0.28, 95% CI: 0.14-0.58; p = 0.001), while the effect on ischaemic stroke was not statistically significant. CONCLUSIONS: A positive effect of DOAC therapy was observed in advanced CKD. Key factors influencing clinical outcomes following DOAC administration in patients at different stages of CKD were identified. These are crucial for designing more advanced studies to explore safer and more effective DOAC therapy for this population.
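The odds ratios with 95% confidence intervals reported above follow a standard computation. A minimal sketch from a 2x2 table using the Wald interval (the study's regression-adjusted ORs come from a fitted model, not a raw table, so this is illustrative only):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

An OR whose interval excludes 1.0 corresponds to a statistically significant association, as with the Stage 3a stroke finding.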

8.
Article in English | MEDLINE | ID: mdl-38482076

ABSTRACT

Background: Fecal occult blood tests (FOBT) are inappropriately used in patients with melena, hematochezia, coffee-ground emesis, iron deficiency anemia, and diarrhea. The use of FOBT for reasons other than screening for colorectal cancer is considered low-value and unnecessary. Methods: This was a quality improvement project that utilized education, a Best Practice Advisory (BPA), and modification of order sets in the electronic health record (EHR). The interventions were done in sequential order based on the Plan-Do-Study-Act (PDSA) method. An annotated run chart was used to analyze the collected data. Results: Education and the Best Practice Advisory within the EHR led to a significant reduction in the use of FOBT in the emergency department (ED). The interventions eventually led to a consensus and the removal of FOBT from the order set of the EHR for patients in the ED and hospital units. Conclusions: The use of an electronic BPA, education, and modification of order sets in the EHR can be effective at de-implementing unnecessary tests and procedures like FOBT in the ED and hospital units.

9.
Ann Fam Med ; 22(1): 12-18, 2024.
Article in English | MEDLINE | ID: mdl-38253499

ABSTRACT

PURPOSE: The purpose of this study is to evaluate recent trends in primary care physician (PCP) electronic health record (EHR) workload. METHODS: This longitudinal study observed the EHR use of 141 academic PCPs over 4 years (May 2019 to March 2023). Ambulatory full-time equivalency (aFTE), visit volume, and panel size were evaluated. Electronic health record time and inbox message volume were measured per 8 hours of scheduled clinic appointments. RESULTS: From the pre-COVID-19 pandemic year (May 2019 to February 2020) to the most recent study year (April 2022 to March 2023), the average time PCPs spent in the EHR per 8 hours of scheduled clinic appointments increased (+28.4 minutes, 7.8%), as did time in orders (+23.1 minutes, 58.9%), inbox (+14.0 minutes, 24.4%), chart review (+7.2 minutes, 13.0%), notes (+2.9 minutes, 2.3%), outside scheduled hours on days with scheduled appointments (+6.4 minutes, 8.2%), and on unscheduled days (+13.6 minutes, 19.9%). Primary care physicians received more patient medical advice requests (+5.4 messages, 55.5%) and prescription messages (+2.3, 19.5%) per 8 hours of scheduled clinic appointments, but fewer patient calls (-2.8, -10.5%) and results messages (-0.3, -2.7%). While total time in the EHR continued to increase in the final study year (+7.7 minutes, 2.0%), inbox time decreased slightly from the year prior (-2.2 minutes, -3.0%). Primary care physicians' average aFTE decreased 5.2% from 0.66 to 0.63 over 4 years. CONCLUSIONS: Primary care physicians' time in the EHR continues to grow. While PCPs' inbox time may be stabilizing, it is still substantially higher than pre-pandemic levels. It is imperative health systems develop strategies to change the EHR workload trajectory to minimize PCPs' occupational stress and mitigate unnecessary reductions in effective physician workforce resulting from the increased EHR burden.


Subject(s)
Electronic Health Records , Physicians, Primary Care , Humans , Longitudinal Studies , Pandemics , Workload
10.
Cancer ; 130(1): 60-67, 2024 01 01.
Article in English | MEDLINE | ID: mdl-37851512

ABSTRACT

BACKGROUND: A lack of onsite clinical trials is the largest barrier to participation of cancer patients in trials. Development of an automated process for regional trial eligibility screening first requires identification of patient electronic health record data that allows effective trial screening, and evidence that searching for trials regionally has a positive impact compared with site-specific searching. METHODS: To assess a screening framework that would support an automated regional search tool, a set of patient clinical variables was analyzed for prescreening clinical trials. The variables were used to assess regional compared with site-specific screening throughout the United States. RESULTS: Eight core variables from patient electronic health records were identified that yielded likely matches in a prescreen process. Assessment of the screening framework was performed using these variables to search for trials locally and regionally for an 84-patient cohort. Of the trials returned in this prescreen, 45.7% were provisional matches. Expanding the search radius to 20 miles led to a net 91% increase in matches across cancers within the tested cohort. In a U.S. regional analysis, for sparsely populated areas, searching a 100-mile radius using the prescreening framework was needed, whereas for urban areas a 20-mile radius was sufficient. CONCLUSION: A clinical trial screening framework was assessed that uses limited patient data to efficiently and effectively identify prescreen matches for clinical trials. This framework improves trial matching rates when searching regionally compared with locally, although its applicability may vary geographically depending on oncology practice density. PLAIN LANGUAGE SUMMARY: Clinical trials provide cancer patients the opportunity to participate in research and development of new drugs and treatment approaches.
It can be difficult to find available clinical trials for which a patient is eligible. This article describes an approach to clinical trial matching using limited patient data to search for trials regionally, beyond just the patient's local care site. Feasibility testing shows that this process can lead to a net 91% increase in the number of potential clinical trial matches available within 20 miles of a patient. Based on these findings, a software tool based on this model is being developed that will automatically send limited, deidentified information from patient medical records to services that can identify possible clinical trials within a given region.
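The 20- and 100-mile search radii above are distance cutoffs around the patient. A minimal sketch of such a radius filter using the haversine formula (the actual tool's site data and distance method are not described, so this is only an illustration):

```python
import math

def within_radius(patient, sites, radius_miles):
    """Return names of trial sites within a straight-line (great-circle)
    radius of the patient. `patient` and site values are (lat, lon) in
    degrees; site names here are placeholders."""
    R = 3958.8  # mean Earth radius in miles
    lat1, lon1 = map(math.radians, patient)
    out = []
    for name, (lat, lon) in sites.items():
        lat2, lon2 = math.radians(lat), math.radians(lon)
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2)
             * math.sin((lon2 - lon1) / 2) ** 2)
        d = 2 * R * math.asin(math.sqrt(h))
        if d <= radius_miles:
            out.append(name)
    return out
```

Widening `radius_miles` from 20 to 100 is exactly the adjustment the abstract describes for sparsely populated areas.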


Subject(s)
Neoplasms , Humans , Electronic Health Records , Eligibility Determination , Feasibility Studies , Neoplasms/diagnosis , Neoplasms/therapy , Patient Selection , Clinical Trials as Topic
11.
Bioengineering (Basel) ; 10(11)2023 Nov 10.
Article in English | MEDLINE | ID: mdl-38002431

ABSTRACT

BACKGROUND: Although electronic health records (EHR) provide useful insights into disease patterns and patient treatment optimisation, their reliance on unstructured data presents a difficulty. Echocardiography reports, which provide extensive pathology information for cardiovascular patients, are particularly challenging to extract and analyse because of their narrative structure. Although natural language processing (NLP) has been utilised successfully in a variety of medical fields, it is not commonly used in echocardiography analysis. OBJECTIVES: To develop an NLP-based approach for extracting and categorising data from echocardiography reports by accurately converting continuous (e.g., LVOT VTI, AV VTI and TR Vmax) and discrete (e.g., regurgitation severity) outcomes in a semi-structured narrative format into a structured and categorised format, allowing for future research or clinical use. METHODS: 135,062 Trans-Thoracic Echocardiogram (TTE) reports were derived from 146,967 baseline echocardiogram reports and split into three cohorts: Training and Validation (n = 1,075), Test Dataset (n = 98) and Application Dataset (n = 133,889). The NLP system was developed and iteratively refined using medical expert knowledge. The system was used to curate a moderate-fidelity database from extractions of 133,889 reports. A hold-out validation set of 98 reports was blindly annotated and extracted by two clinicians for comparison with the NLP extraction. Agreement, discrimination, accuracy and calibration of outcome measure extractions were evaluated. RESULTS: Continuous outcomes including LVOT VTI, AV VTI and TR Vmax exhibited perfect inter-rater reliability using intra-class correlation scores (ICC = 1.00, p < 0.05) alongside high R2 values, demonstrating an ideal alignment between the NLP system and clinicians.
A good level (ICC = 0.75-0.9, p < 0.05) of inter-rater reliability was observed for outcomes such as LVOT Diam, Lateral MAPSE, Peak E Velocity, Lateral E' Velocity, PV Vmax, Sinuses of Valsalva and Ascending Aorta diameters. Furthermore, the accuracy rate for discrete outcome measures was 91.38% in the confusion matrix analysis, indicating effective performance. CONCLUSIONS: The NLP-based technique yielded good results when it came to extracting and categorising data from echocardiography reports. The system demonstrated a high degree of agreement and concordance with clinician extractions. This study contributes to the effective use of semi-structured data by providing a useful tool for converting semi-structured text to a structured echo report that can be used for data management. Additional validation and implementation in healthcare settings can improve data availability and support research and clinical decision-making.
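Extracting continuous measures such as LVOT VTI from narrative text can be sketched with simple patterns. The report phrasing below ("LVOT VTI: 18.2 cm") is an assumed style; real TTE narratives vary far more, which is what the study's full NLP system handles:

```python
import re

# Illustrative patterns for two continuous echo measures.
PATTERNS = {
    "lvot_vti_cm": re.compile(r"LVOT\s+VTI[:\s]+([\d.]+)\s*cm", re.I),
    "tr_vmax_ms": re.compile(r"TR\s+Vmax[:\s]+([\d.]+)\s*m/s", re.I),
}

def extract_measures(report_text):
    """Return the first match for each measure as a float, or None
    when the measure is not mentioned."""
    out = {}
    for name, pat in PATTERNS.items():
        m = pat.search(report_text)
        out[name] = float(m.group(1)) if m else None
    return out
```

Each extracted value can then be range-checked and categorised (e.g., severity bands) to build the structured database the abstract describes.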

12.
Front Digit Health ; 5: 1275711, 2023.
Article in English | MEDLINE | ID: mdl-38034906

ABSTRACT

Objectives: The development of a standardized technical framework for exchanging electronic health records is widely recognized as a challenging endeavor that necessitates appropriate technological, semantic, organizational, and legal interventions to support the continuity of health and care. In this context, this study delineates a pan-European hackathon aimed at evaluating the efforts undertaken by member states of the European Union to develop a European electronic health record exchange format. This format is intended to facilitate secure cross-border healthcare and optimize service delivery to citizens, paving the way toward a unified European health data space. Methods: The hackathon was conducted within the scope of the X-eHealth project. Interested parties were initially presented with a representative clinical scenario and a set of specifications pertaining to the European electronic health record exchange format, encompassing Laboratory Results Reports, Medical Imaging and Reports, and Hospital Discharge Reports. In addition, five onboarding webinars and two professional training events were organized to support the participating entities. To ensure a minimum acceptable quality threshold, a set of inclusion criteria for participants was outlined for the interested teams. Results: Eight teams participated in the hackathon, showcasing state-of-the-art applications. These teams utilized technologies such as Health Level Seven-Fast Healthcare Interoperability Resources (HL7 FHIR) and Clinical Document Architecture (CDA), alongside pertinent IHE integration profiles. They demonstrated a range of complementary uses and practices, contributing substantial inputs toward the development of future-proof electronic health record management systems. Conclusions: The execution of the hackathon demonstrated the efficacy of such approaches in uniting teams from diverse backgrounds to develop state-of-the-art applications. 
The outcomes produced by the event serve as proof-of-concept demonstrators for managing and preventing chronic diseases, delivering value to citizens, companies, and the research community.
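Laboratory Results Reports of the kind exchanged here are typically carried as HL7 FHIR resources. A minimal sketch of a FHIR R4 `Observation` for one lab result; the field names follow the FHIR specification, while the identifiers and values are illustrative:

```python
def lab_result_observation(patient_id, loinc_code, value, unit):
    """Build a minimal FHIR R4 Observation dict for a laboratory result.
    `patient_id` and `loinc_code` are placeholders for illustration."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "category": [{"coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "laboratory"}]}],
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": loinc_code}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "valueQuantity": {"value": value, "unit": unit},
    }
```

Serialised as JSON, such resources are what FHIR-based teams in the hackathon would exchange across borders.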

13.
J Biomed Inform ; 147: 104522, 2023 11.
Article in English | MEDLINE | ID: mdl-37827476

ABSTRACT

OBJECTIVE: Audit logs in electronic health record (EHR) systems capture interactions of providers with clinical data. We determine if machine learning (ML) models trained using audit logs in conjunction with clinical data ("observational supervision") outperform ML models trained using clinical data alone in clinical outcome prediction tasks, and whether they are more robust to temporal distribution shifts in the data. MATERIALS AND METHODS: Using clinical and audit log data from Stanford Healthcare, we trained and evaluated various ML models including logistic regression, support vector machine (SVM) classifiers, neural networks, random forests, and gradient boosted machines (GBMs) on clinical EHR data, with and without audit logs for two clinical outcome prediction tasks: major adverse kidney events within 120 days of ICU admission (MAKE-120) in acute kidney injury (AKI) patients and 30-day readmission in acute stroke patients. We further tested the best performing models using patient data acquired during different time-intervals to evaluate the impact of temporal distribution shifts on model performance. RESULTS: Performance generally improved for all models when trained with clinical EHR data and audit log data compared with those trained with only clinical EHR data, with GBMs tending to have the overall best performance. GBMs trained with clinical EHR data and audit logs outperformed GBMs trained without audit logs in both clinical outcome prediction tasks: AUROC 0.88 (95% CI: 0.85-0.91) vs. 0.79 (95% CI: 0.77-0.81), respectively, for MAKE-120 prediction in AKI patients, and AUROC 0.74 (95% CI: 0.71-0.77) vs. 0.63 (95% CI: 0.62-0.64), respectively, for 30-day readmission prediction in acute stroke patients. The performance of GBM models trained using audit log and clinical data degraded less in later time-intervals than models trained using only clinical data. 
CONCLUSION: Observational supervision with audit logs improved the performance of ML models trained to predict important clinical outcomes in patients with AKI and acute stroke, and improved robustness to temporal distribution shifts.
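Turning raw audit-log events into per-patient features that can sit alongside clinical features ("observational supervision") can be sketched as simple count aggregation. The action names below are illustrative; real EHR audit logs record hundreds of distinct action types:

```python
from collections import Counter

def audit_log_features(events):
    """Aggregate (patient_id, action) audit-log events into per-patient
    action-count features, ready to concatenate with clinical features
    before model training."""
    feats = {}
    for patient_id, action in events:
        feats.setdefault(patient_id, Counter())[action] += 1
    return {p: dict(c) for p, c in feats.items()}
```

Intuitively, a burst of chart views or order entries encodes clinician attention, signal the clinical data alone does not capture.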


Subject(s)
Acute Kidney Injury , Stroke , Humans , Electronic Health Records , Hospitalization , Prognosis
15.
Front Digit Health ; 5: 1150687, 2023.
Article in English | MEDLINE | ID: mdl-37342866

ABSTRACT

Endometriosis is a chronic, complex disease for which there are vast disparities in diagnosis and treatment between sociodemographic groups. Clinical presentation of endometriosis can vary from asymptomatic disease, often identified during (in)fertility consultations, to dysmenorrhea and debilitating pelvic pain. Because of this complexity, delayed diagnosis (mean time to diagnosis is 1.7-3.6 years) and misdiagnosis are common. Early and accurate diagnosis of endometriosis remains a research priority for patient advocates and healthcare providers. Electronic health records (EHRs) have been widely adopted as a data source in biomedical research. However, they remain a largely untapped source of data for endometriosis research. EHRs capture diverse, real-world patient populations and care trajectories and can be used to learn patterns of underlying risk factors for endometriosis which, in turn, can be used to inform screening guidelines to help clinicians efficiently and effectively recognize and diagnose the disease in all patient populations, reducing inequities in care. Here, we provide an overview of the advantages and limitations of using EHR data to study endometriosis. We describe the prevalence of endometriosis observed in diverse populations from multiple healthcare institutions, examples of variables that can be extracted from EHRs to enhance the accuracy of endometriosis prediction, and opportunities to leverage longitudinal EHR data to improve our understanding of long-term health consequences for all patients.

16.
Health Syst (Basingstoke) ; 12(2): 223-242, 2023.
Article in English | MEDLINE | ID: mdl-37234469

ABSTRACT

The widespread use of Blockchain technology (BT) in developing nations remains in its early stages, necessitating a more comprehensive evaluation using efficient and adaptable approaches. The need for digitalization to boost operational effectiveness is growing in the healthcare sector. Despite BT's potential as a competitive option for the healthcare sector, insufficient research has prevented it from being fully utilised. This study intends to identify the main sociological, economic, and infrastructure obstacles to BT adoption in developing nations' public health systems. To accomplish this goal, the study employs a multi-level analysis of blockchain hurdles using a hybrid approach. The study's findings provide decision-makers with guidance on how to proceed, as well as insight into implementation challenges.

17.
Front Neurol ; 14: 1108222, 2023.
Article in English | MEDLINE | ID: mdl-37153672

ABSTRACT

Objective: We retrospectively screened 350,116 electronic health records (EHRs) to identify suspected patients for Pompe disease. Using these suspected patients, we then describe their phenotypical characteristics and estimate the prevalence in the respective population covered by the EHRs. Methods: We applied Symptoma's Artificial Intelligence-based approach for identifying rare disease patients to retrospective anonymized EHRs provided by the "University Hospital Salzburg" clinic group. Within 1 month, the AI screened 350,116 EHRs reaching back 15 years from five hospitals, and 104 patients were flagged as probable for Pompe disease. Flagged patients were manually reviewed and assessed by generalist and specialist physicians for their likelihood for Pompe disease, from which the performance of the algorithms was evaluated. Results: Of the 104 patients flagged by the algorithms, generalist physicians found five "diagnosed," 10 "suspected," and seven patients with "reduced suspicion." After feedback from Pompe disease specialist physicians, 19 patients remained clinically plausible for Pompe disease, resulting in a specificity of 18.27% for the AI. Estimating from the remaining plausible patients, the prevalence of Pompe disease for the greater Salzburg region [incl. Bavaria (Germany), Styria (Austria), and Upper Austria (Austria)] was one in every 18,427 people. Phenotypes for patient cohorts with an approximated onset of symptoms above or below 1 year of age were established, which correspond to infantile-onset Pompe disease (IOPD) and late-onset Pompe disease (LOPD), respectively. Conclusion: Our study shows the feasibility of Symptoma's AI-based approach for identifying rare disease patients using retrospective EHRs. Via the algorithm's screening of an entire EHR population, a physician had only to manually review 5.47 patients on average to find one suspected candidate. 
This efficiency is crucial because Pompe disease, while rare, is a progressively debilitating but treatable neuromuscular disease. We thus demonstrated both the efficiency of the approach and its potential as a scalable solution for the systematic identification of rare disease patients. Similar implementations of this methodology should be encouraged to improve care for all rare disease patients.
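The headline figures in this abstract follow directly from the counts it reports; a minimal arithmetic check (variable names are mine, and "specificity" here is taken as the abstract uses it, i.e., the fraction of flagged patients that remained plausible):

```python
# Reproduce the screening metrics reported in the Pompe disease abstract.
screened = 350_116   # EHRs screened by the AI
flagged = 104        # patients flagged as probable for Pompe disease
plausible = 19       # patients remaining clinically plausible after review

fraction_plausible = plausible / flagged       # fraction of flags confirmed plausible
reviews_per_candidate = flagged / plausible    # manual reviews per plausible candidate
prevalence_denominator = screened / plausible  # "one in every N people"

print(round(fraction_plausible * 100, 2))   # 18.27
print(round(reviews_per_candidate, 2))      # 5.47
print(round(prevalence_denominator))        # 18427
```

The "one in every 18,427" prevalence estimate simply divides the screened population by the 19 plausible patients, which is why it matches the EHR count almost exactly.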

18.
Stud Health Technol Inform ; 302: 192-196, 2023 May 18.
Article in English | MEDLINE | ID: mdl-37203645

ABSTRACT

The high investment required to deploy a new Electronic Health Record (EHR) makes it necessary to understand its effect on usability (effectiveness, efficiency, and user satisfaction). This paper describes the evaluation of user satisfaction based on data gathered from three Northern Norway Health Trust hospitals. A questionnaire collected responses about user satisfaction with the newly adopted EHR. A regression model reduced the number of satisfaction items from 15 to nine; the resulting score represents user satisfaction with EHR features. The results show positive satisfaction with the newly introduced EHR, attributed to proper EHR transition planning and the vendor's previous experience with the hospitals involved.
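The abstract does not specify how the regression model selected nine of the 15 items, but one common pattern is to regress an overall satisfaction score on all items and retain those with the largest contributions. A purely illustrative sketch on synthetic Likert-scale data (all names and the selection rule are assumptions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(1)
n_resp, n_items = 200, 15

# Synthetic 5-point Likert responses to 15 satisfaction items
items = rng.integers(1, 6, size=(n_resp, n_items)).astype(float)
# Synthetic overall-satisfaction score, driven here by the first nine items
overall = items[:, :9].mean(axis=1) + rng.normal(0, 0.3, n_resp)

# Regress overall satisfaction on all items (with an intercept) and keep
# the nine items with the largest absolute coefficients
A = np.column_stack([np.ones(n_resp), items])
coef = np.linalg.lstsq(A, overall, rcond=None)[0][1:]  # drop the intercept
kept = np.argsort(np.abs(coef))[-9:]
print(sorted(kept.tolist()))
```

In a real analysis the retained items would then be combined into the "EHR Features Satisfaction" score the paper reports.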


Subject(s)
Electronic Health Records , User-Computer Interface , Hospitals , Personal Satisfaction , Commerce
19.
Front Pharmacol ; 14: 1110036, 2023.
Article in English | MEDLINE | ID: mdl-36825151

ABSTRACT

Objectives: To describe sex and gender differences in treatment initiation and in the socio-demographic and clinical characteristics of all patients initiating an oral anticoagulant (OAC), and sex and gender differences in prescribed doses and in adherence and persistence among those receiving direct oral anticoagulants (DOACs). Material and methods: Cohort study including patients with non-valvular atrial fibrillation (NVAF) who initiated an OAC in 2011-2020. Data were obtained from SIDIAP, the Information System for Research in Primary Care, in Catalonia, Spain. Results: 123,250 people initiated an OAC, 46.9% women and 53.1% men. Women were older, and clinical characteristics differed between genders. Women had a higher baseline risk of stroke than men, were more frequently underdosed with DOACs, and discontinued DOACs less frequently than men. Conclusion: We described the dose adequacy of patients receiving DOACs, finding a high frequency of underdosing that was significantly higher in women than in men. Adherence was generally high, with higher levels in women only for rivaroxaban. Persistence during the first year of treatment was also generally high, with women significantly more persistent than men for dabigatran and edoxaban. Dose inadequacy and poor adherence and persistence can result in less effective and less safe treatment. Studies analysing sex and gender differences in health and disease are needed.

20.
Prim Care Diabetes ; 17(1): 43-47, 2023 02.
Article in English | MEDLINE | ID: mdl-36437216

ABSTRACT

AIMS: To identify substance use disorder (SUD) patterns and their association with health outcomes among patients with type 2 diabetes mellitus (T2DM) and hypertension. METHODS: We used latent class analysis on electronic health records from the MetroHealth System (Cleveland, Ohio) to obtain the target SUD groups: i) tobacco only (TUD), ii) tobacco and alcohol (TAUD), and iii) tobacco, alcohol, and at least one other substance (PSUD). A matching program using Mahalanobis distance within propensity score calipers created the matched control groups: no SUD (NSUD) for TUD, and TUD for the other two SUD groups. The numbers of participants for the target-control groups were 8009 (TUD), 1672 (TAUD), and 642 (PSUD). RESULTS: TUD was significantly associated with T2DM complications. Compared with TUD, the TAUD group showed a significantly higher likelihood of all-cause mortality (adjusted odds ratio (aOR) = 1.46) but not of any T2DM complication. Compared with TUD, the PSUD group had a significantly higher risk of cerebrovascular accident (CVA) (aOR = 2.19), diabetic neuropathy (aOR = 1.76), myocardial infarction (MI) (aOR = 1.76), and all-cause mortality (aOR = 1.66). CONCLUSIONS: The finding of increased risk associated with PSUD may provide insights for better management of patients with co-occurring T2DM and hypertension.
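The matching step described here, Mahalanobis-distance matching within propensity score calipers, can be sketched on synthetic data. This is an illustration of the general technique only, not the study's implementation: the covariates, the fixed propensity model standing in for a fitted one, and the greedy 1:1 rule are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data: covariates and group labels (True = target SUD group)
n, p = 400, 3
X = rng.normal(size=(n, p))
treated = rng.random(n) < 0.2

# Step 1: propensity score. A fixed logistic model stands in for a fitted one;
# in practice it would be estimated, e.g. by logistic regression.
beta = np.array([0.5, -0.3, 0.2])
ps = 1.0 / (1.0 + np.exp(-(X @ beta)))

# Step 2: Mahalanobis distance on the covariates
VI = np.linalg.inv(np.cov(X, rowvar=False))  # inverse sample covariance
def mahalanobis(a, b):
    d = a - b
    return float(np.sqrt(d @ VI @ d))

# Step 3: greedy 1:1 matching, restricted to controls whose propensity score
# lies within the caliper of the treated patient's score
caliper = 0.2 * ps.std()
controls = set(np.flatnonzero(~treated).tolist())
matches = {}
for i in np.flatnonzero(treated):
    eligible = [j for j in controls if abs(ps[i] - ps[j]) <= caliper]
    if eligible:
        j = min(eligible, key=lambda c: mahalanobis(X[i], X[c]))
        matches[int(i)] = j
        controls.remove(j)  # match without replacement

print(f"matched {len(matches)} of {int(treated.sum())} target-group patients")
```

The caliper keeps matches comparable on overall treatment propensity, while the Mahalanobis distance picks the closest control on the individual covariates among those eligible.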


Subject(s)
Diabetes Mellitus, Type 2 , Hypertension , Substance-Related Disorders , Tobacco Use Disorder , Humans , Diabetes Mellitus, Type 2/diagnosis , Diabetes Mellitus, Type 2/epidemiology , Diabetes Mellitus, Type 2/complications , Tobacco Use Disorder/complications , Electronic Health Records , Substance-Related Disorders/diagnosis , Substance-Related Disorders/epidemiology , Substance-Related Disorders/complications , Hypertension/diagnosis , Hypertension/epidemiology , Outcome Assessment, Health Care