Results 1 - 20 of 121
1.
Life (Basel) ; 14(6)2024 May 21.
Article in English | MEDLINE | ID: mdl-38929638

ABSTRACT

Artificial intelligence models represented in machine learning algorithms are promising tools for risk assessment used to guide clinical and other health care decisions. Machine learning algorithms, however, may house biases that propagate stereotypes, inequities, and discrimination that contribute to socioeconomic health care disparities. These include biases related to sociodemographic characteristics such as race, ethnicity, gender, age, insurance, and socioeconomic status that arise from the use of erroneous electronic health record data. Additionally, there is concern that training data and algorithmic biases in large language models pose potential drawbacks. These biases affect the lives and livelihoods of a significant percentage of the population in the United States and globally. The social and economic consequences of the associated backlash should not be underestimated. Here, we outline some of the sociodemographic, training data, and algorithmic biases that undermine sound health care risk assessment and medical decision-making and that should be addressed in the health care system. We present a perspective and overview of these biases: biases by gender, race, ethnicity, and age; biases affecting historically marginalized communities; algorithmic bias; biased evaluations; implicit bias; selection/sampling bias; socioeconomic status biases; biased data distributions; cultural and insurance status biases; and confirmation, information, and anchoring biases. Finally, we make recommendations to improve large language model training data, including de-biasing techniques such as counterfactual role-reversed sentences during knowledge distillation, fine-tuning, prefix attachment at training time, the use of toxicity classifiers, retrieval-augmented generation, and algorithmic modification, to mitigate these biases moving forward.
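As a purely illustrative aside, the counterfactual role-reversal technique named above can be sketched as a simple data-augmentation step; the word pairs, function name, and example sentence below are hypothetical, not drawn from the article.

```python
# Hypothetical sketch: pair each training sentence with a role-reversed
# counterfactual so a distilled or fine-tuned model sees both variants.
SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him",
         "man": "woman", "woman": "man"}

def role_reverse(sentence: str) -> str:
    # Naive token swap; real use needs case, punctuation, and
    # possessive disambiguation ("her chart" -> "his chart").
    return " ".join(SWAPS.get(tok.lower(), tok) for tok in sentence.split())

corpus = ["she reported chest pain and he dismissed it"]
augmented = corpus + [role_reverse(s) for s in corpus]
print(augmented[1])  # "he reported chest pain and she dismissed it"
```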

2.
Commun Biol ; 7(1): 529, 2024 May 04.
Article in English | MEDLINE | ID: mdl-38704509

ABSTRACT

Intra-organism biodiversity is thought to arise from epigenetic modification of constituent genes and post-translational modifications of translated proteins. Here, we show that post-transcriptional modifications, like RNA editing, may also contribute. RNA editing enzymes APOBEC3A and APOBEC3G catalyze the deamination of cytosine to uracil. RNAsee (RNA site editing evaluation) is a computational tool developed to predict the cytosines edited by these enzymes. We find that 4.5% of non-synonymous DNA single nucleotide polymorphisms that result in cytosine to uracil changes in RNA are probable sites for APOBEC3A/G RNA editing; the variant proteins created by such polymorphisms may also result from transient RNA editing. These polymorphisms are associated with over 20% of Medical Subject Headings across ten categories of disease, including nutritional and metabolic, neoplastic, cardiovascular, and nervous system diseases. Because RNA editing is transient and not organism-wide, future work is necessary to confirm the extent and effects of such editing in humans.


Subject(s)
APOBEC Deaminases , Cytidine Deaminase , RNA Editing , Humans , Cytidine Deaminase/metabolism , Cytidine Deaminase/genetics , Polymorphism, Single Nucleotide , Cytosine/metabolism , APOBEC-3G Deaminase/metabolism , APOBEC-3G Deaminase/genetics , Uracil/metabolism , Proteins/genetics , Proteins/metabolism , Cytosine Deaminase/genetics , Cytosine Deaminase/metabolism
3.
JMIR Public Health Surveill ; 10: e49841, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38687984

ABSTRACT

BACKGROUND: There have been over 772 million confirmed cases of COVID-19 worldwide. A significant portion of these infections will lead to long COVID (post-COVID-19 condition) and its attendant morbidities and costs. Numerous life-altering complications have already been associated with the development of long COVID, including chronic fatigue, brain fog, and dangerous heart rhythms. OBJECTIVE: We aim to derive an actionable long COVID case definition consisting of significantly increased signs, symptoms, and diagnoses to support pandemic-related clinical, public health, research, and policy initiatives. METHODS: This research employs a population-based case-crossover study using International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) data generated at Veterans Affairs medical centers nationwide between January 1, 2020, and August 18, 2022. In total, 367,148 individuals with ICD-10-CM data both before and after a positive COVID-19 test were selected for analysis. We compared ICD-10-CM codes assigned 1 to 7 months following each patient's positive test with those assigned up to 6 months prior. Further, 350,315 patients had novel codes assigned during this window of time. We defined signs, symptoms, and diagnoses as being associated with long COVID if they had a novel case frequency of ≥1:1000 and significantly increased in our entire cohort after a positive test. We present odds ratios with CIs for long COVID signs, symptoms, and diagnoses, organized by ICD-10-CM functional groups and medical specialty. We used our definition to assess long COVID risk based on a patient's demographics, Elixhauser score, vaccination status, and COVID-19 disease severity. RESULTS: We developed a long COVID definition consisting of 323 ICD-10-CM diagnosis codes grouped into 143 ICD-10-CM functional groups that were significantly increased in our 367,148-patient post-COVID-19 population. We defined 17 medical-specialty long COVID subtypes such as cardiology long COVID. Patients who were COVID-19-positive developed signs, symptoms, or diagnoses included in our long COVID definition at a proportion of at least 59.7% (268,320/449,450, based on a denominator of all patients who were COVID-19-positive). The long COVID cohort was 8 years older, with more comorbidities (2-year Elixhauser score 7.97 in the patients with long COVID vs 4.21 in the patients without long COVID). Patients who had a more severe bout of COVID-19, as judged by their minimum oxygen saturation level, were also more likely to develop long COVID. CONCLUSIONS: An actionable, data-driven definition of long COVID can help clinicians screen for and diagnose long COVID, allowing identified patients to be admitted into appropriate monitoring and treatment programs. This long COVID definition can also support public health, research, and policy initiatives. Patients with COVID-19 who are older, have low oxygen saturation levels during their bout of COVID-19, or have multiple comorbidities should be preferentially watched for the development of long COVID.
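To make the "significantly increased" criterion concrete, here is a minimal sketch of the kind of per-code pre/post comparison such a definition implies; the counts are invented placeholders, and this is not the study's actual statistical pipeline.

```python
import math

def odds_ratio_ci(pre_cases, post_cases, n, z=1.96):
    """Odds ratio for a code appearing post- vs pre-infection in a cohort
    of n patients, with a Wald 95% CI on the log-odds scale."""
    a, b = post_cases, n - post_cases   # post-infection: with / without code
    c, d = pre_cases, n - pre_cases     # pre-infection: with / without code
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Placeholder counts for one hypothetical ICD-10-CM code
print(odds_ratio_ci(pre_cases=600, post_cases=1500, n=367148))
```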


Subject(s)
COVID-19 , Cross-Over Studies , Post-Acute COVID-19 Syndrome , Humans , COVID-19/epidemiology , COVID-19/complications , Risk Factors , Male , Female , Middle Aged , United States/epidemiology , Aged , International Classification of Diseases , Adult
4.
Subst Use ; 18: 11782218231223673, 2024.
Article in English | MEDLINE | ID: mdl-38433747

ABSTRACT

Reportedly, patients with opioid use disorder (OUD) who are on buprenorphine/naloxone medications can perform various urine manipulations to disguise their noncompliance with treatment. One type of manipulation, known as "spiking" adulteration, involves dipping a buprenorphine/naloxone film directly into the urine sample. Identifying this type of urine manipulation has been the aim of many previous studies, which have revealed urine adulterations through inappropriately high levels of "buprenorphine" and "naloxone" and a very small amount of "norbuprenorphine." Does the small amount of "norbuprenorphine" in adulterated urine samples result from the dipped buprenorphine/naloxone film, or is it a residual metabolite of buprenorphine in the patient's system? This pilot study utilized 12 urine samples from 12 participants, as well as water samples as a control. The samples were subdivided by dipping area and time, as well as by the temperature and concentration of the urine samples, and a sublingual generic buprenorphine/naloxone film was dipped directly into each sample. The levels of "buprenorphine," "norbuprenorphine," "naloxone," "buprenorphine-glucuronide," and "norbuprenorphine-glucuronide" were then examined by liquid chromatography with tandem mass spectrometry (LC-MS/MS). The results of this study showed that high levels of "buprenorphine" and "naloxone" and a small amount of "norbuprenorphine" were detected in both the urine and water samples when the buprenorphine/naloxone film was dipped directly into them. However, no "buprenorphine-glucuronide" or "norbuprenorphine-glucuronide" was detected in any of the samples. In addition, the area and timing of dipping altered "buprenorphine" and "naloxone" levels, but concentration and temperature did not. These findings could help providers interpret their patients' urine drug test results more accurately, allowing them to monitor treatment compliance and identify manipulation.

5.
J Clin Transl Sci ; 8(1): e39, 2024.
Article in English | MEDLINE | ID: mdl-38476245

ABSTRACT

Objective: Social Determinants of Health (SDOH) greatly influence health outcomes. SDOH surveys, such as the Assessing Circumstances & Offering Resources for Needs (ACORN) survey, have been developed to screen for SDOH in Veterans. The purpose of this study was to determine the terminological representation of the ACORN survey in order to aid natural language processing (NLP). Methods: Each ACORN survey question was read to determine its concepts. Next, Solor was searched for each of the concepts and for the appropriate attributes. If no attributes or concepts existed, they were proposed. Then, each question's concepts and attributes were arranged into subject-relation-object triples. Results: Eleven unique attributes and 18 unique concepts were proposed. These results demonstrate a gap in the terminological representation of SDOH. We believe that using these new concepts and relations will improve NLP and, thus, the care provided to Veterans.
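For illustration only, a subject-relation-object triple of the kind described might be encoded as below; the concept and relation strings are hypothetical stand-ins, not the study's proposed Solor concepts.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str    # concept drawn from a survey question
    relation: str   # attribute linking subject to object
    obj: str        # value concept

# Hypothetical encoding of an ACORN-style housing question
housing = Triple(subject="Housing status (observable entity)",
                 relation="Has interpretation",
                 obj="Housing insecurity (finding)")
print(housing)
```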

6.
JMIR Med Inform ; 12: e42271, 2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38354033

ABSTRACT

BACKGROUND: Infants born at extremely preterm gestational ages are typically admitted to the neonatal intensive care unit (NICU) after initial resuscitation. The subsequent hospital course can be highly variable, and despite counseling aided by available risk calculators, there are significant challenges with shared decision-making regarding life support and transition to end-of-life care. Improving predictive models can help providers and families navigate these unique challenges. OBJECTIVE: Machine learning methods have previously demonstrated added predictive value for determining intensive care unit outcomes, and their use allows consideration of a greater number of factors that potentially influence newborn outcomes, such as maternal characteristics. Machine learning-based models were analyzed for their ability to predict the survival of extremely preterm neonates at initial admission. METHODS: Maternal and newborn information was extracted from the health records of infants born between 23 and 29 weeks of gestation in the Medical Information Mart for Intensive Care III (MIMIC-III) critical care database. Applicable machine learning models predicting survival during the initial NICU admission were developed and compared. The same models were also examined using only features that would be available prepartum, for the purpose of survival prediction prior to an anticipated preterm birth. Features most correlated with the predicted outcome were determined when possible for each model. RESULTS: Of the 459 included patients, 37 (8.1%) died. The resulting random forest model showed higher predictive performance than the frequently used Score for Neonatal Acute Physiology With Perinatal Extension II (SNAPPE-II) NICU model when considering extremely preterm infants of very low birth weight. Several other machine learning models were found to have good performance but did not show a statistically significant difference from previously available models in this study. Feature importance varied by model, and those of greater importance included gestational age; birth weight; initial oxygenation level; elements of the APGAR (appearance, pulse, grimace, activity, and respiration) score; and amount of blood pressure support. Important prepartum features also included maternal age, steroid administration, and the presence of pregnancy complications. CONCLUSIONS: Machine learning methods have the potential to provide robust prediction of survival in the context of extremely preterm births and allow for consideration of additional factors such as maternal clinical and socioeconomic information. Evaluation of larger, more diverse data sets may provide additional clarity on comparative performance.
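A minimal sketch of the modeling approach described, assuming a tabular matrix of maternal and newborn features; the placeholder data and feature set below are illustrative, not the MIMIC-III extraction.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Rows = infants; columns might be gestational age, birth weight,
# initial oxygenation, Apgar elements, maternal age, etc. (illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(459, 6))   # placeholder features
y = rng.random(459) < 0.081     # ~8.1% mortality, as in the study

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))  # holdout AUC
print(model.feature_importances_)  # per-feature importance, as reported
```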

7.
bioRxiv ; 2023 Jul 31.
Article in English | MEDLINE | ID: mdl-37577456

ABSTRACT

Intra-organism biodiversity is thought to arise from epigenetic modification of our constituent genes and from post-translational modifications after mRNA is translated into proteins. We have found that post-transcriptional modification, also known as RNA editing, is responsible for a significant amount of our biodiversity as well, substantively expanding this story. The APOBEC (apolipoprotein B mRNA editing catalytic polypeptide-like) family RNA editing enzymes APOBEC3A and APOBEC3G catalyze the deamination of cytosines to uracils (C>U) in specific stem-loop structures [1,2]. We used RNAsee (RNA site editing evaluation), a tool developed to predict the locations of APOBEC3A/G RNA editing sites, to determine whether known single nucleotide polymorphisms (SNPs) in DNA could be replicated in RNA via RNA editing. About 4.5% of non-synonymous SNPs that result in C>U changes in RNA, and about 5.4% of such SNPs labeled as pathogenic, were identified as probable sites for APOBEC3A/G editing. This suggests that the variant proteins created by these DNA mutations may also be created by transient RNA editing, with the potential to affect human health. The SNPs identified as potential APOBEC3A/G-mediated RNA editing sites were disproportionately associated with cardiovascular diseases, digestive system diseases, and musculoskeletal diseases. Future work should focus on common sites of RNA editing, any variant proteins created by these RNA editing sites, and the effects of these variants on protein diversity and human health. Classically, our biodiversity is thought to come from our constitutive genetics, epigenetic phenomena, transcriptional differences, and post-translational modification of proteins. Here, we have shown evidence that RNA editing, often stimulated by environmental factors, could account for a significant degree of the protein biodiversity leading to human disease. In an era where worries about our changing environment are ever increasing, from the warming of our climate to the emergence of new diseases to the infiltration of microplastics and pollutants into our bodies, understanding how environmentally sensitive mechanisms like RNA editing affect our own cells is essential.

8.
J Biomed Inform ; 144: 104443, 2023 08.
Article in English | MEDLINE | ID: mdl-37455008

ABSTRACT

OBJECTIVE: Despite the high prevalence of alcohol use disorder (AUD) in the United States, limited research is focused on the associations among AUD, pain, and opioid/benzodiazepine use. In particular, little is known about the potential risk of pain diagnoses, pain prescriptions, and subsequent misuse among individuals with a history of AUD. The objective was to develop a tailored dataset by linking data from 2 New York State (NYS) administrative databases to investigate a series of hypotheses related to AUD and painful medical disorders. METHODS: Data from the NYS Office of Addiction Services and Supports (OASAS) Client Data System (CDS) and Medicaid claims data from the NYS Department of Health Medicaid Data Warehouse (MDW) were merged using a stepwise deterministic method. Multiple patient-level identifier combinations were applied to create linkage rules. We included patients aged 18 and older from the OASAS CDS who initially entered treatment with a primary substance use of alcohol and no use of opioids between January 1, 2003, and September 23, 2019. This cohort was then linked to corresponding Medicaid claims. RESULTS: A total of 177,685 individuals with a primary AUD problem and no opioid use history were included in the dataset. Of these, 37,346 (21.0%) patients had an OUD diagnosis, and 3,365 (1.9%) patients experienced an opioid overdose. There were 121,865 (68.6%) patients found to have a pain condition. CONCLUSION: The integrated database allows researchers to examine the associations among AUD, pain, and opioid/benzodiazepine use and to propose hypotheses to improve outcomes for at-risk patients. The findings of this study can contribute to the development of a prognostic prediction model and the analysis of longitudinal outcomes to improve the care of patients with AUD.
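A stepwise deterministic merge of the kind described can be sketched as a cascade of exact-match rules from strictest to loosest; the identifier fields and rule order below are hypothetical, not the study's actual linkage keys.

```python
import pandas as pd

def stepwise_link(cds: pd.DataFrame, medicaid: pd.DataFrame) -> pd.DataFrame:
    """Link records by progressively looser identifier combinations;
    each CDS record (keyed by 'cds_id') links at the first rule it satisfies."""
    rules = [["ssn", "dob", "sex"],           # strictest rule first
             ["last_name", "dob", "sex"],
             ["last_name", "zip", "dob"]]     # loosest rule last
    linked, remaining = [], cds
    for keys in rules:
        merged = remaining.merge(medicaid, on=keys, how="inner")
        linked.append(merged)
        # drop already-linked records before trying the next, looser rule
        remaining = remaining[~remaining["cds_id"].isin(merged["cds_id"])]
    return pd.concat(linked, ignore_index=True)
```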


Subject(s)
Alcoholism , Opioid-Related Disorders , Humans , United States/epidemiology , Analgesics, Opioid/therapeutic use , Alcoholism/diagnosis , Alcoholism/epidemiology , Alcoholism/drug therapy , New York/epidemiology , Information Sources , Opioid-Related Disorders/therapy , Opioid-Related Disorders/drug therapy , Pain/drug therapy , Pain/epidemiology , Pain/chemically induced , Benzodiazepines
9.
Stud Health Technol Inform ; 304: 21-25, 2023 Jun 22.
Article in English | MEDLINE | ID: mdl-37347563

ABSTRACT

Perceptions of errors associated with healthcare information technology (HIT) often depend on the context and position of the viewer. HIT vendors posit very different causes of errors than clinicians, implementation teams, or IT staff. Even within the same hospital, members of departments and services often implicate other departments. Organizations may attribute errors to external care partners that refer patients, such as nursing homes or outside clinics. Also, the various clinical roles within an organization (e.g., physicians, nurses, pharmacists) can conceptualize errors and their root causes differently. Overarching all these perceptual factors, the definitions, mechanisms, and incidence of HIT-related errors remain remarkably contested. There is no universal standard for defining or counting these errors. This paper attempts to enumerate and clarify the issues related to differential perceptions of medical errors associated with HIT. It then suggests solutions.


Subject(s)
Electronic Health Records , Medical Errors , Humans , Hospitals
10.
J Clin Transl Sci ; 7(1): e55, 2023.
Article in English | MEDLINE | ID: mdl-37008615

ABSTRACT

Introduction: It is important for SARS-CoV-2 vaccine providers, vaccine recipients, and those not yet vaccinated to be well informed about vaccine side effects. We sought to estimate the risk of post-vaccination venous thromboembolism (VTE) to meet this need. Methods: We conducted a retrospective cohort study to quantify the excess VTE risk associated with SARS-CoV-2 vaccination in US veterans age 45 and older using data from the Department of Veterans Affairs (VA) National Surveillance Tool. The vaccinated cohort received at least one dose of a SARS-CoV-2 vaccine at least 60 days prior to 3/06/22 (N = 855,686). The control group comprised those not vaccinated (N = 321,676). All patients had at least one negative COVID-19 test before vaccination. The main outcome was VTE documented by ICD-10-CM codes. Results: Vaccinated persons had a VTE rate of 1.3755 (CI: 1.3752-1.3758) per thousand, which was 0.1 percent over the baseline rate of 1.3741 (CI: 1.3738-1.3744) per thousand in the unvaccinated patients, or 1.4 excess cases per 1,000,000. All vaccine types showed a minimally increased rate of VTE (rate of VTE per 1000 was 1.3761 (CI: 1.3754-1.3768) for Janssen; 1.3757 (CI: 1.3754-1.3761) for Pfizer; and 1.3757 (CI: 1.3748-1.3877) for Moderna). The small differences in rates comparing either the Janssen or Pfizer vaccine to Moderna were statistically significant (p < 0.001). Adjusting for age, sex, BMI, 2-year Elixhauser score, and race, the vaccinated group had a minimally higher relative risk of VTE compared to controls (1.0009927; CI: 1.0007673-1.0012181; p < 0.001). Conclusion: The results provide reassurance that there is only a trivial increased risk of VTE with the current US SARS-CoV-2 vaccines used in veterans older than age 45. This risk is significantly less than the VTE risk among hospitalized COVID-19 patients. The risk-benefit ratio favors vaccination, given the VTE rate, mortality, and morbidity associated with COVID-19 infection.
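As a worked check of the headline arithmetic, the excess-case figure follows directly from the two reported rates:

```python
vaccinated_rate = 1.3755    # reported VTE cases per 1,000 vaccinated
unvaccinated_rate = 1.3741  # reported VTE cases per 1,000 unvaccinated

excess_per_thousand = vaccinated_rate - unvaccinated_rate
excess_per_million = excess_per_thousand * 1_000
print(f"{excess_per_million:.1f} excess cases per million")  # 1.4
```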

11.
Subst Abuse ; 17: 11782218231153748, 2023.
Article in English | MEDLINE | ID: mdl-36937705

ABSTRACT

Background: Using a 1-year chart review as the data, Furo et al. studied the association between buprenorphine dose and the urine "norbuprenorphine" to "creatinine" ratio and found significant differences in the ratio among the 8-, 12-, and 16-mg/day groups with an analysis of variance (ANOVA) test. The present study expands the data to a 2-year chart review and is intended to delineate the association between buprenorphine dose and the urine "norbuprenorphine" to "creatinine" ratio with higher statistical power. Methods: This study performed a 2-year chart review of data for patients living in a halfway house setting, where their drug administration was closely monitored. The patients were on buprenorphine prescribed at an outpatient clinic for opioid use disorder (OUD), and their buprenorphine prescription and dispensing information were confirmed by the New York Prescription Drug Monitoring Program (PDMP). Urine test results in the electronic health record (EHR) were reviewed, focusing on the "buprenorphine," "norbuprenorphine," and "creatinine" levels. The Kruskal-Wallis H and Mann-Whitney U tests were performed to examine the association between buprenorphine dose and the "norbuprenorphine" to "creatinine" ratio. Results: This study included 371 urine samples from 61 consecutive patients and analyzed the data in a manner similar to that described in the study by Furo et al. This study had similar findings, with the following exceptions: (1) a mean buprenorphine dose of 11.0 ± 3.8 mg/day with a range of 2 to 20 mg/day; (2) exclusion of 6 urine samples with a "creatinine" level <20 mg/dL; (3) minimum "norbuprenorphine" to "creatinine" ratios in the 8-, 12-, and 16-mg/day groups of 0.44 × 10⁻⁴ (n = 68), 0.1 × 10⁻⁴ (n = 133), and 1.37 × 10⁻⁴ (n = 82), respectively; however, after removing the 2 lowest outliers, the minimum "norbuprenorphine" to "creatinine" ratio in the 12-mg/day group was 1.6 × 10⁻⁴, similar to the findings in the previous study; and (4) a significant association between buprenorphine dose and the urine "norbuprenorphine" to "creatinine" ratios from the Kruskal-Wallis test (P < .01). In addition, the median "norbuprenorphine" to "creatinine" ratio had a strong association with buprenorphine dose, which could be formulated as y = 2.266 ln(x) + 0.8211, where x is the dose in mg/day and y is the median ratio × 10⁻⁴. Accordingly, the median ratios in the 8-, 12-, and 16-mg/day groups were 5.53 × 10⁻⁴, 6.45 × 10⁻⁴, and 7.10 × 10⁻⁴, respectively. Therefore, any of the following features should alert providers to further investigate patient treatment compliance: (1) inappropriate substance(s) in the urine sample; (2) "creatinine" level <20 mg/dL; (3) "buprenorphine" to "norbuprenorphine" ratio >50:1; (4) buprenorphine dose >24 mg/day; or (5) "norbuprenorphine" to "creatinine" ratio <0.5 × 10⁻⁴ in patients who are on 8 mg/day or <1.5 × 10⁻⁴ in patients who are on 12 mg/day or more. Conclusion: The results of the present study confirmed those of the previous study regarding the association between buprenorphine dose and the "norbuprenorphine" to "creatinine" ratio, using an expanded data set. Additionally, this study delineated a clearer relationship, focusing on the median "norbuprenorphine" to "creatinine" ratios in the different buprenorphine dose groups. These results could help providers interpret urine test results more accurately and apply them in outpatient opioid treatment programs for optimal treatment outcomes.
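The fitted curve can be checked directly; evaluating it at the three dose groups reproduces the reported medians:

```python
import math

def median_ratio(dose_mg_per_day: float) -> float:
    """Median norbuprenorphine-to-creatinine ratio (in units of 10^-4)
    from the study's fitted model y = 2.266 ln(x) + 0.8211."""
    return 2.266 * math.log(dose_mg_per_day) + 0.8211

for dose in (8, 12, 16):
    print(dose, round(median_ratio(dose), 2))  # 5.53, 6.45, 7.1
```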

12.
Surg Open Sci ; 12: 29-34, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36926590

ABSTRACT

Background: Acute postoperative pain is common following many types of surgery, and a significant subset of patients experience severe pain, which can be difficult to manage and result in postoperative complications. Opioid agonists are commonly used to treat severe postoperative pain, but their use has been associated with adverse outcomes. This retrospective study uses data from the Veterans Affairs Surgical Quality Improvement Program (VASQIP) database to develop a postoperative Pain Severity Scale (PSS) based on subjective pain reports and postoperative opioid requirements. Methods: Postoperative pain scores and opioid prescription data were extracted from the VASQIP database for surgeries occurring between 2010 and 2020. Procedures were grouped by surgical Current Procedural Terminology (CPT) codes, and a total of 165,321 surgical procedures were examined, representing 1141 distinct CPT codes. K-means clustering analysis was used to group the surgeries based on 24-h maximum pain, 72-h average pain, and postoperative opioid prescriptions. Results: K-means clustering analysis showed two optimal grouping strategies, one with 3 groups and the other with 5. Both clustering strategies produced a PSS that categorized surgical procedures with generally increasing pain scores and opioid requirements. The 5-group PSS accurately captured the typical postoperative pain experience across a range of procedures. Conclusions: K-means clustering produced a Pain Severity Scale that can distinguish typical postoperative pain for a large variety of surgical procedures based on subjective and objective clinical data. The PSS will facilitate research into optimal postoperative pain management and could be used in the development of clinical decision support tools.
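A minimal sketch of the clustering step, assuming one row per CPT code with the three features named above; the data are random placeholders, not VASQIP values.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns: 24-h max pain (0-10), 72-h mean pain (0-10), opioid Rx amount
X = rng.random((1141, 3)) * np.array([10, 10, 300])  # placeholder rows per CPT

Xs = StandardScaler().fit_transform(X)
for k in (3, 5):  # the two groupings the study found optimal
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xs)
    print(k, np.bincount(labels))  # procedures per pain-severity tier
```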

13.
J Gen Intern Med ; 38(1): 138-146, 2023 01.
Article in English | MEDLINE | ID: mdl-35650469

ABSTRACT

BACKGROUND: Alcohol use disorder (AUD) is a highly prevalent public health problem that contributes to opioid- and benzodiazepine-related morbidity and mortality. Even though co-utilization of these substances is particularly harmful, data are sparse on opioid or benzodiazepine prescribing patterns among individuals with AUD. OBJECTIVE: To estimate temporal trends and disparities in opioid, benzodiazepine, and opioid/benzodiazepine co-prescribing among individuals with AUD in New York State (NYS). DESIGN/PARTICIPANTS: Serial cross-sectional study analyzing merged data from the NYS Office of Addiction Services and Supports (OASAS) and the NYS Department of Health Medicaid Data Warehouse. Subjects with a first admission to an OASAS treatment program from 2005-2018 and a primary AUD were included. A total of 148,328 subjects were identified. MEASURES: Annual prescribing rates of opioids, benzodiazepines, or both between the pre- (2005-2012) and post- (2013-2018) Internet System for Tracking Over-Prescribing (I-STOP) periods. I-STOP is a prescription monitoring program implemented in NYS in August 2013. Analyses were stratified based on sociodemographic factors (age, sex, race/ethnicity, and location). RESULTS: Opioid prescribing rates decreased between the pre- and post-I-STOP periods from 25.1% (95% CI, 24.9-25.3%) to 21.3% (95% CI, 21.2-21.4; P <.001), while benzodiazepine (pre: 9.96% [95% CI, 9.83-10.1%], post: 9.92% [95% CI, 9.83-10.0%]; P =.631) and opioid/benzodiazepine prescribing rates remained unchanged (pre: 3.01% vs. post: 3.05%; P =.403). After I-STOP implementation, there was a significant decreasing trend in opioid (change, -1.85% per year, P <.0001), benzodiazepine (-0.208% per year, P =.0184), and opioid/benzodiazepine prescribing (-0.267% per year, P <.0001). Opioid, benzodiazepine, and co-prescription rates were higher in females, White non-Hispanics, and rural regions. CONCLUSIONS: Among those with AUD, opioid prescribing decreased following NYS I-STOP program implementation. While both benzodiazepine and opioid/benzodiazepine co-prescribing rates remained high, a decreasing trend was evident after program implementation. Continuing high rates of opioid and benzodiazepine prescribing necessitate the development of innovative approaches to improve the quality of care.


Subject(s)
Alcoholism , Analgesics, Opioid , Female , United States , Adult , Humans , Analgesics, Opioid/therapeutic use , New York/epidemiology , Alcoholism/drug therapy , Benzodiazepines/therapeutic use , Cross-Sectional Studies , Practice Patterns, Physicians' , Drug Prescriptions
15.
J Clin Transl Sci ; 6(1): e74, 2022.
Article in English | MEDLINE | ID: mdl-35836784

ABSTRACT

Introduction: COVID-19 is a major health threat around the world, causing hundreds of millions of infections and millions of deaths. There is a pressing global need for effective therapies. We hypothesized that leukotriene inhibitors (LTIs), which have been shown to lower IL-6 and IL-8 levels, may have a protective effect in patients with COVID-19. Methods: In this retrospective controlled cohort study, we compared death rates in COVID-19 patients who were taking an LTI with those who were not. We used the Department of Veterans Affairs (VA) Corporate Data Warehouse (CDW) to create a cohort of COVID-19-positive patients and tracked their use of LTIs between November 1, 2019 and November 11, 2021. Results: Of the 1,677,595 patients tested for COVID-19, 189,195 tested positive. Of these, 40,701 were admitted; 38,184 had an oxygen requirement, and 1,214 were taking an LTI. The use of dexamethasone plus an LTI in hospital showed a survival advantage of 13.5% (CI: 0.23%-26.7%; p < 0.01) in patients presenting with a minimum O2Sat of 50% or less. For patients with an O2Sat of <60% or <50% who were taking LTIs as outpatients, continuing the LTI as inpatients led to survival advantages of 14.4% and 22.25%, respectively. Conclusions: When combined, dexamethasone and LTIs provided a mortality benefit in COVID-19 patients presenting with an O2Sat <50%. The LTI cohort had lower markers of inflammation and cytokine storm.

16.
Stud Health Technol Inform ; 294: 465-469, 2022 May 25.
Article in English | MEDLINE | ID: mdl-35612123

ABSTRACT

Order sets that adhere to disease-specific guidelines have been shown to increase clinician efficiency and patient safety, but curating these order sets, particularly for consistency across multiple sites, is difficult and time-consuming. We created software called CDS-Compare to alleviate the burden on expert reviewers by rapidly and effectively curating large databases of order sets. We applied our clustering-based software to a database of NLP-processed order sets extracted from the VA's electronic health record, then had subject-matter experts review the web application version of our software for clustering validity.
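Purely as an illustration of clustering-based curation, one might vectorize order-set text and group near-duplicates for joint review; this generic pipeline is a sketch under stated assumptions, not CDS-Compare's actual implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

order_sets = [                                  # toy order-set snippets
    "heparin 5000 units SC q8h; sequential compression devices",
    "heparin 5000 units subcut q8h; SCDs",
    "ceftriaxone 1 g IV daily; blood cultures x2",
]

X = TfidfVectorizer().fit_transform(order_sets).toarray()
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
print(labels)  # near-duplicate order sets land in the same cluster
```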


Subject(s)
Machine Learning , Software , Databases, Factual , Electronic Health Records , Humans
17.
J Thorac Cardiovasc Surg ; 164(5): 1318-1326.e3, 2022 11.
Article in English | MEDLINE | ID: mdl-35469597

ABSTRACT

BACKGROUND: Non-small cell lung cancer (NSCLC) continues to be a major cause of cancer deaths. Previous investigation has suggested that metformin use can contribute to improved outcomes in NSCLC patients. However, this association is not uniform in all analyzed cohorts, implying that patient characteristics might lead to disparate results. Identification of patient characteristics that affect the association of metformin use with clinical benefit might clarify the drug's effect on lung cancer outcomes and lead to more rational design of clinical trials of metformin's utility as an intervention. In this study, we examined the association of metformin use with long-term mortality benefit in patients with NSCLC and the possible modulation of this benefit by body mass index (BMI) and smoking status, controlling for other clinical covariates. METHODS: This was a retrospective cohort study in which we analyzed data from the Veterans Affairs (VA) Tumor Registry in the United States. Data from all patients with stage I NSCLC from 2000 to 2016 were extracted from a national database, the Corporate Data Warehouse, which captures data from all patients, primarily male, who underwent treatment through the VA health system in the United States. Metformin use was measured according to metformin prescriptions dispensed to patients in the VA health system. The association of metformin use with overall survival (OS) after diagnosis of stage I NSCLC was examined. Patients were further stratified according to BMI and smoking status (previous vs current) to examine the association of metformin use with OS across these strata. RESULTS: Metformin use was associated with improved survival in patients with stage I NSCLC (average hazard ratio, 0.82; P < .001). An interaction between the effect of metformin use and BMI on OS was observed (χ2 = 3268.42; P < .001), with a greater benefit of metformin use observed in patients as BMI increased. Similarly, an interaction between smoking status and metformin use on OS was also observed (χ2 = 2997.05; P < .001), with a greater benefit of metformin use observed in previous smokers compared with current smokers. CONCLUSIONS: In this large retrospective study, we showed that metformin users in a robust stage I NSCLC patient population treated in the VA health system experienced a survival benefit. Metformin use was associated with an 18% improvement in OS. This association was stronger in patients with a higher BMI and in previous smokers. These observations deserve further mechanistic study and can inform the rational design of clinical trials of metformin in patients with lung cancer.


Subject(s)
Carcinoma, Non-Small-Cell Lung , Lung Neoplasms , Metformin , Carcinoma, Non-Small-Cell Lung/pathology , Humans , Lung Neoplasms/pathology , Male , Metformin/therapeutic use , Neoplasm Staging , Proportional Hazards Models , Retrospective Studies , United States
18.
Subst Abuse ; 15: 11782218211061749, 2021.
Article in English | MEDLINE | ID: mdl-34898987

ABSTRACT

BACKGROUND: Treatment progress is routinely monitored by urine testing in patients with opioid use disorder (OUD) undergoing buprenorphine medication-assisted treatment (MAT). However, interpreting urine test results can be challenging. This retrospective study aims to examine quantitative buprenorphine, norbuprenorphine, and creatinine levels in urine testing in relation to sublingual buprenorphine dosage to facilitate accurate interpretation of urine testing results. METHODS: We reviewed the medical charts of 41 consecutive patients who were residing in halfway houses, where their medication intake was closely monitored, and who had enrolled in an office-based MAT program at an urban clinic between July 2018 and June 2019. The patients' urine testing results were reviewed, and demographic variables were recorded. We focused on the patients treated with 8-, 12-, or 16-mg/day of buprenorphine, examining their urine buprenorphine, norbuprenorphine, and creatinine levels. Analysis of variance was used to test the association between dosage and the urine norbuprenorphine-to-creatinine ratio. RESULTS: A total of 240 urine samples from the 41 patients were included in this study. The 41 patients received a mean buprenorphine dose of 10.5 ± 3.7 mg/day (range, 4-20 mg/day). After examining the distribution of the 240 urine samples, this study focused on the 184 samples that came from the 33 patients treated with 8-, 12-, or 16-mg/day of buprenorphine, the 3 most common dosages. All of the 184 urine samples had a creatinine level of >20 mg/dL and a buprenorphine-to-norbuprenorphine ratio <50:1. The average norbuprenorphine-to-creatinine ratio in the 8-mg/day dosage group was 3.85 ± 2.24 × 10⁻⁴ (n = 66; range, 0.44-11.12). The respective ratios in the 12- and 16-mg/day dosage groups were 5.64 ± 3.40 × 10⁻⁴ (n = 83; range, 1.55-22.72) and 6.23 ± 4.92 × 10⁻⁴ (n = 35; range, 1.37-27.12). The 3 dosage groups differed significantly in their mean ratios (P < .01), except when the 12- and 16-mg/day groups were compared (P = .58). The results thus suggest that prescribers should pay attention to the following features: (1) unexpected substance(s) in urine testing, (2) creatinine level under 20 mg/dL, (3) buprenorphine-to-norbuprenorphine ratio over 50:1, (4) buprenorphine dosage over 24 mg/day, and (5) norbuprenorphine-to-creatinine ratio consistently under 0.5 × 10⁻⁴ in patients treated with 8 mg/day or under 1.5 × 10⁻⁴ in patients treated with 12 mg/day or more. CONCLUSION: This study suggested parameters for interpreting quantitative urine test results in relation to buprenorphine dose in office-based opioid treatment programs.
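The five red-flag features translate naturally into a screening check; the sketch below assumes norbuprenorphine is reported in ng/mL and creatinine in mg/dL (1 mg/dL = 10,000 ng/mL), and the function and field names are illustrative, not a validated clinical rule.

```python
def compliance_flags(creatinine_mg_dl: float, bup_ng_ml: float,
                     norbup_ng_ml: float, dose_mg_day: float,
                     unexpected_substances: bool) -> list[str]:
    """Red flags for one urine sample, per the study's suggested features."""
    flags = []
    if unexpected_substances:
        flags.append("unexpected substance(s) in urine")
    if creatinine_mg_dl < 20:
        flags.append("creatinine under 20 mg/dL")
    if norbup_ng_ml > 0 and bup_ng_ml / norbup_ng_ml > 50:
        flags.append("buprenorphine:norbuprenorphine over 50:1")
    if dose_mg_day > 24:
        flags.append("dosage over 24 mg/day")
    # Ratio on the study's scale: ng/mL over ng/mL (1 mg/dL = 10,000 ng/mL)
    ratio = norbup_ng_ml / (creatinine_mg_dl * 10_000)
    cutoff = 0.5e-4 if dose_mg_day <= 8 else 1.5e-4
    if ratio < cutoff:
        flags.append("norbuprenorphine:creatinine below dose-based cutoff")
    return flags

print(compliance_flags(creatinine_mg_dl=120, bup_ng_ml=900,
                       norbup_ng_ml=15, dose_mg_day=8,
                       unexpected_substances=False))
```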

19.
J Med Internet Res ; 23(11): e28946, 2021 11 09.
Article in English | MEDLINE | ID: mdl-34751659

ABSTRACT

BACKGROUND: Nonvalvular atrial fibrillation (NVAF) affects almost 6 million Americans and is a major contributor to stroke but is significantly underdiagnosed and undertreated despite explicit guidelines for oral anticoagulation. OBJECTIVE: The aim of this study is to investigate whether semisupervised natural language processing (NLP) of electronic health record (EHR) free-text information combined with structured EHR data improves NVAF discovery and treatment and perhaps offers a method to prevent thousands of deaths and save billions of dollars. METHODS: We abstracted 96,681 participants from the University at Buffalo faculty practice's EHR. NLP was used to index the notes and compare the ability to identify NVAF, congestive heart failure, hypertension, age ≥75 years, diabetes mellitus, stroke or transient ischemic attack, vascular disease, age 65 to 74 years, sex category (CHA2DS2-VASc), and Hypertension, Abnormal liver/renal function, Stroke history, Bleeding history or predisposition, Labile INR, Elderly, Drug/alcohol usage (HAS-BLED) scores using structured data alone (International Classification of Diseases codes) versus structured data plus unstructured data from clinical notes. In addition, we analyzed data from 63,296,120 participants in the Optum and Truven databases to determine the NVAF frequency, rates of CHA2DS2-VASc ≥2 with no contraindications to oral anticoagulants, rates of stroke and death in the untreated population, and first year's costs after stroke. RESULTS: The structured-plus-unstructured method would have identified 3,976,056 additional true NVAF cases (P<.001) and improved sensitivity for CHA2DS2-VASc and HAS-BLED scores compared with the structured data alone (P=.002 and P<.001, respectively), a 32.1% improvement. For the United States, this method would prevent an estimated 176,537 strokes, save 10,575 lives, and save >US $13.5 billion. CONCLUSIONS: Artificial intelligence-informed bio-surveillance combining NLP of free-text information with structured EHR data improves data completeness, prevents thousands of strokes, and saves lives and funds. This method is applicable to many disorders with profound public health consequences.
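For context, the CHA2DS2-VASc components enumerated above combine into the standard stroke-risk score; a straightforward sketch using the standard weights and a hypothetical example patient:

```python
def cha2ds2_vasc(chf, htn, age, diabetes, stroke_tia, vascular, female):
    """Standard CHA2DS2-VASc score: CHF, hypertension, diabetes, and
    vascular disease score 1 each; age >=75 and prior stroke/TIA score 2;
    age 65-74 and female sex score 1 each."""
    score = chf + htn + diabetes + vascular + int(female)
    score += 2 if age >= 75 else (1 if 65 <= age <= 74 else 0)
    score += 2 * stroke_tia
    return score

# Hypothetical patient: 72-year-old woman with hypertension and diabetes
print(cha2ds2_vasc(chf=0, htn=1, age=72, diabetes=1,
                   stroke_tia=0, vascular=0, female=True))  # -> 4
```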


Subject(s)
Atrial Fibrillation , Stroke , Aged , Anticoagulants , Artificial Intelligence , Atrial Fibrillation/drug therapy , Atrial Fibrillation/prevention & control , Case-Control Studies , Electronic Health Records , Humans , Natural Language Processing , Risk Assessment , Risk Factors , Stroke/prevention & control
20.
Stud Health Technol Inform ; 286: 3-8, 2021 Nov 08.
Article in English | MEDLINE | ID: mdl-34755681

ABSTRACT

The COVID-19 pandemic has disrupted many global industries and shifted the digital health landscape by stimulating and accelerating the delivery of digital care. It has emphasized the need for a system-level informatics implementation that supports the healthcare management of populations at a macro level while also providing the necessary support for front-line care delivery at a micro level. From data dashboards to telemedicine, this crisis has necessitated a health informatics transformation that can bridge time and space to provide timely care. However, health transformation cannot rely solely on Health Information Technology (HIT) for progress; rather, success must be an outcome of system design focused on the contextual complexity of the health system where HIT is used. This conference highlights the important roles context plays for health informatics in global pandemics and aims to answer critical questions in four main areas: 1) health information management in the COVID-19 context, 2) implementation of new practices and technologies in healthcare, 3) sociotechnical analysis of task performance and workload in healthcare, and 4) innovations in design and evaluation methods for health technologies. We deem this a call to action to understand the importance of context while solving the last-mile problem of delivering the informatics solutions needed to support our public health response.


Subject(s)
COVID-19 , Medical Informatics , Telemedicine , Humans , Pandemics , SARS-CoV-2