Results 1 - 6 of 6
1.
Int J Med Inform; 171: 104979, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36621078

ABSTRACT

OBJECTIVE: As the opioid epidemic continues across the United States, methods are needed to accurately and quickly identify patients at risk for opioid use disorder (OUD). The purpose of this study is to develop two predictive algorithms: one to predict opioid prescription and one to predict OUD. MATERIALS AND METHODS: We developed an informatics algorithm that trains two deep learning models over patient Electronic Health Records (EHRs) using the MIMIC-III database. We utilize both the structured and unstructured parts of the EHR and show that it is possible to predict both challenging outcomes. RESULTS: Our deep learning models incorporate elements from EHRs to predict opioid prescription with an F1-score of 0.88 ± 0.003 and an AUC-ROC of 0.93 ± 0.002. We also constructed a model to predict OUD diagnosis achieving an F1-score of 0.82 ± 0.05 and an AUC-ROC of 0.94 ± 0.008. DISCUSSION: Our model for OUD prediction outperformed prior algorithms in specificity, F1-score, and AUC-ROC while achieving equivalent sensitivity. This demonstrates the importance of a) deep learning approaches in predicting OUD and b) incorporating both structured and unstructured data for this prediction task. No prediction models for opioid prescription as an outcome were found in the literature, and therefore our model is the first to predict opioid prescribing behavior. CONCLUSION: Algorithms such as those described in this paper will become increasingly important to understand the drivers underlying this national epidemic.


Subject(s)
Deep Learning , Opioid-Related Disorders , Humans , United States , Analgesics, Opioid/therapeutic use , Electronic Health Records , Machine Learning , Practice Patterns, Physicians' , Opioid-Related Disorders/diagnosis , Opioid-Related Disorders/epidemiology , Prescriptions
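For readers unfamiliar with the kind of model described in entry 1, the sketch below illustrates one common way to fuse structured EHR features with note text for a binary outcome such as opioid prescription. It is not the authors' architecture: the layer sizes, feature counts, and synthetic data are assumptions made purely for illustration.

```python
# Hypothetical two-branch network: structured EHR features joined with a
# bag-of-words encoding of clinical notes for a binary outcome.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

n_structured = 40   # e.g. demographics, labs, vitals (assumed size)
vocab_size = 5000   # bag-of-words vocabulary for note text (assumed size)

structured_in = layers.Input(shape=(n_structured,), name="structured")
text_in = layers.Input(shape=(vocab_size,), name="notes_bow")

s = layers.Dense(64, activation="relu")(structured_in)
t = layers.Dense(128, activation="relu")(text_in)
joint = layers.Concatenate()([s, t])
joint = layers.Dense(64, activation="relu")(joint)
out = layers.Dense(1, activation="sigmoid", name="outcome")(joint)

model = Model(inputs=[structured_in, text_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc_roc")])

# Synthetic stand-in data; in the study this would come from MIMIC-III.
X_struct = np.random.rand(256, n_structured)
X_text = np.random.rand(256, vocab_size)
y = np.random.randint(0, 2, size=(256, 1))
model.fit([X_struct, X_text], y, epochs=2, batch_size=32, verbose=0)
```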
2.
PLoS One; 15(12): e0240376, 2020.
Article in English | MEDLINE | ID: mdl-33332380

ABSTRACT

BACKGROUND: The rapid integration of Artificial Intelligence (AI) into the healthcare field has occurred with little communication between computer scientists and doctors. The impact of AI on health outcomes and inequalities calls for health professionals and data scientists to make a collaborative effort to ensure historic health disparities are not encoded into the future. We present a study that evaluates bias in existing Natural Language Processing (NLP) models used in psychiatry and discuss how these biases may widen health inequalities. Our approach systematically evaluates each stage of model development to explore how biases arise from clinical, data-science, and linguistic perspectives. DESIGN/METHODS: A literature review of the uses of NLP in mental health was carried out across multiple disciplinary databases with defined MeSH terms and keywords. Our primary analysis evaluated biases within 'GloVe' and 'Word2Vec' word embeddings. Euclidean distances were measured to assess relationships between psychiatric terms and demographic labels, and vector similarity functions were used to solve analogy questions relating to mental health. RESULTS: Our primary analysis of mental health terminology in GloVe and Word2Vec embeddings demonstrated significant biases with respect to religion, race, gender, nationality, sexuality, and age. Our literature review returned 52 papers, of which none addressed all the areas of possible bias that we identify in model development. In addition, only one article appeared in more than one research database, demonstrating the isolation of research within disciplinary silos, which inhibits cross-disciplinary collaboration and communication. CONCLUSION: Our findings are relevant to professionals who wish to minimize the health inequalities that may arise as a result of AI and data-driven algorithms. We offer primary research identifying biases within these technologies and provide recommendations for avoiding these harms in the future.


Subject(s)
Data Science/methods , Health Status Disparities , Mental Health/statistics & numerical data , Natural Language Processing , Psychiatry/methods , Bias , Data Science/statistics & numerical data , Humans , Intersectoral Collaboration , Linguistics , Psychiatry/statistics & numerical data
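The bias probes in entry 2 (Euclidean distances between psychiatric and demographic terms, plus analogy queries) can be reproduced in spirit with off-the-shelf GloVe vectors. In the sketch below, the choice of pretrained model ("glove-wiki-gigaword-100") and the specific term lists are illustrative assumptions, not the study's exact materials.

```python
# Illustrative probe of word-embedding associations; the term lists below
# are examples only, not the study's full vocabulary.
import numpy as np
import gensim.downloader as api

kv = api.load("glove-wiki-gigaword-100")  # pretrained GloVe vectors

psychiatric_terms = ["depression", "schizophrenia", "anxiety"]
demographic_terms = ["woman", "man", "black", "white", "young", "old"]

# Euclidean distance between each psychiatric term and each demographic label;
# systematic asymmetries across paired labels can indicate encoded bias.
for p in psychiatric_terms:
    for d in demographic_terms:
        dist = np.linalg.norm(kv[p] - kv[d])
        print(f"{p:15s} <-> {d:8s}  distance = {dist:.3f}")

# Analogy-style query using vector similarity (a : b :: c : ?).
print(kv.most_similar(positive=["doctor", "woman"], negative=["man"], topn=3))
```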
3.
PLoS One; 15(2): e0229180, 2020.
Article in English | MEDLINE | ID: mdl-32084181

ABSTRACT

The Supplemental Nutrition Assistance Program (SNAP) is the second-largest and most contentious public assistance program administered by the United States government. The media forums where SNAP discourse occurs have changed with the advent of social and web-based media. We used machine learning techniques to characterize media coverage of SNAP over time (1990-2017), between outlets with national readership and those with narrower scopes, and, for a subset of web-based media, by the outlet's political leaning. We applied structural topic models, a machine learning methodology that categorizes and summarizes large bodies of text that have document-level covariates or metadata, to a corpus of print media retrieved via LexisNexis (n = 76,634). For comparison, we compiled a separate corpus via a web-scraping algorithm using the Google News API (2012-2017), and assigned political alignment metadata to a subset of documents according to a recent study of partisanship on social media. A similar procedure was used on a subset of the print media documents that could be matched to the same alignment index. Using linear regression models, we found some, but not all, topics to vary significantly with time, between large and small media outlets, and by political leaning. Our findings offer insights into the polarized and partisan nature of a major social welfare program in the United States, and the possible effects of new media environments on the state of this discourse.


Subject(s)
Food Assistance , Judgment , Politics , Publications/statistics & numerical data , Humans , Machine Learning , Mass Media/statistics & numerical data , Time Factors
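Entry 3 used structural topic models (typically fit with the R 'stm' package), which estimate topics and covariate effects jointly. As a rough Python stand-in, the sketch below fits a plain LDA model and then regresses document-topic proportions on metadata with ordinary least squares; the toy headlines, years, and outlet coding are invented and only approximate the two-step idea.

```python
# Simplified stand-in for a structural-topic-model workflow: plain LDA,
# then OLS regression of topic proportions on document metadata.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LinearRegression

docs = [
    "snap benefits cut in new federal budget proposal",
    "local food pantry sees rise in snap recipients",
    "debate over work requirements for food assistance",
    "grocery chains adapt to snap online purchasing pilot",
]
years = np.array([1995, 2004, 2013, 2017])
national_outlet = np.array([1, 0, 1, 0])  # 1 = national readership (assumed coding)

X = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)  # document-topic proportions

covariates = np.column_stack([years, national_outlet])
for k in range(theta.shape[1]):
    reg = LinearRegression().fit(covariates, theta[:, k])
    print(f"topic {k}: year effect = {reg.coef_[0]:.4f}, "
          f"national-outlet effect = {reg.coef_[1]:.4f}")
```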
4.
Int J Med Inform; 137: 104101, 2020 May.
Article in English | MEDLINE | ID: mdl-32088556

ABSTRACT

OBJECTIVE: To develop an algorithm for identifying acronym 'sense' from clinical notes without requiring a clinically annotated training set. MATERIALS AND METHODS: Our algorithm is called CLASSE GATOR: Clinical Acronym SenSE disambiGuATOR. CLASSE GATOR extracts acronyms and definitions from PubMed Central (PMC). A logistic regression model is trained using words associated with specific acronym-definition pairs from PMC. CLASSE GATOR uses this library of acronym-definition pairs and their corresponding word feature vectors to predict the acronym 'sense' from Beth Israel Deaconess (MIMIC-III) neonatal notes. RESULTS: We identified 1,257 acronyms and 8,287 definitions, including a random definition, from 31,764 PMC articles on prenatal exposures and 2,227,674 PMC open access articles. The average number of senses (definitions) per acronym was 6.6 (min = 2, max = 50). The average internal 5-fold cross-validation accuracy was 87.9% (on PMC). We found that 727 unique acronyms (57.29%) from PMC were present in 105,044 neonatal notes (MIMIC-III). We evaluated the performance of acronym prediction using 245 manually annotated clinical notes with 9 distinct acronyms. CLASSE GATOR achieved an overall accuracy of 63.04% and outperformed a random baseline for 8/9 acronyms (88.89%) when applied to clinical notes. We also compared our algorithm with UMN's acronym set, and found that CLASSE GATOR outperformed a random baseline for 63.46% of 52 acronyms when using logistic regression, 75.00% when using BERT, and 76.92% when using BioBERT as the prediction algorithm within CLASSE GATOR. CONCLUSIONS: CLASSE GATOR is the first automated acronym sense disambiguation method for clinical notes. Importantly, CLASSE GATOR does not require an expensive manually annotated acronym-definition corpus for training.


Subject(s)
Abbreviations as Topic , Algorithms , Electronic Health Records/statistics & numerical data , Medical Subject Headings/statistics & numerical data , Natural Language Processing , Pattern Recognition, Automated , Humans , Infant, Newborn
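The core disambiguation step in entry 4, a logistic regression over the words surrounding an acronym, can be sketched as follows. The acronym ('RA'), its candidate senses, and the example sentences are invented; CLASSE GATOR learns such models from acronym-definition pairs mined from PubMed Central rather than from hand-written examples.

```python
# Toy sketch: bag-of-words logistic regression mapping the context around
# an acronym to one of its candidate senses.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

contexts = [
    "infant placed on ra with stable oxygen saturation",                # RA = room air
    "saturating well on ra without respiratory support",                 # RA = room air
    "maternal history significant for ra treated with methotrexate",     # RA = rheumatoid arthritis
    "mother has ra managed with disease modifying agents",               # RA = rheumatoid arthritis
]
senses = ["room air", "room air", "rheumatoid arthritis", "rheumatoid arthritis"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(contexts, senses)

# Predict the sense of 'RA' in an unseen note sentence.
print(clf.predict(["weaned to ra overnight and remained comfortable"]))
```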
5.
J Am Acad Dermatol; 83(3): 803-808, 2020 Sep.
Article in English | MEDLINE | ID: mdl-31306722

ABSTRACT

BACKGROUND: There is a lack of research studying patient-generated data on Reddit, one of the world's most popular forums with active users interested in dermatology. Techniques within natural language processing, a field of artificial intelligence, can analyze large amounts of text information and extract insights. OBJECTIVE: To apply natural language processing to Reddit comments about dermatology topics to assess feasibility and the potential for insights and engagement. METHODS: A software pipeline preprocessed Reddit comments from 2005 to 2017 drawn from 7 popular dermatology-related subforums, applied latent Dirichlet allocation, and used spectral clustering to establish cohesive themes and to quantify the frequency of representative words and grouped terms within these topics. RESULTS: We created a corpus of 176,000 comments and identified trends in patient engagement in spaces such as eczema and acne, among others, with a focus on homeopathic treatments and isotretinoin. LIMITATIONS: Latent Dirichlet allocation is an unsupervised model, meaning there is no ground truth to which the model output can be compared. However, because these forums are anonymous, there seems to be little incentive for patients to be dishonest. CONCLUSIONS: Reddit data are viable and useful for dermatologic research and engagement with the public, especially for common dermatology topics such as tanning, acne, and psoriasis.


Subject(s)
Dermatology/statistics & numerical data , Natural Language Processing , Patient Outcome Assessment , Social Media/statistics & numerical data , Acne Vulgaris/therapy , Cluster Analysis , Feasibility Studies , Humans , Psoriasis/therapy , Reproducibility of Results , Software , Sunbathing
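A minimal sketch of the pipeline in entry 5: vectorize comments, fit latent Dirichlet allocation, then spectrally cluster terms by their topic-loading profiles to surface cohesive themes. The handful of invented comments stands in for the ~176,000-comment corpus, and the topic and cluster counts are arbitrary choices for illustration.

```python
# Rough sketch: preprocess comments, fit LDA, cluster terms by topic profile.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import SpectralClustering

comments = [
    "accutane cleared my acne but the dryness was rough",
    "tretinoin and moisturizer routine finally helped my acne",
    "eczema flare ups improve when i switch to fragrance free soap",
    "steroid cream calmed my eczema patches within a week",
    "stopped tanning beds after reading about skin cancer risk",
    "sunscreen every day is the best anti aging advice",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(comments)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(X)

# Each column of components_ is a term's loading across topics; clustering
# those profiles groups terms that tend to appear in the same themes.
term_profiles = lda.components_.T
labels = SpectralClustering(n_clusters=3, random_state=0).fit_predict(term_profiles)

terms = vec.get_feature_names_out()
for cluster in range(3):
    print(cluster, [t for t, l in zip(terms, labels) if l == cluster])
```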
6.
J Biomed Inform; 69: 86-92, 2017 May.
Article in English | MEDLINE | ID: mdl-28389234

ABSTRACT

Annotating unstructured text in Electronic Health Record (EHR) data is usually a necessary step for conducting machine learning research on such datasets. Manual annotation by domain experts provides data of the best quality, but has become increasingly impractical given the rapid increase in the volume of EHR data. In this article, we examine the effectiveness of crowdsourcing with unscreened online workers as an alternative for transforming unstructured text in EHRs into annotated data that are directly usable in supervised learning models. We find the crowdsourced annotation data to be just as effective as expert data in training a sentence classification model to detect mentions of abnormal ear anatomy in audiology radiology reports. Furthermore, we discovered that enabling workers to self-report a confidence level associated with each annotation can help researchers pinpoint less-accurate annotations requiring expert scrutiny. Our findings suggest that even crowd workers without specific domain knowledge can contribute effectively to the task of annotating unstructured EHR datasets.


Subject(s)
Crowdsourcing , Data Curation , Electronic Health Records , Audiology , Humans , Radiology
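The workflow in entry 6 can be approximated with a simple rule: route low-confidence crowd annotations to expert review and train a sentence classifier on the rest. In the sketch below, the sentences, labels, and the 0.6 confidence threshold are all invented stand-ins for the study's audiology radiology reports and worker responses.

```python
# Sketch: filter crowd annotations by self-reported confidence, then train
# a simple sentence classifier on the retained labels.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

annotations = pd.DataFrame({
    "sentence": [
        "the external auditory canals are patent bilaterally",
        "there is an abnormal soft tissue mass in the middle ear",
        "cochlear structures appear within normal limits",
        "possible malformation of the ossicular chain noted",
    ],
    "crowd_label": [0, 1, 0, 1],          # 1 = abnormal ear anatomy mentioned
    "confidence": [0.9, 0.95, 0.8, 0.4],  # worker-reported confidence
})

# Low-confidence annotations are routed to expert review rather than training.
needs_expert = annotations[annotations["confidence"] < 0.6]
train = annotations[annotations["confidence"] >= 0.6]
print("flagged for expert review:", needs_expert["sentence"].tolist())

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train["sentence"], train["crowd_label"])
print(clf.predict(["normal appearance of the inner ear structures"]))
```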