Results 1 - 3 of 3
1.
J Am Med Inform Assoc ; 26(6): 580-581, 2019 06 01.
Article in English | MEDLINE | ID: mdl-30980667

ABSTRACT

We appreciate the detailed review provided by Magge et al1 of our article, "Deep learning for pharmacovigilance: recurrent neural network architectures for labeling adverse drug reactions in Twitter posts."2 In their letter, they present a subjective criticism that rests on concerns about our dataset composition and potential misinterpretation of comparisons to existing methods. Our article underwent two rounds of extensive peer review and has been cited 28 times in the nearly 2 years since it was published online (February 2017). Neither the reviewers nor the citing authors raised similar concerns. There are, however, portions of the commentary that highlight areas of our work that would benefit from further clarification.


Subject(s)
Deep Learning, Drug-Related Side Effects and Adverse Reactions, Social Media, Humans, Neural Networks, Computer, Pharmacovigilance
2.
J Biomed Inform ; 69: 86-92, 2017 05.
Article in English | MEDLINE | ID: mdl-28389234

ABSTRACT

Annotating unstructured texts in Electronic Health Record (EHR) data is usually a necessary step for conducting machine learning research on such datasets. Manual annotation by domain experts provides data of the best quality, but has become increasingly impractical given the rapid growth in the volume of EHR data. In this article, we examine the effectiveness of crowdsourcing with unscreened online workers as an alternative for transforming unstructured texts in EHRs into annotated data that are directly usable in supervised learning models. We find the crowdsourced annotation data to be just as effective as expert data in training a sentence classification model to detect the mention of abnormal ear anatomy in radiology reports of audiology. Furthermore, we have discovered that enabling workers to self-report a confidence level associated with each annotation can help researchers pinpoint less-accurate annotations requiring expert scrutiny. Our findings suggest that even crowd workers without specific domain knowledge can contribute effectively to the task of annotating unstructured EHR datasets.
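The triage idea described above — using self-reported confidence to route annotations either to the training set or to expert review — can be sketched as follows. This is a minimal illustration; the function name, data layout, and threshold are hypothetical and not taken from the article.

```python
# Hypothetical sketch: route crowd annotations by self-reported confidence.
# Low-confidence items are flagged for expert scrutiny; the rest are accepted.

def triage_annotations(annotations, confidence_threshold=0.6):
    """Split annotations into (accepted, needs_expert_review) lists."""
    accepted, review = [], []
    for ann in annotations:
        if ann["confidence"] < confidence_threshold:
            review.append(ann)   # send to a domain expert
        else:
            accepted.append(ann)  # usable directly as training data
    return accepted, review

# Illustrative crowd annotations for sentences in radiology reports.
crowd = [
    {"sentence_id": 1, "label": "abnormal", "confidence": 0.90},
    {"sentence_id": 2, "label": "normal",   "confidence": 0.40},
    {"sentence_id": 3, "label": "abnormal", "confidence": 0.75},
]
accepted, review = triage_annotations(crowd)
# Only sentence 2 falls below the threshold and is flagged for review.
```

A single threshold is the simplest policy; in practice one might also aggregate multiple workers' labels per sentence before triaging.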


Subject(s)
Crowdsourcing, Data Curation, Electronic Health Records, Audiology, Humans, Radiology
3.
J Am Med Inform Assoc ; 24(4): 813-821, 2017 Jul 01.
Article in English | MEDLINE | ID: mdl-28339747

ABSTRACT

OBJECTIVE: Social media is an important pharmacovigilance data source for adverse drug reaction (ADR) identification. Human review of social media data is infeasible due to data quantity, thus natural language processing techniques are necessary. Social media includes informal vocabulary and irregular grammar, which challenge natural language processing methods. Our objective is to develop a scalable, deep-learning approach that exceeds state-of-the-art ADR detection performance in social media.

MATERIALS AND METHODS: We developed a recurrent neural network (RNN) model that labels words in an input sequence with ADR membership tags. The only input features are word-embedding vectors, which can be formed through task-independent pretraining or during ADR detection training.

RESULTS: Our best-performing RNN model used pretrained word embeddings created from a large, non-domain-specific Twitter dataset. It achieved an approximate match F-measure of 0.755 for ADR identification on the dataset, compared to 0.631 for a baseline lexicon system and 0.65 for the state-of-the-art conditional random field model. Feature analysis indicated that semantic information in pretrained word embeddings boosted sensitivity and, combined with contextual awareness captured in the RNN, precision.

DISCUSSION: Our model required no task-specific feature engineering, suggesting generalizability to additional sequence-labeling tasks. Learning curve analysis showed that our model reached optimal performance with fewer training examples than the other models.

CONCLUSION: ADR detection performance in social media is significantly improved by using a contextually aware model and word embeddings formed from large, unlabeled datasets. The approach reduces manual data-labeling requirements and is scalable to large social media datasets.
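The abstract reports an approximate match F-measure, in which a predicted ADR span is typically credited if it overlaps a gold span rather than matching its boundaries exactly. A minimal sketch of that metric is shown below, assuming spans are (start, end) token offsets with exclusive ends; the article's exact matching rule may differ in detail.

```python
# Sketch of an approximate-match F-measure for span labeling (e.g. ADR spans).
# A predicted span counts toward precision if it overlaps any gold span;
# a gold span counts toward recall if any prediction overlaps it.

def approx_match_f1(gold_spans, pred_spans):
    """Compute overlap-based F1 over (start, end) spans, end exclusive."""
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]

    tp_pred = sum(any(overlaps(p, g) for g in gold_spans) for p in pred_spans)
    tp_gold = sum(any(overlaps(g, p) for p in pred_spans) for g in gold_spans)
    precision = tp_pred / len(pred_spans) if pred_spans else 0.0
    recall = tp_gold / len(gold_spans) if gold_spans else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# One prediction overlapping one of two gold spans:
# precision = 1.0, recall = 0.5, so F1 = 2/3.
f1 = approx_match_f1([(2, 4), (7, 8)], [(3, 5)])
```

Overlap-based credit is more forgiving than exact-span matching, which is why approximate-match scores such as the 0.755 reported here are typically higher than strict-match scores on the same output.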


Subject(s)
Drug-Related Side Effects and Adverse Reactions/diagnosis, Natural Language Processing, Neural Networks, Computer, Pharmacovigilance, Social Media, Humans, Machine Learning