1.
Proc Conf Assoc Comput Linguist Meet; 2023: 2680-2697, 2023 Jul.
Article in English | MEDLINE | ID: mdl-38770277

ABSTRACT

Two-step approaches, in which summary candidates are generated-then-reranked to return a single summary, can improve ROUGE scores over the standard single-step approach. Yet, standard decoding methods (i.e., beam search, nucleus sampling, and diverse beam search) produce candidates with redundant, and often low quality, content. In this paper, we design a novel method to generate candidates for re-ranking that addresses these issues. We ground each candidate abstract on its own unique content plan and generate distinct plan-guided abstracts using a model's top beam. More concretely, a standard language model (a BART LM) auto-regressively generates elemental discourse unit (EDU) content plans with an extractive copy mechanism. The top K beams from the content plan generator are then used to guide a separate LM, which produces a single abstractive candidate for each distinct plan. We apply an existing re-ranker (BRIO) to abstractive candidates generated from our method, as well as baseline decoding methods. We show large relevance improvements over previously published methods on widely used single document news article corpora, with ROUGE-2 F1 gains of 0.88, 2.01, and 0.38 on CNN / Dailymail, NYT, and Xsum, respectively. A human evaluation on CNN / DM validates these results. Similarly, on 1k samples from CNN / DM, we show that prompting GPT-3 to follow EDU plans outperforms sampling-based methods by 1.05 ROUGE-2 F1 points. Code to generate and realize plans is available at https://github.com/griff4692/edu-sum.
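
The pipeline below is a minimal sketch of the general generate-then-rerank pattern this abstract describes, using an off-the-shelf BART summarizer from Hugging Face. It is illustrative only: the paper's EDU content-plan generator and the BRIO re-ranker are not reproduced here, and the overlap-based scoring function is a hypothetical stand-in for a trained re-ranker.

```python
# Sketch of a two-step (generate candidates, then re-rank) summarization pipeline.
# Assumptions: Hugging Face transformers with the generic "facebook/bart-large-cnn"
# checkpoint; the rerank() scorer is a placeholder, not the paper's BRIO model.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

MODEL_NAME = "facebook/bart-large-cnn"
tokenizer = BartTokenizer.from_pretrained(MODEL_NAME)
model = BartForConditionalGeneration.from_pretrained(MODEL_NAME)


def generate_candidates(document: str, num_candidates: int = 4) -> list[str]:
    """Step 1: produce several distinct summary candidates via diverse beam search."""
    inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            num_beams=num_candidates,
            num_beam_groups=num_candidates,
            diversity_penalty=1.0,
            num_return_sequences=num_candidates,
            max_length=142,
        )
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)


def rerank(document: str, candidates: list[str]) -> str:
    """Step 2: select one candidate. A trained re-ranker (e.g., BRIO) would score
    candidates here; this stand-in scores by token overlap with the source."""
    doc_tokens = set(document.lower().split())

    def score(candidate: str) -> float:
        cand_tokens = candidate.lower().split()
        return sum(tok in doc_tokens for tok in cand_tokens) / max(len(cand_tokens), 1)

    return max(candidates, key=score)


if __name__ == "__main__":
    article = "..."  # a news article to summarize
    print(rerank(article, generate_candidates(article)))
```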

2.
Radiographics; 41(5): 1446-1453, 2021.
Article in English | MEDLINE | ID: mdl-34469212

ABSTRACT

Natural language processing (NLP) is the subset of artificial intelligence focused on the computer interpretation of human language. It is an invaluable tool in the analysis, aggregation, and simplification of free text. It has already demonstrated significant potential in the analysis of radiology reports. There are abundant open-source libraries and tools available that facilitate its application to the benefit of radiology. Radiologists who understand its limitations and potential will be better positioned to evaluate NLP models, understand how they can improve clinical workflow, and facilitate research endeavors involving large amounts of human language. The advent of increasingly affordable and powerful computer processing, the large quantities of medical and radiologic data, and advances in machine learning algorithms have contributed to the large potential of NLP. In turn, radiology has significant potential to benefit from the ability of NLP to convert relatively standardized radiology reports to machine-readable data. NLP benefits from standardized reporting, but because of its ability to interpret free text by using context clues, NLP does not necessarily depend on it. An overview and practical approach to NLP is featured, with specific emphasis on its applications to radiology. A brief history of NLP, the strengths and challenges inherent to its use, and freely available resources and tools are covered to guide further exploration and study within the field. Particular attention is devoted to the recent development of the Word2Vec and BERT (Bidirectional Encoder Representations from Transformers) language models, which have exponentially increased the power and utility of NLP for a variety of applications. Online supplemental material is available for this article. ©RSNA, 2021.
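
As a concrete illustration of the BERT-based language models the abstract highlights, the snippet below encodes a radiology report sentence into a fixed-length vector with the Hugging Face transformers library. This is a generic sketch, not code from the article; the "bert-base-uncased" checkpoint is an assumption, and a clinical- or radiology-specific BERT variant would typically be substituted in practice.

```python
# Minimal sketch: embed a radiology report sentence with a pretrained BERT model.
# Assumptions: Hugging Face transformers; generic "bert-base-uncased" checkpoint.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

report_sentence = "No focal consolidation, pleural effusion, or pneumothorax is identified."
inputs = tokenizer(report_sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings into one sentence vector, which can feed
# downstream tasks such as report classification or information retrieval.
sentence_embedding = outputs.last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])
```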


Subject(s)
Natural Language Processing, Radiology, Artificial Intelligence, Humans, Machine Learning, Radiography