Results 1 - 2 of 2
1.
JAMIA Open; 7(1): ooae021, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38455840

ABSTRACT

Objective: To automate scientific claim verification using PubMed abstracts.

Materials and Methods: We developed CliVER, an end-to-end scientific Claim VERification system that leverages retrieval-augmented techniques to automatically retrieve relevant clinical trial abstracts, extract pertinent sentences, and use the PICO framework to support or refute a scientific claim. We also created an ensemble of three state-of-the-art deep learning models to classify rationales as support, refute, or neutral. We then constructed CoVERt, a new COVID VERification dataset comprising 15 PICO-encoded drug claims accompanied by 96 manually selected and labeled clinical trial abstracts that either support or refute each claim. We used CoVERt and SciFact (a public scientific claim verification dataset) to assess CliVER's performance in predicting labels. Finally, we compared CliVER with clinicians in the verification of 19 claims from 6 disease domains, using 189 648 PubMed abstracts extracted from January 2010 to October 2021.

Results: In the evaluation of label prediction accuracy on CoVERt, CliVER achieved a notable F1 score of 0.92, highlighting the efficacy of the retrieval-augmented models. The ensemble outperformed each individual state-of-the-art model by an absolute F1 increase of 3% to 11%. Moreover, when compared with four clinicians, CliVER achieved a precision of 79.0% for abstract retrieval, 67.4% for sentence selection, and 63.2% for label prediction.

Conclusion: CliVER demonstrates early potential to automate scientific claim verification, using retrieval-augmented strategies to harness the wealth of clinical trial abstracts in PubMed. Future studies are warranted to further test its clinical utility.
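The pipeline described in this abstract — retrieve candidate abstracts, select rationale sentences, then classify the claim with a model ensemble — can be outlined in a few lines of Python. The sketch below is illustrative only and is not the authors' CliVER implementation: the token-overlap retriever, the sentence selector, and the three toy "models" are hypothetical stand-ins for the retrieval-augmented components and fine-tuned classifiers, and the majority-vote combination is an assumption (the abstract does not specify the ensembling scheme).

```python
# Hypothetical sketch of a retrieval-augmented claim verification loop
# (NOT the CliVER code): rank abstracts by token overlap with the claim,
# pick the most overlapping sentences as rationales, and combine three
# stand-in classifiers by majority vote into SUPPORT / REFUTE / NEUTRAL.
from collections import Counter
from typing import Callable, List, Tuple

Label = str  # "SUPPORT" | "REFUTE" | "NEUTRAL"


def tokenize(text: str) -> set:
    return {w.strip(".,;:()").lower() for w in text.split() if w}


def retrieve(claim: str, abstracts: List[str], k: int = 3) -> List[str]:
    """Rank abstracts by simple token overlap with the claim
    (a BM25 or dense retriever would replace this in a real system)."""
    q = tokenize(claim)
    ranked = sorted(abstracts, key=lambda a: len(q & tokenize(a)), reverse=True)
    return ranked[:k]


def select_sentences(claim: str, abstract: str, k: int = 2) -> List[str]:
    """Pick the k sentences sharing the most tokens with the claim."""
    q = tokenize(claim)
    sents = [s.strip() for s in abstract.split(".") if s.strip()]
    return sorted(sents, key=lambda s: len(q & tokenize(s)), reverse=True)[:k]


def ensemble_predict(claim: str, rationale: List[str],
                     models: List[Callable[[str, List[str]], Label]]) -> Label:
    """Majority vote over the individual model predictions (assumed scheme)."""
    votes = Counter(m(claim, rationale) for m in models)
    return votes.most_common(1)[0][0]


def verify(claim: str, abstracts: List[str],
           models: List[Callable[[str, List[str]], Label]]) -> List[Tuple[str, Label]]:
    results = []
    for abstract in retrieve(claim, abstracts):
        rationale = select_sentences(claim, abstract)
        results.append((abstract, ensemble_predict(claim, rationale, models)))
    return results


if __name__ == "__main__":
    # Toy stand-ins for the three fine-tuned classifiers.
    models = [
        lambda c, r: "SUPPORT" if any("reduced" in s for s in r) else "NEUTRAL",
        lambda c, r: "SUPPORT" if any("improved" in s for s in r) else "NEUTRAL",
        lambda c, r: "NEUTRAL",
    ]
    corpus = [
        "In this trial, drug X reduced mortality. Adverse events were mild.",
        "Drug Y showed no effect on blood pressure in adults.",
    ]
    print(verify("Drug X reduces mortality in adults", corpus, models))
```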

2.
Stud Health Technol Inform; 290: 592-596, 2022 Jun 06.
Article in English | MEDLINE | ID: mdl-35673085

ABSTRACT

Complex interventions are ubiquitous in healthcare. A lack of computational representations and information extraction solutions for complex interventions hinders accurate and efficient evidence synthesis. In this study, we manually annotated and analyzed 3,447 intervention snippets from 261 randomized clinical trial (RCT) abstracts and developed a compositional representation for complex interventions, which captures the spatial, temporal, and Boolean relations between intervention components, along with an intervention normalization pipeline that automates three tasks: (i) treatment entity extraction; (ii) intervention component relation extraction; and (iii) attribute extraction and association. We evaluated the pipeline on 361 intervention snippets from 29 unseen abstracts. The average F-measure was 0.74 for treatment entity extraction (exact match) and 0.82 for attribute extraction. The F-measure for relation extraction in multi-component complex interventions was 0.90, and 93% of extracted attributes were correctly associated with their corresponding treatment entities.
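A compositional representation of the kind described can be modeled as a small tree in which leaf nodes are treatment entities carrying attributes (dose, route, frequency) and internal nodes combine components with Boolean or temporal operators. The sketch below is a hypothetical illustration under those assumptions, not the representation used in the study; the `Treatment`/`Compound` classes and the `AND`/`OR`/`THEN` operators are invented for demonstration.

```python
# Hypothetical compositional representation of a complex intervention
# (NOT the authors' schema): leaves are treatments with attributes,
# internal nodes combine components with AND (concurrent), OR
# (alternative), or THEN (sequential) operators.
from dataclasses import dataclass, field
from typing import Dict, List, Union


@dataclass
class Treatment:
    name: str                                   # e.g. "metformin"
    attributes: Dict[str, str] = field(default_factory=dict)  # dose, route, frequency...


@dataclass
class Compound:
    operator: str                               # "AND" | "OR" | "THEN" (assumed operator set)
    components: List["Intervention"]


Intervention = Union[Treatment, Compound]


def to_text(node: Intervention) -> str:
    """Flatten the intervention tree back into a readable expression."""
    if isinstance(node, Treatment):
        attrs = ", ".join(f"{k}={v}" for k, v in node.attributes.items())
        return f"{node.name}({attrs})" if attrs else node.name
    inner = f" {node.operator} ".join(to_text(c) for c in node.components)
    return f"({inner})"


# Example arm: "metformin 500 mg twice daily plus lifestyle advice, followed by insulin"
arm = Compound("THEN", [
    Compound("AND", [
        Treatment("metformin", {"dose": "500 mg", "frequency": "twice daily"}),
        Treatment("lifestyle advice"),
    ]),
    Treatment("insulin"),
])
print(to_text(arm))
```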


Subjects
Information Storage and Retrieval, Natural Language Processing, Humans