BeaSku at CheckThat! 2021: Fine-tuning sentence BERT with triplet loss and limited data
2021 Working Notes of CLEF - Conference and Labs of the Evaluation Forum, CLEF-WN 2021 ; 2936:639-647, 2021.
Article in English | Scopus | ID: covidwho-1391319
ABSTRACT
Misinformation and disinformation are growing problems online. The negative consequences of the proliferation of false claims became especially apparent during the COVID-19 pandemic. Thus, there is a need to detect and track false claims. However, this is a slow and time-consuming process, especially when done manually. At the same time, the same claims, with small variations, spread simultaneously across many accounts and even across different platforms. One promising approach is to develop systems for detecting new instances of claims that have been previously fact-checked online, as in the CLEF-2021 CheckThat! Lab Task 2b. Here we describe our system for this task. We fine-tuned Sentence-BERT using triplet loss, and we experimented with two types of augmented datasets. We further combined BM25 scores with language model similarity scores as features in a reranker. The official evaluation results placed our BeaSku system second. © 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
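The two ingredients named in the abstract, a triplet loss for fine-tuning the sentence encoder and a reranker that combines BM25 with embedding similarity, can be sketched in plain Python. This is a minimal illustration rather than the authors' implementation: the triplet loss is shown on raw embedding vectors instead of inside a training loop, and the `rerank_score` helper and its weight `w` are assumptions standing in for the paper's learned reranker features.

```python
import math

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Triplet margin loss on embedding vectors: pushes the anchor
    # closer to the positive than to the negative by at least `margin`.
    d_pos = math.dist(anchor, positive)
    d_neg = math.dist(anchor, negative)
    return max(0.0, d_pos - d_neg + margin)

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def bm25_scores(query_tokens, docs, k1=1.5, b=0.75):
    # Okapi BM25 over a small tokenized corpus (docs: list of token lists).
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    df = {}
    for d in docs:
        for t in set(d):
            df[t] = df.get(t, 0) + 1
    scores = []
    for d in docs:
        s = 0.0
        for t in query_tokens:
            if t not in df:
                continue
            idf = math.log((n - df[t] + 0.5) / (df[t] + 0.5) + 1.0)
            tf = d.count(t)
            s += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

def rerank_score(bm25, sim, w=0.5):
    # Hypothetical linear combination of the two reranker features;
    # the paper feeds both scores to a learned reranker instead.
    return w * bm25 + (1 - w) * sim
```

In practice the triplet loss would be applied during fine-tuning (e.g. via a library such as Sentence-Transformers), with fact-checked claims as positives and unrelated claims as negatives, and the reranker would score each candidate previously-fact-checked claim against the input tweet.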
Collection: Databases of international organizations
Database: Scopus
Language: English
Journal: 2021 Working Notes of CLEF - Conference and Labs of the Evaluation Forum, CLEF-WN 2021
Year: 2021
Document Type: Article
