Abstract
This paper describes the winning approach used by the Copenhagen team in the CLEF-2019 CheckThat! lab. Given a political debate or speech, the aim is to predict which sentences should be prioritized for fact-checking by producing a ranked list of sentences. While many approaches to check-worthiness exist, we are the first to directly optimize the sentence ranking, as all previous work has relied solely on standard classification-based loss functions. We present a recurrent neural network model that learns a sentence encoding, from which a check-worthiness score is predicted. The model is trained by jointly optimizing a binary cross-entropy loss and a ranking-based pairwise hinge loss. We obtain sentence pairs for training through contrastive sampling, where for each sentence we find the k most semantically similar sentences with the opposite label. To increase the generalizability of the model, we employ weak supervision, using an existing check-worthiness approach to weakly label a large unlabeled dataset. We show experimentally that weak supervision and the ranking component each improve the results individually (MAP increases of 25% and 9%, respectively), and that combining them improves the results even further (39% increase). In a comparison to existing state-of-the-art check-worthiness methods, our approach improves the MAP score by 11%.
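To make the training objective concrete, the sketch below illustrates the kind of joint loss and contrastive pair sampling the abstract describes. It is a minimal PyTorch sketch under stated assumptions: the encoder architecture, the margin, the loss weighting `alpha`, and the default `k` are illustrative placeholders, not the settings used in the paper.

```python
# Minimal sketch of a jointly optimized BCE + pairwise hinge objective with
# contrastive pair sampling. All hyperparameters below are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CheckWorthinessScorer(nn.Module):
    """Recurrent sentence encoder followed by a scalar check-worthiness score."""
    def __init__(self, emb_dim=300, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, sent_embs):                  # (batch, seq_len, emb_dim)
        _, (h, _) = self.rnn(sent_embs)
        enc = torch.cat([h[-2], h[-1]], dim=-1)    # final forward/backward states
        return self.out(enc).squeeze(-1)           # unnormalized score per sentence

def contrastive_pairs(sent_vecs, labels, k=3):
    """For each sentence, pair it with its k most semantically similar
    sentences (cosine similarity of sentence vectors) that have the opposite label."""
    sims = F.cosine_similarity(sent_vecs.unsqueeze(1), sent_vecs.unsqueeze(0), dim=-1)
    pairs = []
    for i in range(len(labels)):
        opposite = (labels != labels[i]).nonzero(as_tuple=True)[0]
        if len(opposite) == 0:
            continue
        top = opposite[sims[i, opposite].topk(min(k, len(opposite))).indices]
        for j in top:
            # orient each pair as (check-worthy index, non-check-worthy index)
            pairs.append((i, j.item()) if labels[i] == 1 else (j.item(), i))
    return pairs

def joint_loss(scores, labels, pos_scores, neg_scores, alpha=0.5, margin=1.0):
    """Binary cross-entropy on individual sentences plus a pairwise hinge loss
    that pushes check-worthy sentences above non-check-worthy ones by a margin."""
    bce = F.binary_cross_entropy_with_logits(scores, labels.float())
    hinge = F.relu(margin - (pos_scores - neg_scores)).mean()
    return alpha * bce + (1 - alpha) * hinge
```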
| Original language | English |
| ---| --- |
| Journal | CEUR Workshop Proceedings |
| Volume | 2380 |
| Number of pages | 8 |
| ISSN | 1613-0073 |
| Status | Published - 2019 |
| Event | 20th Working Notes of CLEF Conference and Labs of the Evaluation Forum, CLEF 2019 - Lugano, Switzerland. Duration: 9 Sep 2019 → 12 Sep 2019 |
Conference

| Conference | 20th Working Notes of CLEF Conference and Labs of the Evaluation Forum, CLEF 2019 |
| ---| --- |
| Country/Territory | Switzerland |
| City | Lugano |
| Period | 09/09/2019 → 12/09/2019 |