Abstract
We present an overview of the 2nd edition of the CheckThat! Lab, part of CLEF 2019, with a focus on Task 1: Check-Worthiness in political debates. The task asks participants to predict which claims in a political debate should be prioritized for fact-checking. In particular, given a debate or a political speech, the goal is to produce a ranked list of its sentences based on their worthiness for fact-checking. This year, we extended the 2018 dataset with 16 more debates and speeches. A total of 47 teams registered to participate in the lab, and eleven of them actually submitted runs for Task 1 (compared to seven last year). The evaluation results show that the most successful approaches to Task 1 used various neural networks and logistic regression. The best system achieved a mean average precision of 0.166 (0.250 on the speeches, and 0.054 on the debates). This leaves large room for improvement, and thus we release all datasets and scoring scripts, which should enable further research in check-worthiness estimation.
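Systems are evaluated by mean average precision (MAP) over the ranked sentence lists. The sketch below is an illustrative computation of MAP from per-debate binary relevance judgments ordered by a system's ranking; it is not the lab's official scoring script, and the function names are our own.

```python
def average_precision(ranked_labels):
    """Average precision for one debate/speech.

    ranked_labels: list of 0/1 check-worthiness judgments, ordered by
    the system's ranking (position 1 first). Returns 0.0 if there are
    no check-worthy sentences.
    """
    hits = 0
    precisions = []
    for rank, relevant in enumerate(ranked_labels, start=1):
        if relevant:
            hits += 1
            precisions.append(hits / rank)  # precision at this rank
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(per_debate_labels):
    """MAP: the mean of average precision across debates/speeches."""
    return sum(average_precision(labels) for labels in per_debate_labels) / len(per_debate_labels)
```

For example, a ranking that places a check-worthy sentence first and third out of three scores (1/1 + 2/3) / 2 ≈ 0.833 for that debate; MAP then averages such scores across all debates and speeches in the test set.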
| Original language | English |
|---|---|
| Journal | CEUR Workshop Proceedings |
| Volume | 2380 |
| ISSN | 1613-0073 |
| Publication status | Published - 2019 |
| Event | 20th Working Notes of CLEF Conference and Labs of the Evaluation Forum, CLEF 2019, Lugano, Switzerland, 9 Sept 2019 → 12 Sept 2019 |
Conference
| Conference | 20th Working Notes of CLEF Conference and Labs of the Evaluation Forum, CLEF 2019 |
|---|---|
| Country/Territory | Switzerland |
| City | Lugano |
| Period | 09/09/2019 → 12/09/2019 |
Keywords
- Check-worthiness estimation
- Computational journalism
- Fact-checking
- Veracity