Neural weakly supervised fact check-worthiness detection with contrastive sampling-based ranking loss

    Abstract

    This paper describes the winning approach used by the Copenhagen team in the CLEF-2019 CheckThat! lab. Given a political debate or speech, the aim is to predict which sentences should be prioritized for fact-checking by producing a ranked list of sentences. While many approaches to check-worthiness detection exist, we are the first to directly optimize the sentence ranking, as all previous work has relied solely on standard classification-based loss functions. We present a recurrent neural network model that learns a sentence encoding, from which a check-worthiness score is predicted. The model is trained by jointly optimizing a binary cross-entropy loss and a ranking-based pairwise hinge loss. We obtain sentence pairs for training through contrastive sampling, where for each sentence we find the k most semantically similar sentences with the opposite label. To increase the generalizability of the model, we apply weak supervision, using an existing check-worthiness approach to weakly label a large unlabeled dataset. We experimentally show that weak supervision and the ranking component each improve the results individually (MAP increases of 25% and 9%, respectively), and improve them even more when used together (39% increase). Through a comparison to existing state-of-the-art check-worthiness methods, we find that our approach improves the MAP score by 11%.
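    Below is a minimal sketch, not the authors' released code, of the two components described in the abstract: contrastive sampling of sentence pairs and the joint binary cross-entropy plus pairwise hinge ranking loss over a recurrent sentence encoder. PyTorch is assumed, and all names (CheckWorthinessScorer, contrastive_pairs, joint_loss) and hyperparameters are illustrative assumptions rather than values taken from the paper.

    # Sketch only: joint BCE + pairwise hinge ranking loss with contrastive sampling.
    import torch
    import torch.nn as nn

    class CheckWorthinessScorer(nn.Module):
        """Recurrent sentence encoder followed by a scalar check-worthiness score."""
        def __init__(self, vocab_size=10000, emb_dim=100, hidden_dim=128):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
            self.rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden_dim, 1)

        def forward(self, tokens):
            # tokens: (batch, seq_len) integer ids
            h, _ = self.rnn(self.emb(tokens))
            sent = h.mean(dim=1)                  # mean-pooled sentence encoding
            return self.out(sent).squeeze(-1)     # unnormalized check-worthiness score

    def contrastive_pairs(embeddings, labels, k=5):
        """For each sentence, pair it with its k most cosine-similar sentences
        carrying the opposite label (contrastive sampling)."""
        sims = nn.functional.cosine_similarity(
            embeddings.unsqueeze(1), embeddings.unsqueeze(0), dim=-1)
        pairs = []
        for i in range(len(labels)):
            opposite = (labels != labels[i]).nonzero(as_tuple=True)[0]
            if len(opposite) == 0:
                continue
            topk = opposite[sims[i, opposite].topk(min(k, len(opposite))).indices]
            for j in topk:
                # orient each pair so the check-worthy sentence comes first
                pairs.append((i, j.item()) if labels[i] == 1 else (j.item(), i))
        return pairs

    def joint_loss(scores, labels, pos_pair_scores, neg_pair_scores,
                   margin=1.0, ranking_weight=1.0):
        """BCE on individual sentence labels plus a hinge loss pushing each
        check-worthy sentence to score at least `margin` above its sampled negative."""
        bce = nn.functional.binary_cross_entropy_with_logits(scores, labels.float())
        hinge = torch.clamp(margin - (pos_pair_scores - neg_pair_scores), min=0).mean()
        return bce + ranking_weight * hinge

    In this sketch the two loss terms are simply summed with a tunable weight; how the paper balances the classification and ranking objectives, and which similarity space it uses for contrastive sampling, should be taken from the paper itself.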

    Original language: English
    Journal: CEUR Workshop Proceedings
    Volume: 2380
    Number of pages: 8
    ISSN: 1613-0073
    Publication status: Published - 2019
    Event: 20th Working Notes of CLEF Conference and Labs of the Evaluation Forum, CLEF 2019 - Lugano, Switzerland
    Duration: 9 Sept 2019 - 12 Sept 2019

    Conference

    Conference: 20th Working Notes of CLEF Conference and Labs of the Evaluation Forum, CLEF 2019
    Country/Territory: Switzerland
    City: Lugano
    Period: 09/09/2019 - 12/09/2019

    Keywords

    • Contrastive ranking
    • Fact check-worthiness
    • Neural networks
