When does deep multi-task learning work for loosely related document classification tasks?

Emma Kerinec, Anders Søgaard, Chloé Braud

    Abstract

    This work aims to contribute to our understanding of when multi-task learning through parameter sharing in deep neural networks leads to improvements over single-task learning. We focus on the setting of learning from loosely related tasks, for which no theoretical guarantees exist. We therefore approach the question empirically, studying which properties of datasets and single-task learning characteristics correlate with improvements from multi-task learning. We are the first to study this in a text classification setting and across more than 500 different task pairs.
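    The abstract refers to multi-task learning through parameter sharing in deep neural networks. Below is a minimal sketch of hard parameter sharing for two document classification tasks, assuming a PyTorch bag-of-embeddings encoder; the architecture, layer sizes, and task names are illustrative and not the authors' exact model.

```python
# Minimal sketch of hard parameter sharing for two document classification
# tasks. Illustrative only: layer sizes and the encoder are assumptions,
# not the paper's exact setup.
import torch
import torch.nn as nn

class SharedEncoderMTL(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, n_classes_a, n_classes_b):
        super().__init__()
        # Shared layers: gradients from both tasks update these parameters.
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim)  # mean-pooled bag of embeddings
        self.shared = nn.Sequential(nn.Linear(embed_dim, hidden_dim), nn.ReLU())
        # Task-specific output heads: each task keeps its own classifier.
        self.head_a = nn.Linear(hidden_dim, n_classes_a)
        self.head_b = nn.Linear(hidden_dim, n_classes_b)

    def forward(self, token_ids, task):
        h = self.shared(self.embed(token_ids))
        return self.head_a(h) if task == "a" else self.head_b(h)

# Training alternates between tasks, so the shared encoder is shaped by both.
model = SharedEncoderMTL(vocab_size=10000, embed_dim=64, hidden_dim=128,
                         n_classes_a=2, n_classes_b=5)
optimizer = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

batch = torch.randint(0, 10000, (8, 20))   # 8 documents, 20 token ids each
labels_a = torch.randint(0, 2, (8,))       # labels for task "a"
loss = loss_fn(model(batch, task="a"), labels_a)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

    Whether such sharing helps or hurts for a given pair of loosely related tasks is exactly the empirical question the paper studies.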
    Original language: English
    Title: Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
    Publisher: Association for Computational Linguistics
    Publication date: 2018
    Pages: 1-8
    Status: Published - 2018
    Event: 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP - Brussels, Belgium
    Duration: 1 Nov 2018 - 1 Nov 2018

    Workshop

    Workshop: 2018 EMNLP Workshop BlackboxNLP
    Country/Region: Belgium
    City: Brussels
    Period: 01/11/2018 - 01/11/2018