Parameter sharing between dependency parsers for related languages

Miryam de Lhoneux, Johannes Bjerva, Isabelle Augenstein, Anders Søgaard

    Abstract

    Previous work has suggested that parameter sharing between transition-based neural dependency parsers for related languages can lead to better performance, but there is no consensus on which parameters to share. We present an evaluation of 27 different parameter sharing strategies across 10 languages, representing five pairs of related languages, each pair from a different language family. We find that sharing transition classifier parameters always helps, whereas the usefulness of sharing word and/or character LSTM parameters varies. Based on this result, we propose an architecture where the transition classifier is shared, and the sharing of word and character parameters is controlled by a parameter that can be tuned on validation data. This model is linguistically motivated and obtains significant improvements over a monolingually trained baseline. We also find that sharing transition classifier parameters helps when training a parser on unrelated language pairs, but that, for unrelated languages, sharing too many parameters does not help.
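
    The sharing scheme described in the abstract can be illustrated with a minimal sketch, assuming a PyTorch-style implementation; all module and parameter names below (e.g. SharedPairParser, share_word_lstm) are hypothetical, and the sketch deliberately omits the transition system and character-level BiLSTM of the actual parser. The point is only the wiring: the transition classifier is always shared between the two related languages, while the word-level BiLSTM is either shared or kept language-specific, controlled by a flag that would be tuned on validation data.

    ```python
    import torch.nn as nn


    class SharedPairParser(nn.Module):
        """Hypothetical sketch of soft parameter sharing between two related languages."""

        def __init__(self, langs=("lang_a", "lang_b"), word_dim=100, hidden_dim=125,
                     n_transitions=4, share_word_lstm=True):
            super().__init__()

            def make_word_lstm():
                return nn.LSTM(word_dim, hidden_dim, bidirectional=True, batch_first=True)

            # Word-level feature extractor: one shared BiLSTM instance, or one per
            # language, depending on the sharing flag (the character BiLSTM would be
            # handled analogously).
            if share_word_lstm:
                shared = make_word_lstm()
                self.word_lstm = nn.ModuleDict({lang: shared for lang in langs})
            else:
                self.word_lstm = nn.ModuleDict({lang: make_word_lstm() for lang in langs})

            # Transition classifier (an MLP over BiLSTM features): always shared,
            # regardless of how the feature extractors are configured.
            self.classifier = nn.Sequential(
                nn.Linear(2 * hidden_dim, hidden_dim),
                nn.Tanh(),
                nn.Linear(hidden_dim, n_transitions),
            )

        def forward(self, lang, word_embs):
            # word_embs: (batch, seq_len, word_dim) for sentences of language `lang`.
            feats, _ = self.word_lstm[lang](word_embs)
            return self.classifier(feats)
    ```

    In this sketch, setting share_word_lstm on or off for a given language pair plays the role of the tunable sharing parameter mentioned in the abstract: for closely related pairs more components can be shared, while for unrelated pairs only the classifier is tied.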

    Original language: English
    Title of host publication: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
    Publisher: Association for Computational Linguistics
    Publication date: 2018
    Publication status: Published - 2018
    Event: 2018 Conference on Empirical Methods in Natural Language Processing - Brussels, Belgium
    Duration: 31 Oct 2018 - 4 Nov 2018

