Lessons learned in multilingual grounded language learning

Ákos Kádár, Desmond Elliott, Marc-Alexandre Côté, Grzegorz Chrupala, Afra Alishahi

    10 Citations (Scopus)

    Abstract

    Recent work has shown how to learn better visual-semantic embeddings by leveraging image descriptions in more than one language. Here, we investigate in detail which conditions affect the performance of this type of grounded language learning model. We show that multilingual training improves over bilingual training, and that low-resource languages benefit from training with higher-resource languages. We demonstrate that a multilingual model can be trained equally well on either translations or comparable sentence pairs, and that annotating the same set of images in multiple languages enables further improvements via an additional caption-caption ranking objective.
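    To make the objective concrete, the following is a minimal sketch (not the authors' released code) of the kind of max-margin ranking loss commonly used for visual-semantic embeddings, extended with a caption-caption term of the sort the abstract describes. The function names, margin value, cosine similarity choice, and the two-language setup are assumptions made purely for illustration.

    ```python
    # Hedged sketch: bidirectional max-margin ranking over normalised embeddings,
    # plus an optional caption-caption term when the same image is captioned in
    # two languages. All hyperparameters here are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def ranking_loss(a, b, margin=0.2):
        """Bidirectional max-margin ranking loss between two batches of
        embeddings a and b, each of shape (batch_size, dim)."""
        a = F.normalize(a, dim=1)
        b = F.normalize(b, dim=1)
        scores = a @ b.t()                       # cosine similarity matrix
        pos = scores.diag().view(-1, 1)          # matching pairs on the diagonal
        cost_a = (margin + scores - pos).clamp(min=0)      # retrieve b given a
        cost_b = (margin + scores - pos.t()).clamp(min=0)  # retrieve a given b
        mask = torch.eye(scores.size(0), dtype=torch.bool)
        cost_a = cost_a.masked_fill(mask, 0)     # ignore the positive pairs
        cost_b = cost_b.masked_fill(mask, 0)
        return cost_a.sum() + cost_b.sum()

    def multilingual_loss(img, cap_lang1, cap_lang2, use_caption_caption=True):
        """Image-caption ranking for each language, with an optional
        caption-caption ranking term across the two languages."""
        loss = ranking_loss(img, cap_lang1) + ranking_loss(img, cap_lang2)
        if use_caption_caption:
            loss = loss + ranking_loss(cap_lang1, cap_lang2)
        return loss
    ```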

    Original language: Undefined/Unknown
    Title: Conference on Computational Natural Language Learning
    Number of pages: 11
    Publication date: 2018
    Pages: 402-412
    Status: Published - 2018
