Abstract
Recent work has shown that visual context improves cross-lingual sense disambiguation for nouns. We extend this line of work to the more challenging task of cross-lingual verb sense disambiguation, introducing MultiSense, a dataset of 9,504 images each annotated with an English verb and its translation in German or Spanish. We show that cross-lingual verb sense disambiguation models benefit from visual context, compared to unimodal baselines. We also show that the verb sense predicted by our best disambiguation model can improve the results of a text-only machine translation system when used for a multimodal translation task.
| Original language | Undefined/Unknown |
| --- | --- |
| Title of host publication | Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) |
| Number of pages | 7 |
| Place of Publication | Minneapolis, Minnesota |
| Publisher | Association for Computational Linguistics (ACL) |
| Publication date | 1 Jun 2019 |
| Pages | 1998-2004 |
| DOIs | |
| Publication status | Published - 1 Jun 2019 |