Learning Language-Independent Representations of Verbs and Adjectives from Multimodal Retrieval

Abstract

This paper presents a simple modification to previous work on learning cross-lingual, grounded word representations from image-word pairs. Unlike that earlier work, our approach is robust across different parts of speech: for example, it can find the translation of the adjective 'social' relying only on image features associated with its translation candidates. Our method does not rely on black-box image search engines or any direct cross-lingual supervision. We evaluate our approach on English-German and English-Japanese word alignment, as well as on existing English-German bilingual dictionary induction datasets.
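The abstract describes the approach only at a high level. As a rough, hedged illustration of the general image-based translation retrieval setup it builds on (not the paper's exact model), the sketch below represents each word by the average of CNN image features associated with it and ranks translation candidates by cosine similarity in that shared visual space. The feature dimensionality, the averaging step, and the example candidate words are assumptions made purely for illustration.

import numpy as np

def word_vector(image_features: np.ndarray) -> np.ndarray:
    """Average a word's image feature matrix (n_images x dim) into one vector.
    This mean-pooling is an assumption, not necessarily the paper's aggregation."""
    return image_features.mean(axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rank_translations(source_images: np.ndarray,
                      candidate_images: dict) -> list:
    """Rank target-language candidates for one source word by visual similarity."""
    src = word_vector(source_images)
    scores = {w: cosine(src, word_vector(feats)) for w, feats in candidate_images.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy usage with random 2048-dimensional features (a ResNet-like dimensionality,
# assumed here); real features would come from images paired with each word.
rng = np.random.default_rng(0)
en_social = rng.normal(size=(20, 2048))                  # images associated with 'social'
de_candidates = {w: rng.normal(size=(20, 2048)) for w in ["sozial", "Hund", "laufen"]}
print(rank_translations(en_social, de_candidates))

Because the comparison happens entirely in visual feature space, no cross-lingual supervision or image search engine is needed at ranking time; this is the property the abstract highlights for verbs and adjectives such as 'social'.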

Original language: English
Title of host publication: Proceedings - 14th International Conference on Signal-Image Technology and Internet Based Systems, SITIS
Number of pages: 8
Publisher: IEEE
Publication date: 2 Jul. 2018
Pages: 427-434
ISBN (electronic): 978-1-5386-9385-8
DOI
Status: Published - 2 Jul. 2018
Event: 14th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2018 - Las Palmas de Gran Canaria, Spain
Duration: 26 Nov. 2018 - 29 Nov. 2018

Conference

Conference: 14th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2018
Country/Territory: Spain
City: Las Palmas de Gran Canaria
Period: 26/11/2018 - 29/11/2018
