Why is unsupervised alignment of English embeddings from different algorithms so hard?

    Abstract

    This paper presents a challenge to the community: generative adversarial networks (GANs) can perfectly align independent English word embeddings induced using the same algorithm, based on distributional information alone, but fail to do so for two different embedding algorithms. Why is that? We believe understanding why is key to understanding both modern word embedding algorithms and the limitations and instability dynamics of GANs. This paper shows that (a) in all the cases where alignment fails, there exists a linear transform between the two embeddings (so algorithm biases do not lead to non-linear differences), and (b) similar effects cannot easily be obtained by varying hyper-parameters. One plausible suggestion based on our initial experiments is that the differences in the inductive biases of the embedding algorithms lead to an optimization landscape that is riddled with local optima, leading to a very small basin of convergence, but we present this more as a challenge paper than a technical contribution.
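
    Claim (a) in the abstract, that a linear transform exists between embeddings produced by the two algorithms, can be probed empirically. The sketch below is not from the paper; the variable names, the synthetic data, and the choice of least squares and orthogonal Procrustes are illustrative assumptions. It fits a linear map between two embedding matrices over a shared vocabulary and reports how well that map explains one space in terms of the other.

        import numpy as np

        rng = np.random.default_rng(0)

        # X, Y: (n_words, dim) embeddings of the same words from two algorithms.
        # Here Y is synthesised as a noisy linear image of X purely for the demo.
        X = rng.standard_normal((5000, 300))
        W_true = rng.standard_normal((300, 300))
        Y = X @ W_true + 0.01 * rng.standard_normal((5000, 300))

        # Unconstrained linear fit: min_W ||X W - Y||_F via least squares.
        W, *_ = np.linalg.lstsq(X, Y, rcond=None)
        relative_residual = np.linalg.norm(X @ W - Y) / np.linalg.norm(Y)
        print(f"relative residual of the linear fit: {relative_residual:.4f}")

        # Orthogonal Procrustes variant: constrain the map to a rotation, the
        # constraint commonly used in supervised cross-space embedding alignment.
        U, _, Vt = np.linalg.svd(X.T @ Y)
        W_orth = U @ Vt
        print("map is orthogonal:", np.allclose(W_orth @ W_orth.T, np.eye(300), atol=1e-6))

    With real embeddings in place of the synthetic X and Y, a small relative residual would indicate that the two spaces are close to linearly related, which is what claim (a) asserts even in the cases where GAN-based alignment fails.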
    Original language: English
    Title of host publication: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
    Publisher: Association for Computational Linguistics
    Publication date: 2018
    Pages: 582–586
    Publication status: Published - 2018
    Event: 2018 Conference on Empirical Methods in Natural Language Processing - Brussels, Belgium
    Duration: 31 Oct 2018 – 4 Nov 2018

    Conference

    Conference: 2018 Conference on Empirical Methods in Natural Language Processing
    Country/Territory: Belgium
    City: Brussels
    Period: 31/10/2018 – 04/11/2018