Abstract
This paper presents a challenge to the community: generative adversarial networks (GANs) can perfectly align independent English word embeddings induced using the same algorithm, based on distributional information alone, but fail to do so for two different embedding algorithms. Why is that? We believe that understanding why is key to understanding both modern word embedding algorithms and the limitations and instability dynamics of GANs. This paper shows that (a) in all the cases where alignment fails, there exists a linear transform between the two embeddings (so algorithm biases do not lead to non-linear differences), and (b) similar effects cannot easily be obtained by varying hyper-parameters. One plausible suggestion based on our initial experiments is that differences in the inductive biases of the embedding algorithms lead to an optimization landscape that is riddled with local optima, leading to a very small basin of convergence; but we present this more as a challenge paper than a technical contribution.
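Claim (a), that a linear transform exists between the two embedding spaces even when GAN-based alignment fails, can be checked with an orthogonal Procrustes fit. The sketch below is an illustrative reconstruction, not the authors' code: the matrices, dimensions, and noise level are stand-ins chosen for demonstration, and in practice the rows of the two embedding matrices would be paired via a seed dictionary.

```python
# Minimal sketch (assumed setup, not the paper's implementation): fit an
# orthogonal linear map W between two embedding spaces and measure how well
# it aligns them. A small residual indicates a linear transform exists.
import numpy as np

def procrustes_transform(X, Y):
    """Closed-form solution of min_W ||X W - Y||_F over orthogonal W (via SVD)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
n_words, dim = 5000, 300
X = rng.normal(size=(n_words, dim))               # stand-in for embeddings from algorithm A
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))  # hidden orthogonal map between the spaces
Y = X @ Q + 0.01 * rng.normal(size=(n_words, dim))  # stand-in for embeddings from algorithm B

W = procrustes_transform(X, Y)
residual = np.linalg.norm(X @ W - Y) / np.linalg.norm(Y)
print(f"relative alignment error: {residual:.4f}")  # small value -> linear map recovered
```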
Original language | English |
---|---|
Title of host publication | Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing |
Publisher | Association for Computational Linguistics |
Publication date | 2018 |
Pages | 582–586 |
Publication status | Published - 2018 |
Event | 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. Duration: 31 Oct 2018 → 4 Nov 2018 |
Conference
Conference | 2018 Conference on Empirical Methods in Natural Language Processing |
---|---|
Country/Territory | Belgium |
City | Brussels |
Period | 31/10/2018 → 04/11/2018 |