Abstract
This paper presents a challenge to the community: generative adversarial networks (GANs) can perfectly align independent English word embeddings induced using the same algorithm, based on distributional information alone, but fail to do so for two different embedding algorithms. Why is that? We believe understanding why is key to understanding both modern word embedding algorithms and the limitations and instability dynamics of GANs. This paper shows that (a) in all these cases where alignment fails, there exists a linear transform between the two embeddings (so algorithm biases do not lead to non-linear differences), and (b) similar effects cannot easily be obtained by varying hyper-parameters. One plausible suggestion based on our initial experiments is that the differences in the inductive biases of the embedding algorithms lead to an optimization landscape that is riddled with local optima, leading to a very small basin of convergence, but we present this more as a challenge paper than a technical contribution.
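As a rough illustration of claim (a), the sketch below shows one way a linear transform between two embedding spaces could be estimated and evaluated. This is not the authors' code; the matrices `X` and `Y` are hypothetical stand-ins for embeddings of a shared vocabulary produced by two different algorithms, and the random data is only a placeholder.

```python
# Minimal sketch (assumed setup, not the paper's implementation): fit a linear
# map between two embedding matrices for the same vocabulary and measure how
# well it aligns them. X and Y are placeholders for embeddings from two
# different algorithms (e.g. skip-gram vs. GloVe).
import numpy as np

rng = np.random.default_rng(0)
d, n = 300, 5000                      # embedding dimension, vocabulary size
X = rng.standard_normal((n, d))       # placeholder embeddings, algorithm A
Y = X @ rng.standard_normal((d, d))   # placeholder embeddings, algorithm B

# Unconstrained least-squares linear map: W = argmin ||X W - Y||_F
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Orthogonal Procrustes solution (rotation only), commonly used for
# supervised embedding alignment: X^T Y = U S V^T, Q = U V^T
U, _, Vt = np.linalg.svd(X.T @ Y)
Q = U @ Vt

def mean_cosine(A, B):
    """Average cosine similarity between corresponding rows of A and B."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return float((A * B).sum(axis=1).mean())

print("linear map fit:  ", mean_cosine(X @ W, Y))
print("orthogonal fit:  ", mean_cosine(X @ Q, Y))
```

A high fit for the unconstrained map but a low score for a GAN-induced mapping would be consistent with the paper's point: the obstacle is not the absence of a linear transform but the optimization landscape the GAN has to navigate.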
Original language | English |
---|---|
Title | Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing |
Publisher | Association for Computational Linguistics |
Publication date | 2018 |
Pages | 582–586 |
Status | Published - 2018 |
Event | 2018 Conference on Empirical Methods in Natural Language Processing - Brussels, Belgium |
Duration | 31 Oct 2018 → 4 Nov 2018 |
Conference

Conference | 2018 Conference on Empirical Methods in Natural Language Processing |
---|---|
Country/Territory | Belgium |
City | Brussels |
Period | 31/10/2018 → 04/11/2018 |