Ladder variational autoencoders

Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, Ole Winther

167 Citations (Scopus)

Abstract

Variational autoencoders are powerful models for unsupervised learning. However, deep models with several layers of dependent stochastic variables are difficult to train, which limits the improvements obtained with these highly expressive models. We propose a new inference model, the Ladder Variational Autoencoder, that recursively corrects the generative distribution with a data-dependent approximate likelihood, in a process resembling the recently proposed Ladder Network. We show that this model provides state-of-the-art predictive log-likelihood and a tighter log-likelihood lower bound than purely bottom-up inference in layered Variational Autoencoders and other generative models. We provide a detailed analysis of the learned hierarchical latent representation and show that our new inference model is qualitatively different and utilizes a deeper, more distributed hierarchy of latent variables. Finally, we observe that batch normalization and deterministic warm-up (gradually turning on the KL term) are crucial for training variational models with many stochastic layers.
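The two training mechanisms the abstract names can be sketched in a few lines. The first is the Ladder VAE's precision-weighted combination of the bottom-up (data-dependent) Gaussian with the top-down (generative) Gaussian; the second is deterministic warm-up, which linearly ramps the KL weight from 0 to 1. This is a minimal illustrative sketch, not the authors' implementation; the function names and the warm-up length are assumptions for illustration.

```python
def precision_weighted_merge(mu_q, var_q, mu_p, var_p):
    """Combine a bottom-up Gaussian (mu_q, var_q) with the top-down
    generative Gaussian (mu_p, var_p) by weighting each mean with its
    precision (inverse variance), per latent layer of the Ladder VAE."""
    prec_q, prec_p = 1.0 / var_q, 1.0 / var_p
    var = 1.0 / (prec_q + prec_p)           # combined variance
    mu = var * (mu_q * prec_q + mu_p * prec_p)  # precision-weighted mean
    return mu, var

def warmup_beta(epoch, n_warmup=200):
    """Deterministic warm-up: the KL term in the ELBO is scaled by a
    weight that rises linearly from 0 to 1 over n_warmup epochs
    (n_warmup=200 is an assumed value, not taken from the paper)."""
    return min(1.0, epoch / n_warmup)
```

For example, merging two equally confident Gaussians with means 0 and 2 yields a mean halfway between them and a halved variance, since the precisions add.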

Original language: English
Title of host publication: Advances in Neural Information Processing Systems 29 (NIPS 2016)
Editors: D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, R. Garnett
Number of pages: 9
Publisher: Curran Associates, Inc.
Publication date: 2016
Pages: 3745-3753
Publication status: Published - 2016
Event: 30th Annual Conference on Neural Information Processing Systems - Barcelona, Spain
Duration: 5 Dec 2016 - 10 Dec 2016
Conference number: 30

Conference

Conference: 30th Annual Conference on Neural Information Processing Systems
Number: 30
Country/Territory: Spain
City: Barcelona
Period: 05/12/2016 - 10/12/2016
Series: Advances in Neural Information Processing Systems
Volume: 29
ISSN: 1049-5258
