Abstract
Deep belief networks (DBNs) can approximate any distribution over fixed-length binary vectors. However, DBNs are frequently applied to model real-valued data, and so far little is known about their representational power in this case. We analyze the approximation properties of DBNs with two layers of binary hidden units and visible units with conditional distributions from the exponential family. It is shown that these DBNs can, under mild assumptions, model any additive mixture of distributions from the exponential family with independent variables. An arbitrarily good approximation in terms of Kullback-Leibler divergence of an m-dimensional mixture distribution with n components can be achieved by a DBN with m visible variables and n and n + 1 hidden variables in the first and second hidden layer, respectively. Furthermore, relevant infinite mixtures can be approximated arbitrarily well by a DBN with a finite number of neurons. This includes the important special case of an infinite mixture of Gaussian distributions with fixed variance restricted to a compact domain, which in turn can approximate any strictly positive density over this domain.
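The architecture analyzed above, m visible units with exponential-family conditionals below two binary hidden layers of sizes n and n + 1, can be illustrated with a small sampling sketch. The following is a minimal, hypothetical illustration (the function and variable names are illustrative, not from the paper) of ancestral sampling from such a DBN with Gaussian visible units of fixed variance: the two binary hidden layers form an RBM that is approximately sampled by Gibbs steps, after which the visible units are drawn from their conditional Gaussian distribution.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_dbn(W_top, b1, b2, W_vis, b_vis, sigma=1.0, gibbs_steps=200, rng=None):
    """Draw one sample v from a DBN with binary hidden layers h1 (n units) and
    h2 (n + 1 units) and m Gaussian visible units with fixed variance sigma^2.

    W_top : (n, n+1) weights of the top RBM coupling h1 and h2
    b1    : (n,)     biases of h1;  b2 : (n+1,) biases of h2
    W_vis : (m, n)   weights of the directed layer p(v | h1)
    b_vis : (m,)     visible biases
    (All names here are assumptions made for this sketch.)
    """
    rng = np.random.default_rng() if rng is None else rng
    n, n_top = W_top.shape
    # Approximate a sample from the top RBM over (h1, h2) by Gibbs sampling.
    h2 = (rng.random(n_top) < 0.5).astype(float)
    for _ in range(gibbs_steps):
        h1 = (rng.random(n) < sigmoid(W_top @ h2 + b1)).astype(float)
        h2 = (rng.random(n_top) < sigmoid(W_top.T @ h1 + b2)).astype(float)
    # Directed pass: each visible unit is Gaussian with mean (W_vis @ h1 + b_vis).
    return rng.normal(W_vis @ h1 + b_vis, sigma)

# Example: m = 2 visible units, n = 3 and n + 1 = 4 binary hidden units.
rng = np.random.default_rng(0)
m, n = 2, 3
v = sample_dbn(W_top=rng.normal(size=(n, n + 1)), b1=np.zeros(n), b2=np.zeros(n + 1),
               W_vis=rng.normal(size=(m, n)), b_vis=np.zeros(m), sigma=0.5, rng=rng)
```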
Original language | English
---|---
Title | Proceedings of the 30th International Conference on Machine Learning
Editors | Sanjoy Dasgupta, David McAllester
Number of pages | 8
Publication date | 2013
Pages | 419-426
Status | Published - 2013
Event | 30th International Conference on Machine Learning - Atlanta, USA. Duration: 16 Jun 2013 → 21 Jun 2013. Conference number: 30
Conference

Conference | 30th International Conference on Machine Learning
---|---
Number | 30
Country/Territory | USA
City | Atlanta
Period | 16/06/2013 → 21/06/2013

Name | JMLR: Workshop and Conference Proceedings
---|---
Volume | 28