Abstract
Restricted Boltzmann machines (RBMs) are probabilistic graphical models that can also be interpreted as stochastic neural networks. Training RBMs is known to be challenging: computing the likelihood of the model parameters or its gradient is in general computationally intensive. Training therefore relies on sampling-based approximations of the log-likelihood gradient.
I will present an empirical and theoretical analysis of the bias of these approximations and show that the approximation error can lead to a distortion of the learning process. The bias decreases with increasing mixing rate of the applied sampling procedure, and I will introduce a transition operator that leads to faster mixing. Finally, a different parametrisation of RBMs will be discussed that leads to better learning results and more robustness against changes in the data representation.
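As a rough illustration of the kind of sampling-based gradient approximation the abstract refers to, the following is a minimal NumPy sketch of k-step Contrastive Divergence (CD-k) for a binary RBM. CD-k is used here only as a representative example of such an approximation; the function and variable names are illustrative and not taken from the thesis.

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def cd_k_gradient(v0, W, b, c, k=1, rng=None):
    """One CD-k estimate of the log-likelihood gradient of a binary RBM.

    v0   : a single training example (shape: n_visible,)
    W    : weight matrix (shape: n_visible x n_hidden)
    b, c : visible and hidden bias vectors
    """
    rng = np.random.default_rng() if rng is None else rng
    # Positive phase: hidden activation probabilities given the data.
    ph0 = sigmoid(c + v0 @ W)
    v, ph = v0, ph0
    # k steps of block Gibbs sampling started at the data point.
    # For finite k the chain has not converged to the model
    # distribution, which is the source of the bias discussed above.
    for _ in range(k):
        h = (rng.random(ph.shape) < ph).astype(float)
        pv = sigmoid(b + h @ W.T)
        v = (rng.random(pv.shape) < pv).astype(float)
        ph = sigmoid(c + v @ W)
    # Gradient estimate: data statistics minus model-sample statistics.
    dW = np.outer(v0, ph0) - np.outer(v, ph)
    db = v0 - v
    dc = ph0 - ph
    return dW, db, dc
```

In stochastic gradient ascent one would add a small multiple of these estimates to the parameters; the bias analysed in the thesis is the gap between the expectation of such an estimate and the true log-likelihood gradient.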
| Original language | English |
|---|---|
| Publisher | Department of Computer Science, Faculty of Science, University of Copenhagen |
| Number of pages | 212 |
| Publication status | Published - 2014 |