Abstract
The resilient backpropagation (Rprop) algorithms are fast and accurate batch learning methods for neural networks. We describe their implementation in the popular machine learning framework TensorFlow. We present the first empirical evaluation of Rprop for training recurrent neural networks with gated recurrent units. In our experiments, Rprop with default hyperparameters outperformed vanilla steepest descent as well as the optimization algorithms RMSprop and Adam, even when their hyperparameters were tuned.
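The abstract refers to the Rprop family of sign-based optimizers. As a rough illustration only (the paper's actual implementation targets TensorFlow and is not reproduced here), the following is a minimal NumPy sketch of an iRprop−-style update; the function name `rprop_step` is hypothetical, and the defaults η⁺ = 1.2, η⁻ = 0.5 follow the common choices in the Rprop literature.

```python
import numpy as np

def rprop_step(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One iRprop−-style update (hypothetical helper, for illustration).

    Per-parameter step sizes grow by eta_plus while the gradient sign is
    stable and shrink by eta_minus when it flips; the update then moves
    against the gradient sign, ignoring the gradient's magnitude.
    """
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # iRprop−: zero the gradient where the sign flipped, so no step is taken there
    grad = np.where(sign_change < 0, 0.0, grad)
    update = -np.sign(grad) * step
    return update, step, grad

# Usage sketch: minimize f(w) = sum(w**2), whose gradient is 2*w.
w = np.array([3.0, -2.0])
step = np.full_like(w, 0.1)
prev_grad = np.zeros_like(w)
for _ in range(100):
    grad = 2 * w
    update, step, prev_grad = rprop_step(grad, prev_grad, step)
    w = w + update
```

Because only the gradient's sign is used, the method is insensitive to gradient scale, which is part of why untuned defaults can work well in batch training.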
Original language | English |
---|---|
Publication date | 2018 |
Number of pages | 5 |
Status | Published - 2018 |
Event | International Conference on Learning Representations: Workshop - Vancouver, Canada. Duration: 30 Apr 2018 → 3 May 2018. Conference number: 6. https://iclr.cc/ |
Workshop
Workshop | International Conference on Learning Representations |
---|---|
Number | 6 |
Country/Territory | Canada |
City | Vancouver |
Period | 30/04/2018 → 03/05/2018 |
Internet address | https://iclr.cc/ |