Abstract
The resilient backpropagation (Rprop) algorithms are fast and accurate batch learning methods for neural networks. We describe their implementation in the popular machine learning framework TensorFlow. We present the first empirical evaluation of Rprop for training recurrent neural networks with gated recurrent units. In our experiments, Rprop with default hyperparameters outperformed vanilla steepest descent as well as the optimization algorithms RMSprop and Adam, even when the hyperparameters of the latter were tuned.
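The paper's own TensorFlow code is not reproduced here; purely as an illustration of the update rule the abstract refers to, below is a minimal NumPy sketch of one iRprop− step (the Rprop variant without weight backtracking), using the commonly cited default hyperparameters η+ = 1.2, η− = 0.5 and step-size bounds [10⁻⁶, 50]. The function name, the array-based interface, and the choice of the iRprop− variant are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def irprop_minus_step(w, grad, prev_grad, step,
                      eta_plus=1.2, eta_minus=0.5,
                      step_min=1e-6, step_max=50.0):
    """One illustrative iRprop- update; all arrays share w's shape.

    step holds the per-weight step sizes (often initialized to 0.1).
    Hyperparameter defaults follow the values commonly cited for Rprop.
    """
    prod = grad * prev_grad
    # Gradient kept its sign since the last step: grow the step size.
    step = np.where(prod > 0.0, np.minimum(step * eta_plus, step_max), step)
    # Sign flip: the last step jumped over a minimum, so shrink ...
    step = np.where(prod < 0.0, np.maximum(step * eta_minus, step_min), step)
    # ... and zero the gradient there, skipping those weights this round.
    grad = np.where(prod < 0.0, 0.0, grad)
    # Only the sign of each partial derivative is used, not its magnitude.
    w = w - np.sign(grad) * step
    return w, step, grad  # returned grad becomes prev_grad next call
```

Because only the sign of each partial derivative enters the update, the rule is invariant to the overall scale of the gradient, which is one intuition for why Rprop tends to work well without hyperparameter tuning.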
Original language | English |
---|---|
Publication date | 2018 |
Number of pages | 5 |
Publication status | Published - 2018 |
Event | International Conference on Learning Representations: Workshop, Vancouver, Canada, 30 Apr 2018 → 3 May 2018, conference number 6, https://iclr.cc/ |
Workshop
Workshop | International Conference on Learning Representations |
---|---|
Number | 6 |
Country/Territory | Canada |
City | Vancouver |
Period | 30/04/2018 → 03/05/2018 |
Internet address | https://iclr.cc/ |