Abstract
Ranking-based Evolution Strategies (ES) are efficient algorithms for problems where gradient information is unavailable or uninformative, which makes ES attractive for reinforcement learning (RL). However, in RL the high dimensionality of the search space and the noise of the simulations make direct application of ES challenging: noise makes ranking points difficult, and a large budget of re-evaluations is needed to keep the error rate bounded. In this work, the ranked weighting is replaced by a linear weighting function, which results in nearly unbiased stochastic gradient descent (SGD) on the manifold of probability distributions. The approach is analysed theoretically and the algorithm is adapted based on the results of the analysis. It is shown that in the limit of infinite dimensions the algorithm becomes invariant to smooth monotone transformations of the objective function. Further, drawing on the theory of SGD, an adaptation of the learning rates based on the noise level is proposed, at the cost of a second evaluation for every sampled point. It is shown empirically that the proposed method improves on a simple ES using cumulative step-size adaptation and ranking, and that it is more noise-resilient than a ranking-based approach.
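The core idea summarised above — replacing rank-based weights with a linear weighting of the raw fitness values, so that the ES update becomes a nearly unbiased stochastic gradient step on a smoothed objective — can be sketched in a few lines. This is a minimal illustrative sketch of a plain isotropic ES with linear (fitness-centered) weighting, not the paper's full algorithm; all function names, parameters, and hyperparameter values below are assumptions for illustration.

```python
import numpy as np

def es_step(f, x, sigma=0.1, lr=0.01, pop=20, rng=None):
    """One step of a simple isotropic ES with linear weighting.

    Instead of rank-based weights, the (centered) fitness values are
    used directly, giving a stochastic estimate of the gradient of the
    Gaussian-smoothed objective E_z[f(x + sigma * z)].
    Illustrative sketch only, not the algorithm from the paper.
    """
    rng = rng if rng is not None else np.random.default_rng()
    z = rng.standard_normal((pop, x.size))          # perturbation directions
    fit = np.array([f(x + sigma * zi) for zi in z])  # fitness of each sample
    w = fit - fit.mean()                             # linear weights; baseline subtraction reduces variance
    grad = (w[:, None] * z).sum(axis=0) / (pop * sigma)
    return x - lr * grad                             # gradient step (minimisation)

# Usage: minimise a simple quadratic.
f = lambda v: float(np.sum(v ** 2))
rng = np.random.default_rng(0)
x = np.ones(5)
for _ in range(300):
    x = es_step(f, x, rng=rng)
```

Subtracting the mean fitness as a baseline does not change the expectation of the gradient estimate but lowers its variance, which is one reason a linear weighting can remain competitive with ranking when the fitness scale is well behaved.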
Original language | English
---|---
Title of host publication | GECCO 2019 - Proceedings of the 2019 Genetic and Evolutionary Computation Conference
Number of pages | 9
Publisher | Association for Computing Machinery
Publication date | 13 Jul 2019
Pages | 682-690
ISBN (Electronic) | 9781450361118
DOIs |
Publication status | Published - 13 Jul 2019
Event | 2019 Genetic and Evolutionary Computation Conference, GECCO 2019 - Prague, Czech Republic. Duration: 13 Jul 2019 → 17 Jul 2019
Conference
Conference | 2019 Genetic and Evolutionary Computation Conference, GECCO 2019
---|---
Country/Territory | Czech Republic
City | Prague
Period | 13/07/2019 → 17/07/2019
Sponsor | ACM SIGEVO
Keywords
- CMA-ES
- Evolution Strategies
- Large-Scale
- Reinforcement-Learning
- Stochastic Optimization