Abstract
Online ranker evaluation focuses on the challenge of efficiently determining, from implicit user feedback, which ranker out of a finite set of rankers is the best. It can be modeled by dueling bandits, a mathematical model for online learning under limited feedback from pairwise comparisons. A pair of rankers is compared by interleaving their result sets and examining which documents users click on. The dueling bandits model addresses the key issue of which pair of rankers to compare at each iteration. Methods for simultaneously comparing more than two rankers have recently been developed, but the question of which rankers to compare at each iteration was left open. We address this question by proposing a generalization of the dueling bandits model that uses simultaneous comparisons of an unrestricted number of rankers. We evaluate our algorithm on standard large-scale online ranker evaluation datasets. Our experiments show that the algorithm yields orders-of-magnitude gains in performance compared to state-of-the-art dueling bandit algorithms.
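To make the multi-dueling interaction loop concrete, the following is a minimal Python sketch under stated assumptions: at each iteration a subset of rankers is compared simultaneously (as a multileaved comparison would allow), the resulting click feedback is reduced to pairwise outcomes, and empirical win counts are updated. The uniform-random subset selection and the simulated preference-based click model are illustrative placeholders, not the algorithm proposed in the paper.

```python
import itertools
import random
from collections import defaultdict

def multileaved_feedback(subset, pref):
    """Simulate click-based feedback on one multileaved comparison as a
    list of pairwise outcomes; pref[(a, b)] is P(ranker a beats b)."""
    outcomes = []
    for a, b in itertools.combinations(subset, 2):
        winner, loser = (a, b) if random.random() < pref[(a, b)] else (b, a)
        outcomes.append((winner, loser))
    return outcomes

def multi_dueling(rankers, pref, horizon=10_000, subset_size=4):
    """Run a multi-dueling loop and return the empirically best ranker."""
    wins = defaultdict(int)  # wins[(a, b)]: how often a beat b
    for _ in range(horizon):
        # Placeholder policy: a real algorithm would pick the subset
        # adaptively (e.g. from confidence bounds), not at random.
        subset = random.sample(rankers, subset_size)
        for winner, loser in multileaved_feedback(subset, pref):
            wins[(winner, loser)] += 1
    def beats(a, b):
        n = wins[(a, b)] + wins[(b, a)]
        return n > 0 and wins[(a, b)] / n > 0.5
    # Copeland-style winner: beats the most other rankers head to head.
    return max(rankers, key=lambda a: sum(beats(a, b) for b in rankers if b != a))

if __name__ == "__main__":
    rankers = list(range(6))
    # Hypothetical preference matrix: lower-indexed rankers are better.
    pref = {(a, b): 0.5 + 0.05 * (b - a)
            for a in rankers for b in rankers if a != b}
    print("estimated best ranker:", multi_dueling(rankers, pref))
```

Comparing `subset_size` rankers per iteration yields feedback on all pairs within the subset at once, which is the source of the efficiency gain over one-pair-at-a-time dueling; the open question the paper addresses is how to choose that subset at each iteration.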
Original language | English |
---|---|
Title of host publication | Proceedings of the 25th ACM International Conference on Information and Knowledge Management |
Number of pages | 6 |
Publisher | Association for Computing Machinery |
Publication date | 24 Oct 2016 |
Pages | 2161-2166 |
ISBN (Electronic) | 978-1-4503-4073-1 |
DOIs | |
Publication status | Published - 24 Oct 2016 |
Event | 25th ACM International Conference on Information and Knowledge Management, Indianapolis, United States. Duration: 24 Oct 2016 → 28 Oct 2016. Conference number: 25 |
Conference
Conference | 25th ACM International Conference on Information and Knowledge Management |
---|---|
Number | 25 |
Country/Territory | United States |
City | Indianapolis |
Period | 24/10/2016 → 28/10/2016 |
Series | ACM International Conference on Information and Knowledge Management |