Abstract
Online ranker evaluation is a key challenge in information retrieval. An important task in the online evaluation of rankers is using implicit user feedback to infer preferences between rankers. Interleaving methods have been found to be efficient and sensitive, i.e., they can quickly detect even small differences in quality. It has recently been shown that multileaving methods exhibit similar sensitivity but can be more efficient than interleaving methods. This paper presents empirical results demonstrating that existing multileaving methods either do not scale well with the number of rankers or, more problematically, can produce results that substantially differ from evaluation measures such as NDCG. The latter problem arises because these methods do not correctly account for the similarities that can occur between the rankers being multileaved. We propose a new multileaving method that handles this problem and demonstrate that it substantially outperforms existing methods, in some cases reducing errors by as much as 50%.
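For context on the setup the abstract refers to: interleaving merges the result lists of two rankers into a single list shown to the user, and clicks on the documents each ranker contributed act as implicit preference votes; multileaving extends this to many rankers in one shown list. The sketch below is a minimal, generic illustration of a team-draft-style multileave with click credit, under assumed names and toy data (`team_draft_multileave`, `credit`, document IDs `d1`–`d4`); it is not the new method proposed in this paper.

```python
import random

def team_draft_multileave(rankings, length):
    """Merge several rankings into one list of at most `length` documents.

    Rankers take turns in a random order each round; on its turn a ranker
    contributes its highest-ranked document not yet in the multileaved list,
    and that document is credited to the ranker's "team".
    """
    multileaved = []
    teams = {i: set() for i in range(len(rankings))}
    while len(multileaved) < length:
        contributed = False
        order = list(range(len(rankings)))
        random.shuffle(order)  # random turn order each round
        for i in order:
            if len(multileaved) >= length:
                break
            # first document of ranker i not already shown
            doc = next((d for d in rankings[i] if d not in multileaved), None)
            if doc is not None:
                multileaved.append(doc)
                teams[i].add(doc)
                contributed = True
        if not contributed:  # all rankings exhausted before reaching `length`
            break
    return multileaved, teams

def credit(teams, clicked_docs):
    """Credit each ranker with clicks on documents its team contributed."""
    return {i: len(docs & clicked_docs) for i, docs in teams.items()}

# Toy example: three rankers over four documents; the user clicks d1 and d3.
rankings = [
    ["d1", "d2", "d3", "d4"],
    ["d2", "d1", "d4", "d3"],
    ["d3", "d4", "d1", "d2"],
]
multileaved, teams = team_draft_multileave(rankings, length=4)
print(credit(teams, clicked_docs={"d1", "d3"}))
```

In a scheme like this, rankers that produce near-identical lists compete for credit on the same documents, which is one way to see why the similarities between multileaved rankers that the abstract points to can distort the inferred preferences.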
Original language | English
---|---
Title of host publication | Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval: SIGIR '16
Number of pages | 4
Publisher | Association for Computing Machinery
Publication date | 7 Jul 2016
Pages | 745-748
ISBN (Print) | 978-1-4503-4069-4
DOIs |
Publication status | Published - 7 Jul 2016
Event | International ACM SIGIR conference on Research and Development in Information Retrieval 2016: SIGIR '16, Pisa, Italy, 17 Jul 2016 → 21 Jul 2016 (conference number 39), http://sigir.org/sigir2016/
Conference
Conference | International ACM SIGIR conference on Research and Development in Information Retrieval 2016
---|---
Number | 39
Country/Territory | Italy
City | Pisa
Period | 17/07/2016 → 21/07/2016
Internet address | http://sigir.org/sigir2016/