Multileaving for online evaluation of rankers

Brian Brost*

*Corresponding author for this work

Abstract

In online learning to rank we face a tradeoff between exploring new, potentially superior rankers and exploiting our existing knowledge of which rankers have performed well in the past. Multileaving methods offer an attractive approach to this problem, since they can efficiently use online feedback to evaluate a potentially arbitrary number of rankers simultaneously. In this talk we discuss some of the main challenges in multileaving and highlight promising areas for future research.
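The abstract does not name a specific multileaving algorithm; purely as an illustration of the idea, the sketch below implements team-draft multileaving, one common instance of the family. The function names (`team_draft_multileave`, `credit_from_clicks`) are hypothetical and chosen for this example only.

```python
import random

def team_draft_multileave(rankings, k):
    """Combine the ranked lists of several rankers into one result list.

    rankings: list of ranked document lists, one per ranker.
    k: length of the multileaved list to show the user.
    Returns (multileaved, teams), where teams maps each shown document
    to the index of the ranker that contributed it.
    """
    multileaved, teams, used = [], {}, set()
    while len(multileaved) < k:
        contributed = False
        # Each round, the rankers pick in a freshly randomized order.
        for r in random.sample(range(len(rankings)), len(rankings)):
            # The ranker contributes its highest-ranked document not yet shown.
            doc = next((d for d in rankings[r] if d not in used), None)
            if doc is not None:
                multileaved.append(doc)
                teams[doc] = r
                used.add(doc)
                contributed = True
            if len(multileaved) == k:
                break
        if not contributed:
            break  # every ranking is exhausted
    return multileaved, teams

def credit_from_clicks(clicks, teams, n_rankers):
    """Credit each ranker with the clicks on documents it contributed."""
    credit = [0] * n_rankers
    for doc in clicks:
        if doc in teams:
            credit[teams[doc]] += 1
    return credit

# Example: three rankers compared with the feedback from a single query.
rankings = [["a", "b", "c", "d"], ["b", "d", "a", "c"], ["c", "a", "d", "b"]]
shown, teams = team_draft_multileave(rankings, k=4)
print(credit_from_clicks(clicks=["b"], teams=teams, n_rankers=3))
```

Because every shown document is attributed to the ranker that contributed it, clicks on a single result list yield relative feedback on all participating rankers at once, which is what makes multileaving more efficient than pairwise interleaving when many rankers must be compared.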

Original language: English
Title of host publication: Proceedings of the 1st International Workshop on LEARning Next gEneration Rankers co-located with the 3rd ACM International Conference on the Theory of Information Retrieval (ICTIR 2017)
Editors: Nicola Ferro, Claudio Lucchese, Maria Maistro, Raffaele Perego
Number of pages: 2
Publisher: CEUR-WS.org
Publication date: 2017
Publication status: Published - 2017
Event: 1st International Workshop on LEARning Next gEneration Rankers - Amsterdam, Netherlands
Duration: 1 Oct 2017 – 1 Oct 2017
Conference number: 1

Workshop

Workshop: 1st International Workshop on LEARning Next gEneration Rankers
Number: 1
Country/Territory: Netherlands
City: Amsterdam
Period: 01/10/2017 – 01/10/2017
Series: CEUR Workshop Proceedings
Volume: 2007
ISSN: 1613-0073
