Selecting a subset of queries for acquisition of further relevance judgements

Mehdi Hosseini, Ingemar J Cox, Natasa Milic-Frayling, Vishwa Vinay, Trevor Sweeting

12 Citations (Scopus)

Abstract

Assessing the relative performance of search systems requires a test collection with a pre-defined set of queries and corresponding relevance judgments. The state-of-the-art process for constructing test collections involves using a large number of queries and, for each query, selecting a set of documents, submitted by a group of participating systems, to be judged. However, the initial set of judgments may be insufficient to reliably evaluate the performance of future, as yet unseen systems. In this paper, we propose a method that expands the set of relevance judgments as new systems are evaluated. We assume that there is a limited budget for acquiring additional relevance judgments. From the documents retrieved by the new systems we create a pool of unjudged documents. Rather than uniformly distributing the budget across all queries, we first select a subset of queries that are effective in evaluating systems and then uniformly allocate the budget across only these queries. Experimental results on the TREC 2004 Robust track test collection demonstrate the superiority of this budget allocation strategy over uniform allocation across all queries.
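To make the allocation strategy concrete, the following is a minimal Python sketch of the two-step procedure the abstract describes: pool unjudged documents from the new systems' runs, select a subset of queries, and split the judging budget uniformly across that subset. The query-informativeness score used here (size of the unjudged pool) is a hypothetical placeholder, as the paper's actual selection criterion is not stated in this abstract; the data layout (runs as per-query ranked lists) is likewise illustrative.

```python
from collections import defaultdict

def pool_unjudged(runs, judged, depth=100):
    """Union of the top-`depth` documents retrieved by the new systems
    for each query, minus documents that already have judgments."""
    pool = defaultdict(set)
    for run in runs:                      # each run: {query_id: [doc_id, ...]}
        for qid, ranking in run.items():
            pool[qid].update(ranking[:depth])
    return {qid: docs - judged.get(qid, set()) for qid, docs in pool.items()}

def query_score(qid, pool):
    """Placeholder informativeness score: here simply the number of
    distinct unjudged pooled documents. The paper's real criterion
    (queries effective at discriminating systems) would replace this."""
    return len(pool[qid])

def allocate(runs, judged, budget, k):
    """Select the k highest-scoring queries and divide the judging
    budget uniformly among them; returns {query_id: docs_to_judge}."""
    pool = pool_unjudged(runs, judged)
    selected = sorted(pool, key=lambda q: query_score(q, pool),
                      reverse=True)[:k]
    per_query = budget // max(len(selected), 1)
    return {qid: sorted(pool[qid])[:per_query] for qid in selected}

if __name__ == "__main__":
    runs = [{"q1": ["d1", "d2"], "q2": ["d3"]},
            {"q1": ["d2", "d4"], "q2": ["d3", "d5"]}]
    judged = {"q1": {"d1"}}
    print(allocate(runs, judged, budget=4, k=1))
```

In this sketch, documents within a selected query's pool are taken in arbitrary (sorted) order; in practice one would prioritize them, for example by pooling rank or by how often the new systems retrieve them.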

Original language: Undefined/Unknown
Title of host publication: Advances in Information Retrieval Theory
Number of pages: 12
Publisher: Springer Science+Business Media
Publication date: 2011
Pages: 113-124
Status: Published - 2011
Published externally: Yes
