Abstract
We propose the Assessor-driven Weighted Averages for Retrieval Evaluation (AWARE) probabilistic framework, a novel methodology for dealing with multiple crowd assessors that may be contradictory and/or noisy. By modeling relevance judgements and crowd assessors as sources of uncertainty, AWARE takes the expectation of a generic performance measure, such as Average Precision, composed with these random variables. In this way, it approaches the problem of aggregating different crowd assessors from a new perspective: it directly combines the performance measures computed on the ground truths generated by the crowd assessors, instead of adopting a classification technique to merge the labels they produce. We propose several unsupervised estimators that instantiate the AWARE framework and compare them with state-of-the-art approaches, namely Majority Vote and Expectation Maximization, on TREC collections. We find that the AWARE approaches improve on these baselines both in their capability of correctly ranking systems and in predicting the systems' actual performance scores.
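To make the combination step concrete, the following is a minimal sketch in Python of the idea described above, under illustrative assumptions: `average_precision` and `aware_score` are hypothetical helper names, relevance is binary, and the weights default to uniform (a plain expectation over assessors), whereas the paper's unsupervised estimators would supply assessor-specific weights instead.

```python
# Sketch of the AWARE idea: instead of merging crowd labels into a single
# ground truth, compute the measure (here AP) against each assessor's
# judgements separately and take a weighted average of the resulting scores.

def average_precision(ranking, qrels):
    """AP of a ranked list against one assessor's binary judgements."""
    hits, precision_sum = 0, 0.0
    for rank, doc in enumerate(ranking, start=1):
        if qrels.get(doc, 0):
            hits += 1
            precision_sum += hits / rank
    total_relevant = sum(qrels.values())
    return precision_sum / total_relevant if total_relevant else 0.0

def aware_score(ranking, assessor_qrels, weights=None):
    """Weighted average of AP over per-assessor ground truths.

    Uniform weights correspond to a plain expectation over assessors;
    AWARE's unsupervised estimators would replace them with weights
    reflecting each assessor's estimated reliability.
    """
    scores = [average_precision(ranking, qrels) for qrels in assessor_qrels]
    if weights is None:
        weights = [1.0 / len(scores)] * len(scores)
    return sum(w * s for w, s in zip(weights, scores))

# Example: one system run scored against two (disagreeing) crowd assessors.
run = ["d3", "d1", "d7", "d2"]
assessors = [
    {"d1": 1, "d3": 1},            # assessor A
    {"d3": 1, "d2": 1, "d7": 0},   # assessor B
]
print(aware_score(run, assessors))  # 0.875: expectation of AP over assessors
```

Note that the per-assessor scores are combined after the measure is computed, so disagreement between assessors shows up as variance in the scores rather than being resolved at the label level, which is precisely the shift in perspective the abstract describes.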
Original language | English |
---|---|
Article number | 20 |
Journal | ACM Transactions on Information Systems |
Volume | 36 |
Issue number | 2 |
ISSN | 1046-8188 |
DOI | |
Status | Published - 1 Aug 2017 |
Published externally | Yes |