Abstract
The creation of a labelled dataset for Information Retrieval (IR) purposes is a costly process. For this reason, a mix of crowdsourcing and active learning approaches has been proposed in the literature to assess the relevance of the documents in a collection for a given query at an affordable cost. In this paper, we present the design of a gamified version of this interactive process, drawing inspiration from recent work on gamification for IR. In particular, we focus on three main points: i) creating a set of relevance judgements with the least possible effort from human assessors, ii) employing interactive search interfaces built on game mechanics, and iii) using Natural Language Processing (NLP) to collect different aspects of a query.
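As an illustration of the kind of active learning loop the abstract alludes to, the sketch below selects, at each round, the document a simple relevance classifier is least certain about and asks a human assessor to judge it, so that each judgement carries as much information as possible. This is a minimal sketch under stated assumptions, not the procedure proposed in the paper: the document pool, the `ask_assessor` helper and `N_ROUNDS` are hypothetical placeholders.

```python
# Illustrative sketch only (not the method from the paper): a minimal
# uncertainty-sampling active-learning loop for collecting relevance
# judgements. The pool, ask_assessor() and N_ROUNDS are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

pool = [
    "information retrieval evaluation with test collections",
    "gamification of search interfaces",
    "active learning for text classification",
    "deep learning for image recognition",
    "crowdsourcing relevance judgements at low cost",
]
X = TfidfVectorizer().fit_transform(pool)

# Seed judgements from a human assessor: 1 = relevant, 0 = not relevant.
labeled_idx, labels = [0, 3], [1, 0]

def ask_assessor(doc_id: int) -> int:
    """Placeholder for the human (possibly gamified) judgement step."""
    return int(input(f"Is document {doc_id} relevant to the query? (0/1) "))

N_ROUNDS = 3
for _ in range(N_ROUNDS):
    unlabeled = [i for i in range(len(pool)) if i not in labeled_idx]
    if not unlabeled:
        break
    # Train on the judgements collected so far.
    clf = LogisticRegression().fit(X[labeled_idx], labels)
    probs = clf.predict_proba(X)[:, 1]
    # Query the document the model is least certain about (probability
    # closest to 0.5), then record the assessor's judgement.
    query_id = min(unlabeled, key=lambda i: abs(probs[i] - 0.5))
    labels.append(ask_assessor(query_id))
    labeled_idx.append(query_id)
```

In practice the `ask_assessor` step would be replaced by the gamified search interface, and the selection criterion could be any active learning strategy rather than plain uncertainty sampling.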
Original language | English |
---|---|
Journal | CEUR Workshop Proceedings |
Volume | 1749 |
ISSN | 1613-0073 |
Publication status | Published - 1 Jan 2016 |
Externally published | Yes |
Event | 3rd Italian Conference on Computational Linguistics, CLiC-it 2016 and 5th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian, EVALITA 2016 - Napoli, Italy; Duration: 5 Dec 2016 → 7 Dec 2016 |
Conference
Conference | 3rd Italian Conference on Computational Linguistics, CLiC-it 2016 and 5th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian, EVALITA 2016 |
---|---|
Country/Territory | Italy |
City | Napoli |
Period | 05/12/2016 → 07/12/2016 |