Abstract
What if Information Retrieval (IR) systems did not just retrieve relevant information that is stored in their indices, but could also "understand" it and synthesise it into a single document? We present a preliminary study that makes a first step towards answering this question.
Given a query, we train a Recurrent Neural Network (RNN) on existing information relevant to that query. We then use the RNN to "deep learn" a single, synthetic and, we assume, relevant document for that query. We design a crowdsourcing experiment to assess how relevant the "deep learned" document is, compared to existing relevant documents. Users are shown a query and four wordclouds (of three existing relevant documents and our deep learned synthetic document). On average, the synthetic document is ranked the most relevant of all.
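The record does not include the study's training setup, so as a rough illustration of the idea only, the sketch below trains a tiny character-level vanilla RNN (NumPy only) on toy stand-ins for "relevant documents" and then samples a synthetic document from it. The corpus, hyperparameters, and seed character are all invented here; the actual study trained on real relevant documents for a query.

```python
import numpy as np

np.random.seed(0)

# Toy stand-ins for existing relevant information for some query;
# these three "documents" are invented for illustration.
docs = [
    "information retrieval systems rank relevant documents ",
    "relevant documents answer the information need of the query ",
    "a neural network can learn to generate query relevant text ",
]
data = "".join(docs)
chars = sorted(set(data))
ix = {c: i for i, c in enumerate(chars)}
V, H, T, lr = len(chars), 32, 8, 0.1  # vocab, hidden size, BPTT length, step size

# Model parameters: input-to-hidden, hidden-to-hidden, hidden-to-output, biases.
Wxh = np.random.randn(H, V) * 0.01
Whh = np.random.randn(H, H) * 0.01
Why = np.random.randn(V, H) * 0.01
bh, by = np.zeros((H, 1)), np.zeros((V, 1))
params = [Wxh, Whh, Why, bh, by]
mems = [np.zeros_like(p) for p in params]  # Adagrad accumulators

def loss_and_grads(inputs, targets, hprev):
    """One forward/backward pass over a short character sequence."""
    xs, hs, ps = {}, {-1: hprev}, {}
    loss = 0.0
    for t, (i, tgt) in enumerate(zip(inputs, targets)):
        xs[t] = np.zeros((V, 1)); xs[t][i] = 1            # one-hot input
        hs[t] = np.tanh(Wxh @ xs[t] + Whh @ hs[t - 1] + bh)
        y = Why @ hs[t] + by
        ps[t] = np.exp(y - y.max()); ps[t] /= ps[t].sum()  # softmax
        loss -= np.log(ps[t][tgt, 0])
    grads = [np.zeros_like(p) for p in params]
    dWxh, dWhh, dWhy, dbh, dby = grads
    dhnext = np.zeros((H, 1))
    for t in reversed(range(len(inputs))):
        dy = ps[t].copy(); dy[targets[t]] -= 1
        dWhy += dy @ hs[t].T; dby += dy
        dtanh = (1 - hs[t] ** 2) * (Why.T @ dy + dhnext)
        dbh += dtanh; dWxh += dtanh @ xs[t].T; dWhh += dtanh @ hs[t - 1].T
        dhnext = Whh.T @ dtanh
    for g in grads:
        np.clip(g, -5, 5, out=g)  # crude exploding-gradient guard
    return loss, grads, hs[len(inputs) - 1]

h = np.zeros((H, 1))
for epoch in range(150):
    for p0 in range(0, len(data) - T - 1, T):
        inputs = [ix[c] for c in data[p0:p0 + T]]
        targets = [ix[c] for c in data[p0 + 1:p0 + T + 1]]
        loss, grads, h = loss_and_grads(inputs, targets, h)
        for p, g, m in zip(params, grads, mems):
            m += g * g
            p -= lr * g / np.sqrt(m + 1e-8)  # Adagrad update

def synthesize(n_chars=60, seed_char="r"):
    """Sample a synthetic 'document' one character at a time."""
    h, i, out = np.zeros((H, 1)), ix[seed_char], []
    for _ in range(n_chars):
        x = np.zeros((V, 1)); x[i] = 1
        h = np.tanh(Wxh @ x + Whh @ h + bh)
        y = Why @ h + by
        p = np.exp(y - y.max()); p /= p.sum()
        i = int(np.random.choice(V, p=p.ravel()))
        out.append(chars[i])
    return "".join(out)

synthetic_doc = synthesize()
print(synthetic_doc)
```

On this toy corpus the sampled text only loosely echoes the training vocabulary; the study's crowdsourcing evaluation (wordcloud comparison) sidesteps grammaticality and judges topical relevance instead.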
| Original language | English |
|---|---|
| Publication date | 2016 |
| Number of pages | 6 |
| Publication status | Published - 2016 |
| Event | SIGIR 2016 Workshop on Neural Information Retrieval (Neu-IR) - Pisa, Italy. Duration: 21 Jul 2016 → 21 Jul 2016. Conference number: 1 |
Conference
| Conference | SIGIR 2016 Workshop on Neural Information Retrieval (Neu-IR) |
|---|---|
| Number | 1 |
| Country/Territory | Italy |
| City | Pisa |
| Period | 21/07/2016 → 21/07/2016 |