Abstract
What if Information Retrieval (IR) systems did not just retrieve relevant information that is stored in their indices, but could also "understand" it and synthesise it into a single document? We present a preliminary study that makes a first step towards answering this question.
Given a query, we train a Recurrent Neural Network (RNN) on existing information relevant to that query. We then use the RNN to "deep learn" a single, synthetic, and, we assume, relevant document for that query. We design a crowdsourcing experiment to assess how relevant the "deep learned" document is compared to existing relevant documents. Users are shown a query and four wordclouds (of three existing relevant documents and of our deep learned synthetic document). On average, the synthetic document is ranked the most relevant of all.
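The paper itself does not include code; the following is a minimal illustrative sketch of the kind of pipeline the abstract describes, assuming a character-level LSTM language model in PyTorch. All names, hyperparameters, and modelling choices here (character-level modelling, `CharRNN`, `train_and_generate`, epoch counts, sampling length) are assumptions made for illustration, not details taken from the paper.

```python
# Hypothetical sketch: train a character-level RNN language model on the pooled
# text of documents known to be relevant to one query, then sample a synthetic
# "deep learned" document from it. Illustrative only; not the authors' code.
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    def __init__(self, vocab_size, hidden_size=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.out(h), state

def train_and_generate(relevant_docs, n_epochs=20, gen_len=1000):
    # Pool all known-relevant documents for the query into one training corpus.
    text = "\n".join(relevant_docs)
    chars = sorted(set(text))
    stoi = {c: i for i, c in enumerate(chars)}
    itos = {i: c for c, i in stoi.items()}
    data = torch.tensor([stoi[c] for c in text]).unsqueeze(0)  # shape (1, T)

    model = CharRNN(len(chars))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Next-character prediction over the whole corpus (no batching, for brevity).
    for _ in range(n_epochs):
        logits, _ = model(data[:, :-1])
        loss = loss_fn(logits.reshape(-1, len(chars)), data[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Sample the synthetic document one character at a time.
    model.eval()
    idx = data[:, :1]
    state, out = None, []
    with torch.no_grad():
        for _ in range(gen_len):
            logits, state = model(idx, state)
            probs = torch.softmax(logits[:, -1], dim=-1)
            idx = torch.multinomial(probs, 1)
            out.append(itos[idx.item()])
    return "".join(out)
```

Under these assumptions, the generated string would then be rendered as a wordcloud alongside wordclouds of the existing relevant documents for the crowdsourced relevance comparison described above.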
Original language | English |
---|---|
Publication date | 2016 |
Number of pages | 6 |
Status | Published - 2016 |
Event | SIGIR 2016 Workshop on Neural Information Retrieval (Neu-IR) - Pisa, Italy. Duration: 21 Jul 2016 → 21 Jul 2016. Conference number: 1 |

Conference

Conference | SIGIR 2016 Workshop on Neural Information Retrieval (Neu-IR) |
---|---|
Number | 1 |
Country/Territory | Italy |
City | Pisa |
Period | 21/07/2016 → 21/07/2016 |