Abstract
Sentiment analysis models often use ratings as labels, assuming that these ratings reflect the sentiment of the accompanying text. We investigate (i) whether human readers can infer ratings from review text, (ii) how human performance compares to a regression model, and (iii) whether model performance is affected by the rating "source" (i.e., original author vs. annotator). We collect IMDb movie reviews with author-provided ratings, and have them re-annotated by crowdsourced and trained annotators. Annotators reproduce the original ratings better than a model, but are still far off in more than 5% of the cases. Models trained on annotator labels outperform those trained on author labels, calling into question the usefulness of author-rated reviews as training data for sentiment analysis.
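The comparison the abstract describes (predicting a numeric rating from review text, trained once on author-provided labels and once on annotator labels) can be sketched as below. This is a minimal illustrative regressor using scikit-learn, not the paper's actual model or data splits; the file names and column names are hypothetical.

```python
# Minimal sketch: compare a rating regressor trained on author labels
# vs. annotator labels. Assumes scikit-learn and pandas are installed,
# and two hypothetical CSV files with "text" and "rating" columns.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical files: same reviews, labeled by different rating sources.
author = pd.read_csv("reviews_author_rated.csv")
annotator = pd.read_csv("reviews_annotator_rated.csv")

def evaluate(df):
    """Train a bag-of-words ridge regressor on one label source, report MAE."""
    X_train, X_test, y_train, y_test = train_test_split(
        df["text"], df["rating"], test_size=0.2, random_state=0)
    vec = TfidfVectorizer(min_df=2)
    model = Ridge()
    model.fit(vec.fit_transform(X_train), y_train)
    preds = model.predict(vec.transform(X_test))
    return mean_absolute_error(y_test, preds)

print("MAE, author labels:   ", evaluate(author))
print("MAE, annotator labels:", evaluate(annotator))
```

A lower mean absolute error on the annotator-labeled copy would mirror the paper's finding that annotator labels make better training data than author-provided ratings.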
| Original language | English |
|---|---|
| Title | 2015 Conference on Empirical Methods in Natural Language Processing |
| Number of pages | 6 |
| Place of publication | Lisbon, Portugal |
| Publisher | Association for Computational Linguistics |
| Publication date | 2015 |
| Pages | 2527-2532 |
| ISBN (Print) | 978-1-941643-32-7 |
| Status | Published - 2015 |