Abstract
Learning attention functions requires large volumes of data, but many NLP tasks simulate human behavior, and in this paper, we show that human attention really does provide a good inductive bias on many attention functions in NLP. Specifically, we use estimated human attention derived from eye-tracking corpora to regularize attention functions in recurrent neural networks. We show substantial improvements across a range of tasks, including sentiment analysis, grammatical error detection, and detection of abusive language.
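The core idea in the abstract, nudging a network's attention weights toward a human attention distribution estimated from eye-tracking data, can be sketched as a simple auxiliary loss term. This is a minimal illustration under assumptions, not the paper's implementation: the token count, fixation durations, placeholder task loss, weight `lam`, and the mean-squared-error penalty are all invented here for demonstration.

```python
import math

def softmax(logits):
    """Normalize a list of logits into an attention distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical model attention logits over 5 tokens, and made-up
# per-token fixation durations (ms) standing in for eye-tracking data.
model_logits = [0.2, 1.5, 0.3, 2.0, 0.1]
fixations = [120.0, 480.0, 90.0, 510.0, 60.0]

model_attn = softmax(model_logits)
human_attn = [f / sum(fixations) for f in fixations]  # normalize to a distribution

# Attention regularizer: mean squared distance between the model's
# attention distribution and the estimated human one.
attn_reg = sum((m - h) ** 2 for m, h in zip(model_attn, human_attn)) / len(model_attn)

task_loss = 0.7   # placeholder for the task loss (e.g. cross-entropy)
lam = 0.1         # regularization weight (hypothetical)
total_loss = task_loss + lam * attn_reg
```

During training, minimizing `total_loss` would pull the model's attention toward the human fixation pattern while still optimizing the main task objective.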
Original language | English |
---|---|
Title of host publication | Proceedings of the 22nd Conference on Computational Natural Language Learning (CoNLL 2018) |
Editors | Anna Korhonen, Ivan Titov |
Publisher | Association for Computational Linguistics |
Publication date | 2018 |
Pages | 302–312 |
ISBN (Print) | 978-1-948087-72-8 |
Publication status | Published - 2018 |
Event | 22nd Conference on Computational Natural Language Learning (CoNLL 2018), Brussels, Belgium, 31 Oct 2018 → 1 Nov 2018 |
Conference
Conference | 22nd Conference on Computational Natural Language Learning (CoNLL 2018) |
---|---|
Country/Territory | Belgium |
City | Brussels |
Period | 31/10/2018 → 01/11/2018 |