Abstract
Learning attention functions requires large volumes of data, but many NLP tasks simulate human behavior, and in this paper, we show that human attention really does provide a good inductive bias on many attention functions in NLP. Specifically, we use estimated human attention derived from eye-tracking corpora to regularize attention functions in recurrent neural networks. We show substantial improvements across a range of tasks, including sentiment analysis, grammatical error detection, and detection of abusive language.
| Original language | English |
|---|---|
| Title | Proceedings of the 22nd Conference on Computational Natural Language Learning (CoNLL 2018) |
| Editors | Anna Korhonen, Ivan Titov |
| Publisher | Association for Computational Linguistics |
| Publication date | 2018 |
| Pages | 302–312 |
| ISBN (Print) | 978-1-948087-72-8 |
| Status | Published - 2018 |
| Event | 22nd Conference on Computational Natural Language Learning (CoNLL 2018) - Brussels, Belgium. Duration: 31 Oct 2018 → 1 Nov 2018 |
Conference

| Conference | 22nd Conference on Computational Natural Language Learning (CoNLL 2018) |
|---|---|
| Country/Territory | Belgium |
| City | Brussels |
| Period | 31/10/2018 → 01/11/2018 |