Sequence classification with human attention

Maria Jung Barrett, Joachim Bingel, Nora Hollenstein, Marek Rei, Anders Søgaard


Abstract

Learning attention functions requires large volumes of data, but many NLP tasks simulate human behavior, and in this paper, we show that human attention really does provide a good inductive bias on many attention functions in NLP. Specifically, we use estimated human attention derived from eye-tracking corpora to regularize attention functions in recurrent neural networks. We show substantial improvements across a range of tasks, including sentiment analysis, grammatical error detection, and detection of abusive language.
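
The sketch below illustrates the general idea the abstract describes: training a recurrent classifier whose attention weights are pulled toward a human-attention target (e.g., normalized fixation durations from an eye-tracking corpus). This is not the authors' code; the model shape, the MSE penalty, and names such as `human_attn` and `lam` are illustrative assumptions, and the paper's exact formulation may differ.

```python
# Minimal sketch: a BiLSTM classifier with scalar token attention,
# regularized toward a human-attention distribution (hypothetical setup).
import torch
import torch.nn as nn

class AttentiveClassifier(nn.Module):
    def __init__(self, emb_dim=100, hidden=64, n_classes=2):
        super().__init__()
        self.rnn = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn_score = nn.Linear(2 * hidden, 1)   # one score per token
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                            # x: (batch, seq, emb_dim)
        h, _ = self.rnn(x)                           # (batch, seq, 2*hidden)
        a = torch.softmax(self.attn_score(h).squeeze(-1), dim=1)  # (batch, seq)
        ctx = torch.bmm(a.unsqueeze(1), h).squeeze(1)             # weighted sum
        return self.out(ctx), a

model = AttentiveClassifier()
x = torch.randn(8, 20, 100)                          # dummy token embeddings
y = torch.randint(0, 2, (8,))                        # dummy task labels
human_attn = torch.softmax(torch.randn(8, 20), dim=1)  # eye-tracking proxy

logits, attn = model(x)
task_loss = nn.functional.cross_entropy(logits, y)
# Joint objective: task loss plus a penalty pulling learned attention
# toward the human-attention target (MSE chosen here for illustration).
lam = 0.1
loss = task_loss + lam * nn.functional.mse_loss(attn, human_attn)
loss.backward()
```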
Original language: English
Title of host publication: Proceedings of the 22nd Conference on Computational Natural Language Learning (CoNLL 2018)
Editors: Anna Korhonen, Ivan Titov
Publisher: Association for Computational Linguistics
Publication date: 2018
Pages: 302–312
ISBN (Print): 978-1-948087-72-8
Publication status: Published - 2018
Event: 22nd Conference on Computational Natural Language Learning (CoNLL 2018) - Brussels, Belgium
Duration: 31 Oct 2018 – 1 Nov 2018

Conference

Conference: 22nd Conference on Computational Natural Language Learning (CoNLL 2018)
Country/Territory: Belgium
City: Brussels
Period: 31/10/2018 – 01/11/2018
