Multi-view and multi-task training of RST discourse parsers

Chloé Elodie Braud, Barbara Plank, Anders Søgaard


Abstract

We experiment with different ways of training LSTM networks to predict RST discourse trees. The main challenge for RST discourse parsing is the limited amount of training data. We combat this by regularizing our models with task supervision from related tasks, as well as with alternative views on discourse structures. We show that a simple sequential LSTM discourse parser benefits from this multi-view and multi-task framework, with 12-15% error reductions over our baseline (depending on the metric) and results that rival more complex state-of-the-art parsers.
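
The abstract describes regularizing a sequential LSTM parser with supervision from auxiliary tasks. As a rough illustration of that general idea, the sketch below shows hard parameter sharing in PyTorch: one BiLSTM encoder shared across tasks, with a separate classifier head per task. This is a minimal hypothetical sketch, not the authors' implementation; the class name, task names, vocabulary and label sizes are all assumptions made for the example.

# Hypothetical sketch of hard parameter sharing for multi-task
# sequence labelling: a shared BiLSTM encoder with one output head
# per task. All sizes and task names below are illustrative, not
# taken from the paper.
import torch
import torch.nn as nn

class SharedBiLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim, task_label_sizes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # One linear head per task; only the encoder is shared.
        self.heads = nn.ModuleDict({
            task: nn.Linear(2 * hidden_dim, n_labels)
            for task, n_labels in task_label_sizes.items()
        })

    def forward(self, tokens, task):
        states, _ = self.encoder(self.embed(tokens))
        return self.heads[task](states)  # (batch, seq_len, n_labels)

# Toy training loop: alternate batches between a main task and an
# auxiliary task, using random tensors as stand-in data.
model = SharedBiLSTM(vocab_size=5000, emb_dim=64, hidden_dim=128,
                     task_label_sizes={"main": 40, "aux": 10})
opt = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()
for task, n_labels in [("main", 40), ("aux", 10)]:
    tokens = torch.randint(0, 5000, (8, 20))    # dummy token batch
    gold = torch.randint(0, n_labels, (8, 20))  # dummy gold labels
    logits = model(tokens, task)
    loss = loss_fn(logits.reshape(-1, n_labels), gold.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

The point of this setup is that gradients from every task update the shared encoder, so the auxiliary supervision acts as a regularizer on the representations used for the main task.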

Original language: English
Title: The 26th International Conference on Computational Linguistics: Proceedings of COLING 2016: Technical Papers
Number of pages: 11
Publication date: 2016
Pages: 1903-1913
ISBN (electronic): 978-4-87974-702-0
Status: Published - 2016
Event: The 26th International Conference on Computational Linguistics - Osaka, Japan
Duration: 11 Dec 2016 - 16 Dec 2016
Conference number: 26

