Multi-view and multi-task training of RST discourse parsers

Chloé Elodie Braud, Barbara Plank, Anders Søgaard


Abstract

We experiment with different ways of training LSTM networks to predict RST discourse trees. The main challenge for RST discourse parsing is the limited amount of training data. We combat this by regularizing our models with task supervision from related tasks as well as alternative views on discourse structures. We show that a simple sequential LSTM discourse parser benefits from this multi-view and multi-task framework, with 12-15% error reductions over our baseline (depending on the metric) and results that rival more complex state-of-the-art parsers.
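
The abstract describes, but does not show, the multi-task setup. As a minimal illustrative sketch (not the authors' architecture), the PyTorch code below implements hard parameter sharing, the standard form of the multi-task regularization the abstract refers to: one bidirectional LSTM encoder is shared across tasks, with a separate output layer per task, so gradients from auxiliary tasks and alternative views regularize the shared encoder. All dimensions, label counts, and the task names "rst" and "aux_pos" are hypothetical.

```python
import torch
import torch.nn as nn

class MultiTaskBiLSTM(nn.Module):
    """Shared BiLSTM encoder with one classifier head per task."""

    def __init__(self, vocab_size, emb_dim, hidden_dim, task_label_sizes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared encoder: updated by gradients from every task.
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # One linear head per task (main task plus auxiliary tasks/views).
        self.heads = nn.ModuleDict({
            task: nn.Linear(2 * hidden_dim, n_labels)
            for task, n_labels in task_label_sizes.items()
        })

    def forward(self, token_ids, task):
        states, _ = self.encoder(self.embed(token_ids))
        return self.heads[task](states)  # (batch, seq_len, n_labels)

# Hypothetical label inventories: a sequential discourse-parsing view
# ("rst") and an auxiliary sequence-labeling task ("aux_pos").
model = MultiTaskBiLSTM(vocab_size=10000, emb_dim=64, hidden_dim=128,
                        task_label_sizes={"rst": 40, "aux_pos": 17})
loss_fn = nn.CrossEntropyLoss()
optim = torch.optim.Adam(model.parameters())

# Dummy batches; each update flows through the shared encoder.
batches = [("rst", torch.randint(0, 10000, (2, 12)),
            torch.randint(0, 40, (2, 12))),
           ("aux_pos", torch.randint(0, 10000, (2, 12)),
            torch.randint(0, 17, (2, 12)))]
for task, x, y in batches:
    logits = model(x, task)
    loss = loss_fn(logits.view(-1, logits.size(-1)), y.view(-1))
    optim.zero_grad()
    loss.backward()
    optim.step()
```

Alternating between tasks batch by batch, as above, is one common scheduling choice for this kind of multi-task training; the auxiliary losses act as a regularizer when main-task data is scarce.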

Original language: English
Title of host publication: The 26th International Conference on Computational Linguistics: Proceedings of COLING 2016: Technical Papers
Number of pages: 11
Publication date: 2016
Pages: 1903-1913
ISBN (Electronic): 978-4-87974-702-0
Publication status: Published - 2016
Event: The 26th International Conference on Computational Linguistics, Osaka, Japan
Duration: 11 Dec 2016 – 16 Dec 2016
Conference number: 26

