Abstract
We experiment with different ways of training LSTM networks to predict RST discourse trees. The main challenge for RST discourse parsing is the limited amount of training data. We combat this by regularizing our models using task supervision from related tasks as well as alternative views on discourse structures. We show that a simple LSTM sequential discourse parser takes advantage of this multi-view and multi-task framework, with 12-15% error reductions over our baseline (depending on the metric) and results that rival more complex state-of-the-art parsers.
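The multi-task regularization the abstract describes is commonly realized as hard parameter sharing: one sequence encoder trained jointly with separate output heads for the main discourse task and for auxiliary tasks. The sketch below is a minimal PyTorch illustration of that general idea, not the paper's actual architecture; the class name, dimensions, label counts, and the `batches` iterator are hypothetical assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskDiscourseLSTM(nn.Module):
    """Shared bi-LSTM encoder with one output head per task.

    Illustrative sketch only: names, sizes, and the auxiliary task
    are assumptions, not the configuration used in the paper.
    """
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=200,
                 num_discourse_labels=41, num_aux_labels=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Shared encoder: receives gradients from every task,
        # which acts as a regularizer when main-task data is scarce.
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        # Main task: per-position discourse labels.
        self.discourse_head = nn.Linear(2 * hidden_dim, num_discourse_labels)
        # Auxiliary task head (e.g., a related sequence-labeling task).
        self.aux_head = nn.Linear(2 * hidden_dim, num_aux_labels)

    def forward(self, token_ids, task="discourse"):
        states, _ = self.encoder(self.embed(token_ids))
        head = self.discourse_head if task == "discourse" else self.aux_head
        return head(states)  # (batch, seq_len, num_labels) logits

model = MultiTaskDiscourseLSTM(vocab_size=10_000)
opt = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

# Alternate main-task and auxiliary-task batches so the shared
# encoder is trained on both objectives. `batches` is a
# hypothetical iterator yielding ((tokens, labels), task_name).
for (tokens, labels), task in batches:
    logits = model(tokens, task=task)
    loss = loss_fn(logits.flatten(0, 1), labels.flatten())
    opt.zero_grad()
    loss.backward()
    opt.step()
```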
Original language | English |
---|---|
Title of host publication | The 26th International Conference on Computational Linguistics: Proceedings of COLING 2016: Technical Papers |
Number of pages | 11 |
Publication date | 2016 |
Pages | 1903-1913 |
ISBN (Electronic) | 978-4-87974-702-0 |
Publication status | Published - 2016 |
Event | The 26th International Conference on Computational Linguistics, Osaka, Japan, 11 Dec 2016 → 16 Dec 2016 (Conference number: 26) |
Conference
Conference | The 26th International Conference on Computational Linguistics |
---|---|
Number | 26 |
Country/Territory | Japan |
City | Osaka |
Period | 11/12/2016 → 16/12/2016 |