Abstract
Supervised NLP tools and online services are often applied to data that differs substantially from the manually annotated data used during development. The performance loss observed in such cross-domain applications is often attributed to covariate shifts, with out-of-vocabulary effects as an important subclass. Many discriminative learning algorithms are sensitive to such shifts because highly indicative features may swamp other indicative features. Regularized and adversarial learning algorithms have been proposed to be more robust against covariate shifts. We present a new perceptron learning algorithm using antagonistic adversaries and compare it to previous proposals on 12 multilingual cross-domain part-of-speech tagging datasets. While previous approaches do not improve on our supervised baseline, our approach is better across the board, with an average 4% error reduction.
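The abstract does not spell out the training procedure, but the core idea of an antagonistic adversary can be sketched as follows: before each perceptron update, the adversary corrupts the example by removing the active features the current model relies on most, forcing the learner to spread weight over other indicative features instead of letting a few dominant ones swamp the rest. The function below is a minimal illustrative sketch under that assumption, not the paper's actual algorithm; the function name, the parameter `k` (number of features the adversary drops), and the sparse binary feature encoding are all hypothetical choices for this example.

```python
import random

def adversarial_perceptron(data, n_feats, epochs=5, k=1, seed=0):
    """Binary perceptron with a sketched 'antagonistic adversary':
    before each update, the adversary removes the k active features
    with the largest absolute weight under the current model.

    data: list of (feats, y) pairs, where feats is a list of active
          feature indices and y is +1 or -1.
    Returns the learned weight vector.
    """
    rng = random.Random(seed)
    w = [0.0] * n_feats
    for _ in range(epochs):
        rng.shuffle(data)
        for feats, y in data:
            # Adversary: drop the k active features the model trusts most
            # (always keep at least one feature so an update is possible).
            kept = sorted(feats, key=lambda j: abs(w[j]))[: max(len(feats) - k, 1)]
            score = sum(w[j] for j in kept)
            if y * score <= 0:  # mistake-driven update on the corrupted view
                for j in kept:
                    w[j] += y
    return w

# Toy data: feature 0 strongly indicates +1, feature 3 strongly indicates -1,
# with weaker co-occurring features 1-2 and 4-5.
data = [([0, 1, 2], 1), ([0, 1], 1), ([0, 2], 1),
        ([3, 4, 5], -1), ([3, 4], -1), ([3, 5], -1)]
w = adversarial_perceptron(data, n_feats=6)
```

Because the adversary hides the strongest feature at training time, weight mass also accumulates on the weaker indicative features, which is the intended robustness effect in cross-domain settings where the dominant feature may be out of vocabulary.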
Original language | English |
---|---|
Title | Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics |
Volume | 2 |
Publisher | Association for Computational Linguistics |
Publication date | 2013 |
Pages | 640-644 |
ISBN (electronic) | 978-1-937284-50-3 |
Status | Published - 2013 |