Abstract
In linguistic annotation projects, we typically develop annotation guidelines to minimize disagreement. However, in this position paper we question whether we should actually limit disagreements between annotators rather than embracing them. We present an empirical analysis of part-of-speech annotated data sets that suggests that disagreements are systematic across domains and, to a certain extent, also across languages. This points to an underlying ambiguity rather than random errors. Moreover, a quantitative analysis of tag confusions reveals that the majority of disagreements are due to linguistically debatable cases rather than annotation errors. Specifically, we show that even in the absence of annotation guidelines, only 2% of annotator choices are linguistically unmotivated.
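The "quantitative analysis of tag confusions" mentioned above amounts to tallying, for each token, which pair of tags two annotators assigned. As a minimal sketch of that idea, the snippet below counts tag pairs over hypothetical parallel annotations (the tokens and tags are invented for illustration and are not from the paper's data):

```python
from collections import Counter

# Hypothetical POS tags assigned to the same eight tokens by two annotators.
annotator_a = ["NOUN", "VERB", "ADP", "NOUN", "ADJ", "NOUN", "VERB", "PRT"]
annotator_b = ["NOUN", "VERB", "ADP", "VERB", "ADJ", "NOUN", "NOUN", "ADP"]

# Count each (tag_a, tag_b) pair; off-diagonal pairs are disagreements.
confusions = Counter(zip(annotator_a, annotator_b))

disagreements = {pair: n for pair, n in confusions.items() if pair[0] != pair[1]}
rate = sum(disagreements.values()) / sum(confusions.values())

print(f"Disagreement rate: {rate:.1%}")
for (tag_a, tag_b), n in sorted(disagreements.items(), key=lambda x: -x[1]):
    print(f"{tag_a} vs. {tag_b}: {n}")
```

Inspecting which tag pairs dominate the off-diagonal counts (e.g. NOUN/VERB confusions on ambiguous forms) is what lets one separate systematic, linguistically debatable cases from one-off annotation errors.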
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) |
| Volume | 2 |
| Place of publication | Baltimore, Maryland |
| Publisher | Association for Computational Linguistics |
| Publication date | 2014 |
| Pages | 507–511 |
| Publication status | Published - 2014 |