Experiments with crowdsourced re-annotation of a POS tagging data set

Dirk Hovy, Barbara Plank, Anders Søgaard

Abstract

Crowdsourcing lets us collect multiple annotations for an item from several annotators. Typically, these are annotations for non-sequential classification tasks. While there has been some work on crowdsourcing named entity annotations, researchers have largely assumed that syntactic tasks such as part-of-speech (POS) tagging cannot be crowdsourced. This paper shows that workers can actually annotate sequential data almost as well as experts. Further, we show that the models learned from crowdsourced annotations fare as well as the models learned from expert annotations in downstream tasks.

Original language: English
Title of host publication: Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Place of publication: Baltimore, Maryland
Publisher: Association for Computational Linguistics
Publication date: Jun 2014
Pages: 377-382
Publication status: Published - Jun 2014
