Crowdsourcing and annotating NER for Twitter #drift

Hege Fromreide, Dirk Hovy, Anders Søgaard

20 citations (Scopus)

Abstract

We present two new NER datasets for Twitter: a manually annotated set of 1,467 tweets (Cohen's κ = 0.942) and a set of 2,975 expert-corrected, crowdsourced NER-annotated tweets from the dataset described in Finin et al. (2010). In our experiments with these datasets, we observe two important points: (a) language drift on Twitter is significant, and while off-the-shelf systems have been reported to perform well on in-sample data, they often perform poorly on new samples of tweets; (b) state-of-the-art performance across various datasets can be obtained from crowdsourced annotations, making it more feasible to "catch up" with language drift.
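The agreement score reported above (κ = 0.942) is a chance-corrected inter-annotator agreement statistic. As background only, here is a minimal sketch of how Cohen's kappa is computed from two annotators' token-level NER labels; the function name and the sample labels are illustrative, not taken from the paper or its datasets:

```python
from collections import Counter

def cohens_kappa(ann1, ann2):
    """Cohen's kappa for two equal-length sequences of categorical labels."""
    assert len(ann1) == len(ann2) and len(ann1) > 0
    n = len(ann1)
    # Observed agreement: fraction of positions where the annotators agree.
    p_o = sum(a == b for a, b in zip(ann1, ann2)) / n
    # Expected agreement under chance, from each annotator's label marginals.
    c1, c2 = Counter(ann1), Counter(ann2)
    p_e = sum((c1[label] / n) * (c2[label] / n) for label in set(c1) | set(c2))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical token-level BIO labels from two annotators:
a = ["O", "O", "B-PER", "O"]
b = ["O", "O", "B-PER", "B-PER"]
print(cohens_kappa(a, b))  # agreement corrected for chance
```

Values near 1 indicate near-perfect agreement beyond chance; the 0.942 reported for the manually annotated set is in that range.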

Original language: English
Title: Proceedings of the 9th International Conference on Language Resources and Evaluation (LREC 2014)
Publisher: European Language Resources Association
Publication date: 2014
Status: Published - 2014
