How Well can We Learn Interpretable Entity Types from Text?

Dirk Hovy


Abstract

Many NLP applications rely on type systems to represent higher-level classes. Domain-specific type systems are more informative, but have to be manually tailored to each task and domain, making them inflexible and expensive. We investigate a largely unsupervised approach to learning interpretable, domain-specific entity types from unlabeled text. It assumes that any common noun in a domain can function as a potential entity type, and uses those nouns as hidden variables in an HMM. To constrain training, it extracts co-occurrence dictionaries of entities and common nouns from the data. We evaluate the learned types by measuring their prediction accuracy for verb arguments in several domains. The results suggest that it is possible to learn domain-specific entity types from unlabeled data. We show significant improvements over an informed baseline, reducing the error rate by 56%.
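As a reading aid, the following is a minimal sketch in plain Python with NumPy of the general mechanism the abstract describes: hidden states drawn from common nouns, with a co-occurrence dictionary zeroing out emissions of an entity from types it was never seen with. All type names, entities, and the dictionary contents are invented for illustration, and the snippet shows only a constrained Viterbi decode over a toy HMM; it is not the paper's training procedure.

# Minimal sketch (not the paper's implementation): candidate type nouns
# serve as hidden states, and a co-occurrence dictionary restricts which
# types may emit a given entity.
import numpy as np

# Hypothetical toy domain: candidate types extracted as common nouns.
types = ["drug", "disease", "patient"]           # hidden states
vocab = ["aspirin", "headache", "john", "took"]  # observations

# Hypothetical co-occurrence dictionary: entity -> types it was seen with.
# Words absent from the dictionary may be emitted by any type.
cooc = {
    "aspirin": {"drug"},
    "headache": {"disease"},
    "john": {"patient"},
}

T, V = len(types), len(vocab)
t_idx = {t: i for i, t in enumerate(types)}
w_idx = {w: i for i, w in enumerate(vocab)}

# Emission mask: 1 where the dictionary allows the (type, word) pair.
mask = np.ones((T, V))
for word, allowed in cooc.items():
    mask[:, w_idx[word]] = 0.0
    for t in allowed:
        mask[t_idx[t], w_idx[word]] = 1.0

# Uniform initialization; the mask zeroes out forbidden emissions.
emit = mask / mask.sum(axis=1, keepdims=True)
trans = np.full((T, T), 1.0 / T)  # uniform transitions
start = np.full(T, 1.0 / T)

def viterbi(words):
    """Most likely type sequence under the constrained HMM (log space)."""
    obs = [w_idx[w] for w in words]
    logp = np.log(np.where(emit > 0, emit, 1e-300))
    logt = np.log(trans)
    delta = np.log(start) + logp[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = delta[:, None] + logt  # rows: previous state, cols: current
        back.append(scores.argmax(axis=0))
        delta = scores.max(axis=0) + logp[:, o]
    path = [int(delta.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return [types[i] for i in reversed(path)]

print(viterbi(["john", "took", "aspirin"]))
# -> ['patient', 'drug', 'drug']: the dictionary pins "john" to patient
# and "aspirin" to drug; the middle token's type is arbitrary under the
# uniform toy parameters.

In the actual approach, such dictionary constraints would shape unsupervised HMM training over the whole corpus rather than a single decode; the sketch only illustrates how co-occurrence evidence restricts the hidden type space.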

Original language: English
Title of host publication: Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Place of publication: Baltimore, Maryland
Publisher: Association for Computational Linguistics
Publication date: 2014
Pages: 482-487
Publication status: Published - 2014
