Learning when to point: A data-driven approach

Albert Gatt, Patrizia Paggio


Abstract

The relationship between how people describe objects and when they choose to point is complex and likely to be influenced by factors related to both perceptual and discourse context. In this paper, we explore these interactions using machine learning on a dialogue corpus to identify multimodal referential strategies that can be used in automatic multimodal generation. We show that the decision to use a pointing gesture depends on features of the accompanying description (especially whether it contains spatial information) and on visual properties, especially the distance or separation of a referent from the previously mentioned referent.
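
To illustrate the kind of decision the abstract describes, the following is a minimal sketch (not the authors' code or data) of a classifier that predicts whether a referring expression should be accompanied by a pointing gesture, based on two invented features standing in for those discussed in the paper: whether the description contains spatial information, and the distance of the referent from the previously mentioned one. The toy data, feature names, and labels are assumptions made purely for illustration.

# Hypothetical sketch: learning when to point from simple features.
# Not the authors' model; the data and features are invented.
from sklearn.tree import DecisionTreeClassifier

# Each row: [description_has_spatial_info (0/1), distance_to_previous_referent (normalized)]
X = [
    [1, 0.9],  # spatial description, referent far from the previous one
    [1, 0.2],
    [0, 0.8],
    [0, 0.1],
    [1, 0.7],
    [0, 0.3],
]
# Toy labels: 1 = speaker pointed, 0 = no pointing gesture
y = [1, 0, 1, 0, 1, 0]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Predict whether to point for a new referent: non-spatial description,
# but the referent is far from the previous one.
print(clf.predict([[0, 0.95]]))

In a generation system, such a classifier would sit alongside the content-selection component, so that the choice of gesture can be conditioned jointly on the planned description and on the visual scene.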

Original language: English
Title of host publication: Proceedings of the 25th International Conference on Computational Linguistics (COLING '14)
Number of pages: 10
Place of publication: Dublin, Ireland
Publisher: Association for Computational Linguistics
Publication date: 2014
Pages: 2007-2017
Publication status: Published - 2014
Event: COLING 2014 - Dublin, Ireland
Duration: 23 Aug 2014 - 29 Aug 2014

Conference

Conference: COLING 2014
Country/Territory: Ireland
City: Dublin
Period: 23/08/2014 - 29/08/2014
