Abstract
The relationship between how people describe objects and when they choose to point is complex and likely to be influenced by factors related to both perceptual and discourse context. In this paper, we explore these interactions using machine learning on a dialogue corpus to identify multimodal referential strategies that can be used in automatic multimodal generation. We show that the decision to use a pointing gesture depends on features of the accompanying description (especially whether it contains spatial information) and on visual properties, especially the distance or separation of a referent from its previous referent.
| Original language | English |
| --- | --- |
| Title | Proceedings of the 25th International Conference on Computational Linguistics (COLING '14) |
| Number of pages | 10 |
| Place of publication | Dublin, Ireland |
| Publisher | Association for Computational Linguistics |
| Publication date | 2014 |
| Pages | 2007-2017 |
| Status | Published - 2014 |
| Event | Coling 2014 - Dublin, Ireland. Duration: 23 Aug 2014 → 29 Aug 2014 |
Conference

| Conference | Coling 2014 |
| --- | --- |
| Country/Territory | Ireland |
| City | Dublin |
| Period | 23/08/2014 → 29/08/2014 |