Abstract
Pointing gestures are pervasive in human referring actions, and are often combined with spoken descriptions. Combining gesture and speech naturally to refer to objects is an essential task in multimodal NLG systems. However, the way gesture and speech should be combined in a referring act remains an open question. In particular, it is not clear whether, in planning a pointing gesture in conjunction with a description, an NLG system should seek to minimise the redundancy between them, e.g. by letting the pointing gesture indicate locative information, with other, non-locative properties of a referent included in the description. This question has a bearing on whether the gestural and spoken parts of referring acts are planned separately or arise from a common underlying computational mechanism. This paper investigates this question empirically, using machine-learning techniques on a new corpus of dialogues involving multimodal references to objects. Our results indicate that human pointing strategies interact with descriptive strategies. In particular, pointing gestures are strongly associated with the use of locative features in referring expressions.
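As a purely illustrative sketch (not the analysis reported in the paper), the snippet below shows one way an association between pointing gestures and locative features in referring expressions could be tested on an annotated corpus. The record structure, the field names `has_pointing` and `has_locative`, and the toy data are assumptions made for illustration only.

```python
# Illustrative sketch only -- not the paper's actual pipeline or data.
from scipy.stats import chi2_contingency

# Toy stand-in for corpus annotations: each referring act is marked for
# whether it was accompanied by a pointing gesture and whether its spoken
# description contained a locative feature ("on the left", "next to", ...).
referring_acts = [
    {"has_pointing": True,  "has_locative": True},
    {"has_pointing": True,  "has_locative": True},
    {"has_pointing": True,  "has_locative": False},
    {"has_pointing": False, "has_locative": False},
    {"has_pointing": False, "has_locative": True},
    {"has_pointing": False, "has_locative": False},
]

# Build a 2x2 contingency table: pointing (yes/no) x locative (yes/no).
table = [[0, 0], [0, 0]]
for act in referring_acts:
    row = 0 if act["has_pointing"] else 1
    col = 0 if act["has_locative"] else 1
    table[row][col] += 1

# Chi-square test of independence; a small p-value would suggest that
# pointing gestures and locative descriptions do not occur independently.
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}")
```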
Original language | English |
---|---|
Title | Proceedings of the 14th European Workshop on Natural Language Generation: ENLG'13 |
Publisher | Association for Computational Linguistics |
Publication date | 2013 |
Pages | 82-91 |
ISBN (electronic) | 978-1-937284-56-5 |
Status | Published - 2013 |
Event | 14th European Workshop on Natural Language Generation (ENLG'13) - Sofia, Bulgaria. Duration: 8 Aug 2013 → 9 Aug 2013 |
Workshop
Workshop | 14th European Workshop on Natural Language Generation (ENLG'13) |
---|---|
Country/Territory | Bulgaria |
City | Sofia |
Period | 08/08/2013 → 09/08/2013 |