Abstract
This paper presents the results of a machine learning experiment
conducted on annotated gesture data from two case studies (Danish
and Estonian). The data concern mainly facial displays, which are
annotated with attributes relating to shape and dynamics, as well as
communicative function. The results of the experiments show that the
granularity of the attributes used is appropriate for the task of
distinguishing the desired communicative functions. This is a
promising result in view of a future automation of the annotation
task.
Original language | English
---|---
Title of host publication | Proceedings of the 5th International Workshop, MLMI 2008
Editors | Andrei Popescu-Belis, Rainer Stiefelhagen
Number of pages | 12
Publisher | Springer
Publication date | 2008
Pages | 38-49
ISBN (Print) | 978-3-540-85852-2
Publication status | Published - 2008
Event | Machine Learning for Multimodal Interaction, 5th International Workshop, MLMI 2008 (conference number 5), Utrecht, Netherlands, 8 Sept 2008 → 10 Sept 2008
Conference
Conference | Machine Learning for Multimodal Interaction, 5th International Workshop, MLMI 2008
---|---
Number | 5
Country/Territory | Netherlands
City | Utrecht
Period | 08/09/2008 → 10/09/2008
Series | Lecture Notes in Computer Science, LNCS 5237
---|---
ISSN | 0302-9743