Abstract
This paper presents the results of a machine learning experiment
conducted on annotated gesture data from two case studies (Danish
and Estonian). The data mainly concern facial displays, which are
annotated with attributes relating to shape and dynamics as well as
communicative function. The results of the experiments show that the
granularity of the attributes used is appropriate for the task of
distinguishing the desired communicative functions. This is a
promising result in view of future automation of the annotation
task.
| Original language | English |
|---|---|
| Title | Proceedings of the 5th International Workshop, MLMI 2008 |
| Editors | Andrei Popescu-Belis, Rainer Stiefelhagen |
| Number of pages | 12 |
| Publisher | Springer |
| Publication date | 2008 |
| Pages | 38-49 |
| ISBN (Print) | 978-3-540-85852-2 |
| Status | Published - 2008 |
| Event | Machine Learning for Multimodal Interaction, 5th International Workshop, MLMI 2008, Utrecht, Netherlands. Duration: 8 Sep 2008 → 10 Sep 2008. Conference number: 5 |
Conference

| Conference | Machine Learning for Multimodal Interaction, 5th International Workshop, MLMI 2008 |
|---|---|
| Number | 5 |
| Country/Territory | Netherlands |
| City | Utrecht |
| Period | 08/09/2008 → 10/09/2008 |
| Series | Lecture Notes in Computer Science LNCS 5237 |
|---|---|
| ISSN | 0302-9743 |