Abstract
The paper investigates the relation between emotions and feedback facial expressions in video- and audio-recorded Danish dyadic first encounters. In particular, we train a classifier on the manual annotations of the corpus in order to investigate to what extent the encoding of emotions contributes to the prediction of the feedback functions of facial expressions. This work builds upon and extends previous research on (a) the annotation and analysis of emotions in the corpus, which suggested that emotions are related to specific communicative functions, and (b) the prediction of feedback head movements using multimodal information. The results of the experiments show that information on multimodal behaviours which co-occur with the facial expressions improves the classifier's performance. The improvement in F-measure over the unimodal baseline is 0.269, a result parallel to that obtained for head movements in the same corpus. The experiments also show that the annotations of emotions contribute further to the prediction of feedback facial expressions, confirming the relation between the two. The best results are obtained by training the classifier on the shape of the facial expressions, co-occurring head movements, emotion labels, and the gesturer's and the interlocutor's speech; the resulting models can be used in applied systems.
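To illustrate the general setup described above, the following is a minimal sketch of such a classification experiment using scikit-learn. It is not the authors' implementation: the feature names, label values, classifier choice, and data are all illustrative assumptions, standing in for the manual corpus annotations the paper actually trains on.

```python
# Minimal sketch (not the paper's code): predict the feedback function of a
# facial expression from categorical multimodal annotations. All feature
# names, labels, and records below are hypothetical.
from sklearn.feature_extraction import DictVectorizer
from sklearn.metrics import f1_score
from sklearn.tree import DecisionTreeClassifier

# Hypothetical annotation records: shape of the facial expression plus
# co-occurring head movement, emotion label, and both speakers' speech.
records = [
    {"face": "smile", "head": "nod",   "emotion": "friendly",   "self_speech": "no",  "other_speech": "yes"},
    {"face": "smile", "head": "none",  "emotion": "friendly",   "self_speech": "yes", "other_speech": "no"},
    {"face": "frown", "head": "shake", "emotion": "uncertain",  "self_speech": "no",  "other_speech": "yes"},
    {"face": "raise", "head": "nod",   "emotion": "interested", "self_speech": "no",  "other_speech": "yes"},
]
labels = ["feedback-give", "none", "feedback-elicit", "feedback-give"]

vec = DictVectorizer()                       # one-hot encodes the categorical features
X = vec.fit_transform(records)
clf = DecisionTreeClassifier(random_state=0).fit(X, labels)

# Evaluate on held-out annotated expressions (here, a tiny illustrative set);
# the paper compares configurations by weighted F-measure in the same spirit.
test = [{"face": "smile", "head": "nod", "emotion": "friendly",
         "self_speech": "no", "other_speech": "yes"}]
gold = ["feedback-give"]
pred = clf.predict(vec.transform(test))
print(f1_score(gold, pred, average="weighted"))
```

In this spirit, the unimodal baseline would correspond to training on the facial-expression features alone, and the multimodal configurations to adding the head-movement, emotion, and speech features as above.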
Original language | English
---|---
Journal | Journal on Multimodal User Interfaces
Volume | 8
Pages (from-to) | 135-141
Number of pages | 7
ISSN | 1783-7677
DOI |
Status | Published - Jun 2014