Abstract
This paper deals with multimodal behaviours in an annotated corpus of first encounters. More specifically, it presents a study aimed at determining common relations between emotion-related facial expressions and co-occurring speech. Emotions are described through emotion labels and bipolar values in three emotional dimensions (Pleasure, Arousal, and Dominance), while the transcriptions of speech comprise words, quasi-words, pauses, and filled pauses. The study establishes that there is a strong relation between specific communicative aspects, facial expressions, and co-speech. We found that some emotion-denoting facial expressions always co-occur with speech and are often related to specific speech tokens, while others often occur unimodally, that is, without co-occurring speech. We also observed large individual differences in the number of facial expressions produced, but we did not find a correlation between the number of speech tokens and the number of facial expressions produced by the participants in the first encounters. Our study also confirms preceding research [31] suggesting that the most common emotions in first encounters partly depend on the specific social activity. These findings are important for understanding human behaviour in face-to-face communication, and they also contribute to the construction and evaluation of corpus-based models for plausible affective CogInfoCom systems.
| Original language | English |
| --- | --- |
| Journal | Intelligent Decision Technologies |
| Volume | 8 |
| Pages (from-to) | 255-263 |
| ISSN | 1872-4981 |
| Publication status | Published - 2014 |