Transfer Learning in Multimodal Corpora

Abstract

People use both their speech and their body when they communicate face to face; human communication is thus multimodal. The development of multimodal CogInfoCom systems requires models of the relation between the various modalities, but many studies have shown that multimodal behaviours depend on numerous factors, including the culture, the setting and the communicative situation. Thus, annotated multimodal corpora of different types must be produced. However, annotating multimodal corpora is extremely resource-consuming, so it is important to reuse existing resources, also for annotating unseen data in different domains. The main aims of this paper are to investigate a) the distance between the annotations of two multimodal corpora of different types, b) the extent to which the annotations of one corpus can be used as training data to automatically identify communicative behaviours in the other corpus, and c) the effect of the amount of annotated data on classification. The results of our study indicate that using the annotations of one corpus to annotate specific communicative phenomena in another corpus gives good results with respect to a simple majority classifier, but they also confirm that multimodal behaviours vary extensively from one type of conversation to another. Our experiments also indicate that the results of supervised learning on conversational data of limited size can be improved by using the annotations of corpora of different types.
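The evaluation setup described in the abstract, training on one annotated corpus, testing on a corpus of a different type, and comparing against a majority classifier, can be sketched as follows. This is a minimal illustration, not the paper's actual method: the synthetic feature vectors stand in for real multimodal annotation features, the `shift` parameter is a hypothetical stand-in for the domain gap between conversation types, and a simple nearest-centroid classifier replaces whatever learner the authors used.

```python
# Hedged sketch of cross-corpus transfer evaluation (assumed setup, not
# the paper's implementation): train on corpus A, test on corpus B, and
# compare against a majority-class baseline.
import random
from collections import Counter

random.seed(0)

def make_corpus(n, shift):
    """Synthetic 2-class data; `shift` models the gap between corpus types."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        x = [random.gauss(label + shift, 0.5) for _ in range(3)]
        data.append((x, label))
    return data

def majority_label(train):
    # The simple majority baseline mentioned in the abstract.
    return Counter(lbl for _, lbl in train).most_common(1)[0][0]

def centroid(train, label):
    pts = [x for x, lbl in train if lbl == label]
    return [sum(col) / len(pts) for col in zip(*pts)]

def nearest_centroid_predict(x, centroids):
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lbl: dist(x, centroids[lbl]))

def accuracy(preds, test):
    return sum(p == lbl for p, (_, lbl) in zip(preds, test)) / len(test)

corpus_a = make_corpus(300, shift=0.0)   # source corpus (annotated)
corpus_b = make_corpus(200, shift=0.3)   # target corpus (different type)

maj = majority_label(corpus_a)
cents = {lbl: centroid(corpus_a, lbl) for lbl in (0, 1)}

base_acc = accuracy([maj] * len(corpus_b), corpus_b)
tran_acc = accuracy(
    [nearest_centroid_predict(x, cents) for x, _ in corpus_b], corpus_b
)
print(f"majority baseline: {base_acc:.2f}  transfer: {tran_acc:.2f}")
```

With a modest domain shift, the classifier trained on corpus A still clearly beats the majority baseline on corpus B, mirroring the abstract's finding that transferred annotations give good results relative to that baseline even though behaviours differ across conversation types.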

Original language: English
Title of host publication: IEEE 4th International Conference on Cognitive Infocommunications
Publisher: IEEE
Publication date: 2013
Pages: 195-200
Publication status: Published - 2013
