Abstract
People use both their speech and their body when they communicate face to face; human communication is thus multimodal. The development of multimodal CogInfoCom systems requires models of the relation between the various modalities, but many studies have shown that multimodal behaviours depend on numerous factors, including culture, setting and communicative situation. Thus, annotated multimodal corpora of different types must be produced. However, annotating multimodal corpora is extremely resource-consuming. Therefore, it is important to reuse existing resources, also for annotating unseen data in different domains. The main aims of this paper are to investigate a) the distance between the annotations of two multimodal corpora of different types, b) the extent to which the annotations of one corpus can be used as training data to identify communicative behaviours in the second corpus automatically, and c) the effect of the amount of annotations on classification. The results of our study indicate that using the annotations of one corpus to annotate specific communicative phenomena in another corpus gives good results with respect to a simple majority classifier, but they also confirm that multimodal behaviours vary extensively from one type of conversation to the other. Our experiments also indicate that the results of supervised learning on conversational data of limited size can be improved by using the annotations of corpora of different types.
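The cross-corpus setting described in the abstract, training on the annotations of one corpus, testing on another, and comparing against a majority-class baseline, can be sketched in a few lines. The feature names, labels and toy data below are purely illustrative assumptions, not taken from the paper, and the "classifier" is a trivial memorising stand-in for a real learner:

```python
from collections import Counter

# Hypothetical toy annotations: (feature tuple, communicative-function label).
# Features and labels are invented for illustration only.
corpus_a = [(("nod", "short"), "feedback"), (("nod", "long"), "feedback"),
            (("shake", "short"), "disagree"), (("nod", "short"), "feedback"),
            (("shake", "long"), "disagree")]
corpus_b = [(("nod", "short"), "feedback"), (("shake", "short"), "disagree"),
            (("nod", "long"), "feedback"), (("shake", "long"), "disagree")]

def majority_baseline(train):
    """Always predict the most frequent label in the training corpus."""
    majority = Counter(label for _, label in train).most_common(1)[0][0]
    return lambda feats: majority

def memorising_classifier(train):
    """Predict the most frequent label seen with identical features,
    falling back to the training majority (a stand-in for a real learner)."""
    by_feat = {}
    for feats, label in train:
        by_feat.setdefault(feats, Counter())[label] += 1
    majority = Counter(label for _, label in train).most_common(1)[0][0]
    def predict(feats):
        return by_feat[feats].most_common(1)[0][0] if feats in by_feat else majority
    return predict

def accuracy(model, test):
    """Fraction of test items whose predicted label matches the gold label."""
    return sum(model(feats) == label for feats, label in test) / len(test)

# Cross-corpus evaluation: train on corpus A, test on corpus B.
baseline = majority_baseline(corpus_a)
model = memorising_classifier(corpus_a)
print(accuracy(baseline, corpus_b), accuracy(model, corpus_b))
```

On this toy data the learned model beats the majority baseline; the paper's point is that such gains survive, to a degree, even when the training and test corpora are of different conversational types.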
Original language | English |
---|---|
Title | IEEE 4th International Conference on Cognitive Infocommunications |
Publisher | IEEE |
Publication date | 2013 |
Pages | 195-200 |
Status | Published - 2013 |
Activities
- 1 Lecture and oral contribution

Transfer Learning in Multimodal Corpora
Navarretta, C. (Speaker)
3 Dec 2013. Activity: Talk or presentation types › Lectures and oral contributions