Big Data and Multimodal Communication: A Perspective View

Costanza Navarretta, Lucretia Oemig


Abstract

Humans communicate face-to-face through at least two modalities: the auditory modality, speech, and the visual modality, gestures, which comprise e.g. gaze movements, facial expressions, head movements, and hand gestures. The relation between speech and gesture is complex and depends in part on factors such as culture, the communicative situation, the interlocutors, and their relationship. Investigating these factors in real data is vital for studying multimodal communication and for building models to implement multimodal communicative interfaces able to interact naturally with individuals of different ages, cultures, and needs. In this paper, we discuss to what extent big data "in the wild", which are growing explosively on the internet, are useful for this purpose, also in light of legal aspects concerning the use of personal data, including multimodal data downloaded from social media.

Original language: English
Journal: Intelligent Systems Reference Library
Volume: 159
Pages (from-to): 167-184
ISSN: 1868-4394
Status: Published - 2019

