Speech and gestures: computational linguistic studies

Activity: Talk or presentation types › Lecture and oral contribution

Description

Face-to-face communication is multimodal: at least two modalities are involved, the auditory (speech) and the visual (gestures). Speech and gestures are related semantically and temporally on many levels. Co-speech gestures, which comprise, e.g., head movements, facial expressions, body posture, and arm and hand gestures, are co-expressive but not redundant. Discovering the relation between speech and gestures is important for understanding communication, but it also has practical applications such as the construction of ICT. In the talk, I will present studies investigating multimodal communication from a computational-linguistic point of view. In particular, I will focus on the collection and annotation of multimodal corpora, which in this context are video- and audio-recorded monologues and dialogues, and on research conducted on these data at the Centre for Language Technology in order to investigate the relationship between speech and gestures at the prosodic, syntactic, semantic, and pragmatic levels.
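As a minimal illustration of what "temporally related" means in practice when working with annotated multimodal corpora, the sketch below pairs gesture annotations with the spoken words they overlap in time. The tier contents and labels are hypothetical examples, not data from the talk.

```python
# Hypothetical sketch: find which co-speech gestures temporally overlap
# which spoken words, given two annotation tiers with start/end times
# in seconds. All labels and timings below are invented examples.

from dataclasses import dataclass

@dataclass
class Annotation:
    start: float  # onset in seconds
    end: float    # offset in seconds
    label: str

def overlapping(a: Annotation, b: Annotation) -> bool:
    """Two intervals overlap iff each starts before the other ends."""
    return a.start < b.end and b.start < a.end

# Hypothetical word-level speech tier and gesture tier.
speech = [Annotation(0.0, 0.4, "this"), Annotation(0.4, 0.9, "way")]
gestures = [Annotation(0.3, 1.1, "pointing")]

pairs = [(g.label, w.label)
         for g in gestures for w in speech if overlapping(g, w)]
print(pairs)  # [('pointing', 'this'), ('pointing', 'way')]
```

In real corpora such tiers would typically be exported from an annotation tool rather than written by hand, but the overlap test itself is the same.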
Period: 9 Oct 2018
Event title: CLARIN Annual Conference 2018
Event type: Conference
Location: Pisa, Italy
Degree of Recognition: International