Detecting head movements in video-recorded dyadic conversations

Abstract

This paper addresses the automatic recognition of head movements in videos of face-to-face dyadic conversations. We present an approach in which the recognition of head movements is cast as a multimodal frame classification problem based on visual and acoustic features. The visual features include velocity, acceleration, and jerk values associated with head movements, while the acoustic ones are pitch and intensity measurements from the co-occurring speech. We present the results obtained by training and testing a number of classifiers on manually annotated data from two conversations. The best-performing classifier, a Multilayer Perceptron trained on all the features, achieves an accuracy of 0.75 and outperforms the mono-modal baseline classifier.
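As a rough illustration of the frame-classification setup the abstract describes, the sketch below derives per-frame velocity, acceleration, and jerk magnitudes from a head-position track via finite differences, concatenates them with per-frame pitch and intensity values, and trains a Multilayer Perceptron. This is a minimal sketch using scikit-learn and synthetic placeholder data; the function kinematic_features, the frame rate, and all inputs are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of multimodal frame classification for head-movement
# detection. All names and data here are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def kinematic_features(positions, fps=25.0):
    """Per-frame velocity, acceleration, and jerk magnitudes computed
    from an (n_frames, 2) head-position track via finite differences."""
    dt = 1.0 / fps
    velocity = np.gradient(positions, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    jerk = np.gradient(acceleration, dt, axis=0)
    # Collapse each vector quantity to its magnitude per frame.
    return np.column_stack([
        np.linalg.norm(velocity, axis=1),
        np.linalg.norm(acceleration, axis=1),
        np.linalg.norm(jerk, axis=1),
    ])

# Placeholder inputs standing in for a tracked head position and for
# per-frame pitch and intensity extracted from the co-occurring speech
# (e.g. with a tool such as Praat).
rng = np.random.default_rng(0)
positions = rng.normal(size=(1000, 2)).cumsum(axis=0)  # synthetic track
pitch = rng.normal(size=(1000, 1))                     # synthetic F0
intensity = rng.normal(size=(1000, 1))                 # synthetic dB
labels = rng.integers(0, 2, size=1000)                 # movement / none

# Concatenate visual (kinematic) and acoustic features per frame.
X = np.hstack([kinematic_features(positions), pitch, intensity])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("frame accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```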
Original language: English
Title of host publication: Proceedings of the International Conference on Multimodal Interaction: Adjunct
Number of pages: 6
Place of publication: New York
Publisher: Association for Computing Machinery
Publication date: 16 Oct 2018
Pages: 1-6
ISBN (Print): 978-1-4503-6002-9
Publication status: Published - 16 Oct 2018
