Abstract
This paper is about the automatic annotation of head movements in videos of face-to-face conversations. Manual annotation of gestures is resource-consuming, and modelling gesture behaviours in different types of communicative settings requires many types of annotated data. Developing methods for automatic annotation is therefore crucial. We present an approach in which an SVM classifier learns to classify head movements based on measurements of velocity, acceleration, and the third derivative of position with respect to time, jerk. The trained classifier is then used to add head-movement annotations to new video data. Evaluated against manual annotations of the same data, the automatic annotation achieves an accuracy of 73.47%. The results also show that including jerk improves accuracy.
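The feature set named in the abstract (velocity, acceleration, and jerk of tracked head position) can be sketched with simple finite differences over a sampled position signal. This is an illustrative reconstruction, not the authors' code: the function name `derivatives` and the sampling interval `dt` are assumptions for the sketch.

```python
def derivatives(positions, dt=1.0):
    """Compute finite-difference velocity, acceleration, and jerk
    from a 1-D sequence of head positions sampled at interval dt.

    Each derivative shortens the sequence by one sample, so jerk
    needs at least four position samples.
    """
    def diff(xs):
        # First-order forward difference, scaled by the sample interval.
        return [(b - a) / dt for a, b in zip(xs, xs[1:])]

    velocity = diff(positions)          # 1st derivative of position
    acceleration = diff(velocity)       # 2nd derivative
    jerk = diff(acceleration)           # 3rd derivative
    return velocity, acceleration, jerk


# Example: positions following x(t) = t^2 give constant acceleration
# and zero jerk.
v, a, j = derivatives([0, 1, 4, 9, 16])
print(v)  # [1.0, 3.0, 5.0, 7.0]
print(a)  # [2.0, 2.0, 2.0]
print(j)  # [0.0, 0.0]
```

In a setup like the paper's, per-frame feature vectors built from these three signals would then be fed to an SVM classifier (e.g. scikit-learn's `SVC`) trained on the manually annotated frames.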
Original language | English
---|---
Article number | 003
Journal | Linköping Electronic Conference Proceedings
Issue number | 141
Pages (from-to) | 10-17
ISSN | 1650-3740
Status | Published - 2017
Event | Nordic and European Symposium on Multimodal Communication: 7th Nordic and 4th European Symposium on Multimodal Communication - University of Copenhagen, Copenhagen, Denmark. Duration: 29 Sep 2016 → 30 Sep 2016. Conference number: 4th, 7th. http://mmsym.org/?page_id=412
Conference

Conference | Nordic and European Symposium on Multimodal Communication
---|---
Number | 4th, 7th
Location | University of Copenhagen
Country/Territory | Denmark
City | Copenhagen
Period | 29/09/2016 → 30/09/2016
Internet address | http://mmsym.org/?page_id=412