Estimation of Joint types and Joint Limits from Motion capture data

Morten Pol Engell-Nørregård, Kenny Erleben


Abstract

It is time-consuming for an animator to explicitly model the joint types and joint limits of articulated figures. In this paper we describe a simple and fast approach to automated joint estimation from motion capture data of articulated figures. Our method makes joint modeling more efficient and less time-consuming for the animator by providing a good initial estimate that she can fine-tune or extend as she wishes, without restricting her artistic freedom. Our method is simple, easy to implement, and tailored to the types of articulated figures used in interactive animation, such as computer games. Other work on joint limit modeling considers more complex, general-purpose models; however, these are not immediately suitable for the inverse kinematics skeletons used in interactive applications.
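
The paper itself is not reproduced in this record, but the abstract suggests the core idea: the motion observed in capture data bounds the feasible motion of each joint. The following is a minimal illustrative sketch, not the authors' algorithm. It assumes motion capture data already converted to per-frame joint angles in degrees; the function names, the safety margin, and the activity threshold are all assumptions made for illustration. A naive estimate takes per-DOF joint limits as the min/max of the observed angles, and guesses a joint type from how many DOFs actually move.

import numpy as np

def estimate_joint_limits(angles, margin_deg=5.0):
    # angles: (num_frames, num_dofs) array of joint angles in degrees,
    # e.g. Euler angles recovered from motion capture frames.
    # Returns (lower, upper) limit arrays of shape (num_dofs,), padded
    # by a small safety margin since the capture rarely covers the
    # full feasible range.
    angles = np.asarray(angles, dtype=float)
    lower = angles.min(axis=0) - margin_deg
    upper = angles.max(axis=0) + margin_deg
    return lower, upper

def guess_joint_type(angles, threshold_deg=2.0):
    # Guess a joint type from how many DOFs show non-negligible motion:
    # 0 active DOFs -> 'fixed', 1 -> 'hinge', otherwise 'ball'.
    lower, upper = estimate_joint_limits(angles, margin_deg=0.0)
    active = int(np.sum((upper - lower) > threshold_deg))
    return {0: "fixed", 1: "hinge"}.get(active, "ball")

# Example: synthetic elbow-like data, motion almost entirely about one axis.
rng = np.random.default_rng(0)
frames = np.column_stack([
    rng.uniform(0.0, 140.0, 500),   # flexion/extension: large range
    rng.normal(0.0, 0.2, 500),      # residual noise on the other axes
    rng.normal(0.0, 0.2, 500),
])
print(guess_joint_type(frames))            # expected: 'hinge'
print(estimate_joint_limits(frames))       # per-DOF (lower, upper) limits

A real implementation would also have to handle angle wrap-around, noise-robust range estimates (e.g. percentiles instead of raw min/max), and coupled DOFs, which a simple per-axis min/max ignores.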
Original language: English
Title of host publication: WSCG '2009: The 17th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, in co-operation with Eurographics, University of West Bohemia, Plzen, Czech Republic, February 2-5, 2009
Editors: Vaclav Skala, Min Chen
Publication date: 2009
Pages: 9-16
ISBN (Electronic): 978-80-86943-93-0
Publication status: Published - 2009
Event: The 17th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2009 - Plzen, Czech Republic
Duration: 2 Feb 2009 - 5 Feb 2009
Conference number: 17
