Manifolds for pose tracking from monocular video
Saurav Basu, Joshua Poulin, and Scott T. Acton
Abstract
We formulate a simple human-pose tracking theory from monocular video based on the fundamental relationship between changes in pose and image motion vectors. We investigate the natural embedding of the low-dimensional body pose space into a high-dimensional space of body configurations that behaves locally in a linear manner. The embedded manifold facilitates the decomposition of the image motion vectors into basis motion vector fields of the tangent space to the manifold. This approach benefits from the style invariance of image motion flow vectors, and experiments to validate the fundamental theory show reasonable accuracy (within 4.9 deg of the ground truth).
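To illustrate the core idea of the abstract, the following is a minimal, hypothetical sketch (not the authors' code): given a set of basis motion vector fields spanning the tangent space to the pose manifold at the current pose, an observed image motion field is decomposed onto those basis fields by least squares, yielding the local pose-parameter increments. Array shapes, the helper name, and the toy data are assumptions made for illustration only.

```python
import numpy as np

def estimate_pose_change(basis_fields, observed_flow):
    """Least-squares decomposition of an observed flow field onto tangent basis fields.

    basis_fields: (K, H, W, 2) array -- K basis motion vector fields of the
                  tangent space to the pose manifold at the current pose.
    observed_flow: (H, W, 2) array -- image motion vectors between two frames.
    Returns the K coefficients, i.e., the local pose-parameter increments.
    """
    K = basis_fields.shape[0]
    A = basis_fields.reshape(K, -1).T          # (H*W*2, K) design matrix
    b = observed_flow.reshape(-1)              # flattened observed motion
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

# Toy usage with synthetic data (illustrative only).
rng = np.random.default_rng(0)
H, W, K = 32, 32, 3
B = rng.standard_normal((K, H, W, 2))                      # hypothetical basis fields
true_delta = np.array([0.5, -0.2, 0.1])                    # hypothetical pose change
v = np.tensordot(true_delta, B, axes=1)                    # synthesized image motion
v += 0.01 * rng.standard_normal((H, W, 2))                 # observation noise
print(estimate_pose_change(B, v))                          # approximately recovers true_delta
```

Because the embedded manifold behaves locally in a linear manner, this per-frame linear decomposition is repeated as the pose evolves, with the basis fields recomputed at each new point on the manifold.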
© 2015 SPIE and IS&T 1017-9909/2015/$25.00
Saurav Basu, Joshua Poulin, and Scott T. Acton "Manifolds for pose tracking from monocular video," Journal of Electronic Imaging 24(2), 023014 (18 March 2015). https://doi.org/10.1117/1.JEI.24.2.023014
Published: 18 March 2015
KEYWORDS
Video, Cameras, Head, Motion estimation, Model-based design, Optical tracking, Statistical analysis