Regular Articles

Skeleton-based viewpoint invariant transformation for motion analysis

Author Affiliations
Yun Han

Tongji University, College of Electronics and Information Engineering, Cao an Road 4800 Dian Xin Building 629, Shanghai 201804, China

Sheng Luen Chung

National Taiwan University of Science and Technology, Department of Electrical Engineering, Taipei 10607, Taiwan

Jeng Sheng Yeh

Ming Chuan University, Department of Computer and Communication Engineering, Taipei 150001, Taiwan

Qi Jun Chen

Tongji University, College of Electronics and Information Engineering, Cao an Road 4800 Dian Xin Building 629, Shanghai 201804, China

J. Electron. Imaging. 23(4), 043021 (Aug 14, 2014). doi:10.1117/1.JEI.23.4.043021
History: Received April 21, 2014; Revised June 17, 2014; Accepted July 15, 2014

Abstract.  Viewpoint variation has been a major challenge in comparison-based image processing. Reducing or entirely removing viewpoint variation has been a common pursuit of many applications such as human motion analysis and gesture analysis. By exploiting the three-dimensional (3-D) skeletal joint information provided by RGB-D cameras such as Kinect, this study proposes a skeleton-based viewpoint invariant transformation (SVIT) technique that transforms the 3-D skeleton data into an orthogonal coordinate system constructed from the three most stable joints of the detected person’s upper torso: the left shoulder, the right shoulder, and the spine. The proposed 3-D transformation can eliminate observation variations caused not only by viewpoint differences but also by individual differences in body swing movement. With reference to the human activity database MSRDailyAct3D and our own recordings, experiments on human motion analysis were designed and conducted to evaluate the effectiveness of the proposed SVIT as a preprocessing step applied before comparing test data and sample data recorded from very different viewpoints. The results show that, with SVIT as a preprocessing step for activity analysis, the rate of correct activity identification increases from 45% to 60%. SVIT outperforms other state-of-the-art view-invariant human motion analysis methods based on body-centric approaches.
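The core of the transformation described in the abstract is building a body-centric orthonormal frame from the three upper-torso joints and re-expressing all skeleton joints in that frame. The sketch below illustrates the idea with numpy; the choice of origin (shoulder midpoint), axis ordering, and handedness are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def svit_frame(l_shoulder, r_shoulder, spine):
    """Build an orthonormal body frame from the three most stable
    upper-torso joints (axis conventions here are assumptions)."""
    l, r, s = (np.asarray(p, dtype=float) for p in (l_shoulder, r_shoulder, spine))
    origin = (l + r) / 2.0           # assumed origin: shoulder midpoint
    x = r - l                        # lateral axis along the shoulder line
    x /= np.linalg.norm(x)
    d = origin - s                   # rough "up" direction: spine -> shoulders
    y = d - np.dot(d, x) * x         # Gram-Schmidt: remove the lateral component
    y /= np.linalg.norm(y)
    z = np.cross(x, y)               # third axis completes a right-handed frame
    R = np.stack([x, y, z])          # rows of R are the basis vectors
    return origin, R

def transform_joints(joints, origin, R):
    """Express camera-space joints (N x 3) in the body frame, removing
    the viewpoint-dependent rotation and translation."""
    return (np.asarray(joints, dtype=float) - origin) @ R.T
```

Because the frame is anchored to the torso, rotating or translating the whole scene (i.e., moving the camera) rotates the basis vectors along with the joints, so the transformed coordinates stay unchanged; this is what makes a comparison between skeletons captured from different viewpoints meaningful.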

© 2014 SPIE and IS&T


