This paper presents an extension to our previously developed fusion framework [10], which combines a depth camera and an inertial sensor, in order to improve its view invariance for real-time human action recognition applications. A computationally efficient view estimation based on skeleton joints is used to select the most relevant depth training data when recognizing test samples. Two collaborative representation classifiers, one for depth features and one for inertial features, are appropriately weighted to generate a decision-making probability. Experimental results on a multi-view human action dataset show that this weighted extension improves recognition performance by about 5% over the equally weighted fusion deployed in our previous framework.
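To make the fusion step concrete, the following minimal Python sketch illustrates the general idea of combining two collaborative representation classifiers (CRC), one per modality, by a weighted sum of their class-probability outputs. This is not code from the paper: the function names, the softmax mapping from residuals to probabilities, and the weight value 0.6 are illustrative assumptions; the paper's actual weighting scheme may differ.

```python
import numpy as np

def crc_class_probabilities(D, labels, y, lam=1e-3):
    """Collaborative representation classifier (CRC) sketch:
    code the test sample y over the training dictionary D with an
    l2-regularized least-squares solve, then map per-class
    reconstruction residuals to a probability vector.
    D: (feature_dim, n_train), labels: (n_train,), y: (feature_dim,)."""
    # Ridge solution: alpha = (D^T D + lam I)^{-1} D^T y
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    classes = np.unique(labels)
    residuals = np.array([
        np.linalg.norm(y - D[:, labels == c] @ alpha[labels == c])
        for c in classes
    ])
    # Smaller residual -> higher probability (softmax on negated residuals;
    # an assumed mapping, not necessarily the paper's).
    scores = np.exp(-residuals)
    return classes, scores / scores.sum()

def weighted_fusion(p_depth, p_inertial, w_depth=0.6):
    """Weighted sum of the two modality-specific probability vectors.
    The weight here is a placeholder, not a value from the paper."""
    return w_depth * p_depth + (1.0 - w_depth) * p_inertial

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = np.repeat(np.arange(3), 10)        # 3 actions, 10 samples each
    D_depth = rng.standard_normal((50, 30))     # depth feature dictionary
    D_inert = rng.standard_normal((20, 30))     # inertial feature dictionary
    y_d, y_i = rng.standard_normal(50), rng.standard_normal(20)
    _, p_d = crc_class_probabilities(D_depth, labels, y_d)
    _, p_i = crc_class_probabilities(D_inert, labels, y_i)
    print(weighted_fusion(p_d, p_i))            # fused class probabilities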
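```

One appeal of CRC in this setting is that classification reduces to a single regularized least-squares solve per test sample, which is consistent with the real-time emphasis of the framework.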
Chen Chen, Huiyan Hao, Roozbeh Jafari, and Nasser Kehtarnavaz
"Weighted fusion of depth and inertial data to improve view invariance for real-time human action recognition", Proc. SPIE 10223, Real-Time Image and Video Processing 2017, 1022307 (1 May 2017); https://doi.org/10.1117/12.2261823