Regular Articles

Robust visual multitask tracking via composite sparse model

Author Affiliations
Bo Jin

Shanghai Jiao Tong University, School of Aeronautics and Astronautics, 800 Dongchuan Road, Shanghai 200240, China

Zhongliang Jing

Shanghai Jiao Tong University, School of Aeronautics and Astronautics, 800 Dongchuan Road, Shanghai 200240, China

Meng Wang

Chinese Academy of Sciences, Shanghai Institute of Technical Physics, 500 Yutian Road, Shanghai 200083, China

Han Pan

Shanghai Jiao Tong University, School of Aeronautics and Astronautics, 800 Dongchuan Road, Shanghai 200240, China

J. Electron. Imaging. 23(6), 063022 (Dec 24, 2014). doi:10.1117/1.JEI.23.6.063022
History: Received June 23, 2014; Accepted November 18, 2014

Abstract. Recently, multitask learning was applied to visual tracking by learning sparse particle representations jointly, which led to the so-called multitask tracking algorithm (MTT). Although MTT achieves impressive tracking performance by mining the interdependencies between particles, it underestimates the individual features of each particle: the L1,q norm regularization it uses assumes that all features are shared among all particles and yields nearly identical representation coefficients in the nonsparse rows. We propose a composite sparse multitask tracking algorithm (CSMTT). We develop a composite sparse model that formulates the object appearance as a combination of a shared feature component, an individual feature component, and an outlier component. The composite sparsity is achieved via L1,q and L1,1 norm minimization and is optimized by the alternating direction method of multipliers (ADMM), which provides favorable reconstruction performance and impressive computational efficiency. Moreover, a dynamic dictionary updating scheme is proposed to capture appearance changes. CSMTT is tested on real-world video sequences under various challenges, and experimental results show that the composite sparse model achieves noticeably lower reconstruction errors and higher computational speed than traditional sparse models, and that CSMTT consistently outperforms seven state-of-the-art trackers.
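To make the appearance model above concrete, here is a minimal Python sketch of a composite sparse decomposition in the same spirit; it is illustrative only and not the authors' implementation. Particle observations Y are decomposed over a template dictionary D as Y ≈ D(P + Q) + E, where P is row-sparse (features shared across particles), Q is entry-wise sparse (particle-specific features), and E absorbs sparse outliers. For brevity the sketch uses plain proximal-gradient updates with an L2,1 row penalty standing in for the paper's mixed-norm shared term, rather than the ADMM solver named in the abstract; all names and parameters (composite_sparse_coding, lam_p, lam_q, lam_e) are assumptions made for illustration.

import numpy as np


def soft_threshold(X, tau):
    # Entry-wise soft-thresholding: proximal operator of the L1,1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)


def row_soft_threshold(X, tau):
    # Row-wise group soft-thresholding: proximal operator of an L2,1-type
    # row-sparsity penalty (stand-in for the mixed shared-feature norm).
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0) * X


def composite_sparse_coding(D, Y, lam_p=0.1, lam_q=0.1, lam_e=0.1, n_iter=200):
    # Decompose particle observations Y (d x n) over dictionary D (d x k) as
    #     Y ~= D @ (P + Q) + E,
    # with P row-sparse (shared features), Q entry-wise sparse (individual
    # features), and E entry-wise sparse (outliers). Solved here by proximal
    # gradient descent on the quadratic fit term.
    d, n = Y.shape
    k = D.shape[1]
    P, Q, E = np.zeros((k, n)), np.zeros((k, n)), np.zeros((d, n))
    # L bounds the Lipschitz constant of the joint gradient; the step is 1 / L.
    L = 2.0 * np.linalg.norm(D, 2) ** 2 + 1.0
    for _ in range(n_iter):
        R = D @ (P + Q) + E - Y      # residual at the current estimate
        G = D.T @ R                  # gradient with respect to P and to Q
        P = row_soft_threshold(P - G / L, lam_p / L)
        Q = soft_threshold(Q - G / L, lam_q / L)
        E = soft_threshold(E - R / L, lam_e / L)
    return P, Q, E


if __name__ == "__main__":
    # Toy usage: 10 particles of dimension 50 coded over 20 templates.
    rng = np.random.default_rng(0)
    D = rng.standard_normal((50, 20))
    Y = rng.standard_normal((50, 10))
    P, Q, E = composite_sparse_coding(D, Y)
    print("nonzero rows in P:", int(np.sum(np.linalg.norm(P, axis=1) > 0)))

In a particle-filter tracker, each candidate would typically be scored by the reconstruction residual of D(P + Q), excluding the outlier term E; that weighting and resampling step is omitted from the sketch.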

© 2014 SPIE and IS&T

Citation

Bo Jin, Zhongliang Jing, Meng Wang, and Han Pan, "Robust visual multitask tracking via composite sparse model," J. Electron. Imaging 23(6), 063022 (Dec 24, 2014). http://dx.doi.org/10.1117/1.JEI.23.6.063022

