
Real-time tracking of deformable objects based on combined matching-and-tracking

Author Affiliations
Junhua Yan

Nanjing University of Aeronautics and Astronautics, College of Astronautics, Nanjing 210016, China

613th Research Institute of Aviation Industry Corporation of China, Science and Technology on Electro-optic Control Laboratory, Luoyang, Henan 471009, China

Zhigang Wang, Shunfei Wang

Nanjing University of Aeronautics and Astronautics, College of Astronautics, Nanjing 210016, China

J. Electron. Imaging. 25(2), 023011 (Mar 28, 2016). doi:10.1117/1.JEI.25.2.023011
History: Received July 4, 2015; Accepted March 2, 2016

Open Access

Abstract. Visual tracking is very challenging due to several sources of variation, such as partial occlusion, deformation, scale variation, rotation, and background clutter. A model-free tracking method based on fusing accelerated features in nonlinear scale spaces (AKAZE) and KLT features is presented. First, matching-keypoints are generated by finding keypoints in the consecutive frames that correspond to the object template; next, tracking-keypoints are generated using the forward–backward flow tracking method; finally, credible keypoints are obtained by the AKAZE-KLT tracking (AKT) algorithm. To avoid the instability of statistical methods, the median method is adopted to compute the object's location, scale, and rotation in each frame. The experimental results show that the AKT algorithm is strongly robust and achieves accurate tracking, especially under partial occlusion, scale variation, rotation, and deformation. The tracker shows high robustness and accuracy on a variety of datasets, and its average frame rate reaches 78 fps, demonstrating good real-time performance.


Visual object tracking, which is the process of estimating the motion parameters such as location, scale, and rotation of the object in an image sequence given the initial box in the first frame, is a popular problem in computer vision, with wide-ranging applications including visual navigation, military reconnaissance, and human–computer interaction.1,2 Although significant progress has been made in recent years, the problem is still difficult due to factors such as partial occlusion, deformation, scale variation, rotation, and background clutter.3 To solve these problems, numerous algorithms have been proposed.4–6

Online learning is one of the useful approaches that has been widely used to handle changes in object appearance. When some information about the objects to be tracked is known in advance, it is possible to employ prior knowledge to design the tracker. For many applications, however, nothing about the objects of interest is known beforehand, so no prior knowledge can be used. It is also impossible to employ offline machine learning techniques to achieve efficient tracking, because the appearance of an object is likely to vary due to its constant movements and to changing environmental conditions, such as varying levels of brightness.7,8 Instead, online learning algorithms have been employed to adapt the object model to the abovementioned uncertainties. In practice, however, updating a model often introduces errors, as it is difficult to explicitly assign hard class labels.

To efficiently track a constantly changing object and avoid the errors caused by online learning, a model that precisely represents the object is needed. Various forms of object representation are used in practice, for example points,9,10 contours,11,12 optical flow,13,14 or articulated models.15,16 Models that decompose the object into parts are more robust,17,18 as local changes only affect individual parts. Even when individual parts are lost or in an erroneous state, the other object parts can still represent the object well. Keypoints, such as SIFT,19 SURF,20 ORB,21 and AKAZE,22 are a representative kind of local feature that has been widely used in image fusion, object recognition, and other fields.

In this paper, a model-free tracking method based on fusing AKAZE and KLT features is proposed. The brief procedure is as follows: first, generate matching-keypoints by finding keypoints in the consecutive frames that correspond to the object template; then, generate tracking-keypoints using the forward–backward flow tracking method; and finally, obtain credible keypoints with the AKT fusion algorithm. To avoid the instability of statistical methods, the median method is adopted to compute the object's location, scale, and rotation in each frame.

AKAZE22 can be regarded as an improved version of the SIFT and SURF features and is a more stable feature detection algorithm. Traditional SIFT and SURF feature detection builds the scale space with a linear Gaussian pyramid. However, this kind of linear decomposition causes loss of accuracy, blurring of object edges, and loss of detail. To solve these problems, the AKAZE algorithm builds a nonlinear scale space, which is constructed with fast explicit diffusion (FED)23 so that any step length can be applied. Compared to SIFT and SURF, the computational complexity is greatly reduced and the robustness is improved. The following subsections describe in detail the construction of the nonlinear scale space using the FED scheme, the feature detection process, and the AKAZE feature description based on the modified-local difference binary (M-LDB) descriptor.

Building Nonlinear Scale Space

Similar to SIFT, the scale level of the nonlinear scale space increases logarithmically. The constructed scale space has O octaves and each octave has S sublevels. Octaves and sublevels are indexed by o and s, respectively, and their relationship to the scale parameter σ is given by

σ_i(o, s) = 2^(o + s/S), o ∈ [0, O−1], s ∈ [0, S−1], i ∈ [0, M−1], (1)

where M is the total number of filtered images. Since the nonlinear diffusion filter is defined in time units, the scale parameter σ_i, given in pixels, is converted to time units as

t_i = σ_i²/2, i ∈ [0, M], (2)

where t_i is called the evolution time. For each input image, a Gaussian filter is first applied and the gradient histogram of the image is then computed. The contrast factor is set to the 70th percentile of the gradient histogram. For two-dimensional (2-D) images, since the image derivative step is one pixel, the maximal step size τ_max is 0.25 without violating the stability condition. Using the set of evolution times t_i, all images of the scale space can then be obtained with the FED scheme.
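To make the mapping in Eqs. (1) and (2) concrete, the following minimal Python sketch enumerates the scale and evolution-time parameters for an assumed configuration of octaves and sublevels; the counts and variable names are illustrative, not values prescribed by the paper.

```python
def scale_space_parameters(num_octaves=4, num_sublevels=4):
    """Return (sigma_i, t_i) for every filtered image of the nonlinear scale space.

    num_octaves (O) and num_sublevels (S) are assumed values for illustration.
    """
    params = []
    for o in range(num_octaves):                      # octave index o in [0, O-1]
        for s in range(num_sublevels):                # sublevel index s in [0, S-1]
            sigma = 2.0 ** (o + s / num_sublevels)    # Eq. (1): sigma_i(o, s)
            t = 0.5 * sigma ** 2                      # Eq. (2): evolution time t_i
            params.append((sigma, t))
    return params

# The first few evolution times handed to the FED scheme:
for sigma, t in scale_space_parameters()[:4]:
    print(f"sigma = {sigma:.3f} px, t = {t:.3f}")
```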

Feature Detection

Feature detection in AKAZE is achieved by computing the scale-normalized Hessian local maxima of the filtered images in the nonlinear scale space. The Hessian response is calculated as

L_Hessian^i = σ_{i,norm}² (L_xx^i L_yy^i − L_xy^i L_xy^i), (3)

where σ_{i,norm} = σ_i / 2^{o_i} is the normalized scale. For computing the second-order derivatives, concatenated Scharr filters with step size σ_{i,norm} are applied. First, the maxima of the detector response are searched for in spatial location, checking that the response is higher than a predefined threshold and that it is a maximum in a window of 3×9 pixels over three adjacent sublevels. Finally, the 2-D position of the keypoint is estimated with subpixel accuracy by fitting a 2-D quadratic function to the determinant-of-Hessian response in a 3×3 pixel neighborhood and finding its maximum.

Feature Description

The diagram in Fig. 1, as supplied by Ref. 22, demonstrates the LDB24 and M-LDB tests between grid divisions around a keypoint. The intensity is represented by the colored grids and the gradients in x by the arrows. The feature description of the AKAZE algorithm is based on M-LDB, which exploits gradient and intensity information from the nonlinear scale space. There are two main improvements of M-LDB over LDB: (1) rotation invariance is obtained by estimating the main orientation of the keypoint, as is done in KAZE,25 and rotating the LDB grid accordingly; and (2) instead of using the average of all pixels inside each subdivision of the grid, the grid is subsampled in steps that are a function of the scale σ. This scale-dependent sampling in turn makes the descriptor robust to changes in scale.

Fig. 1: Binary test: (a) LDB and (b) M-LDB.
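As an illustration of this detection-and-description step (the authors' experiments use OpenCV 3.1.0, but the code below is our sketch, not their implementation), OpenCV's AKAZE class computes M-LDB descriptors by default, and the binary descriptors can be matched against the object template with a Hamming-distance matcher; the ratio-test value is a placeholder.

```python
import cv2

akaze = cv2.AKAZE_create()  # M-LDB descriptors are the default

def match_to_template(template_gray, frame_gray, ratio=0.8):
    """Return (template point, frame point) pairs that pass a ratio test.

    The ratio value 0.8 is an illustrative choice, not taken from the paper.
    """
    kps_t, des_t = akaze.detectAndCompute(template_gray, None)
    kps_f, des_f = akaze.detectAndCompute(frame_gray, None)
    if des_t is None or des_f is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)          # binary M-LDB -> Hamming
    knn = matcher.knnMatch(des_t, des_f, k=2)
    good = [m[0] for m in knn
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    return [(kps_t[m.queryIdx].pt, kps_f[m.trainIdx].pt) for m in good]
```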

Forward–Backward Flow Tracking

Because of environmental effects or changes in the object's appearance, KLT results often deviate, so an evaluation method is needed to judge the accuracy of the tracking results. Forward–backward error,26 which is based on the forward–backward continuity assumption, can effectively estimate the trajectory error of keypoints: if the object tracking is correct, the tracking results are independent of time.

As shown in Fig. 2, for two adjacent frames I_{t−1} and I_t, x_{t−1} is a random keypoint from the object template in frame I_{t−1}, x_t is the corresponding keypoint of x_{t−1} in frame I_t obtained by forward tracking, and x̂_{t−1} is the corresponding keypoint of x_t in frame I_{t−1} obtained by backward tracking. The forward–backward error is defined as the Euclidean distance between the two keypoints in frame I_{t−1}, i.e., e^FB_{t−1} = ‖x_{t−1} − x̂_{t−1}‖. If the error e^FB_{t−1} is bigger than a preset threshold, the keypoint is considered to be falsely tracked.

Fig. 2: Forward–backward error in two adjacent frames.

We represent each keypoint together with its forward–backward status as a pair (keypoint, status). The status is TRUE only if both the forward and the backward KLT steps succeed and the error e^FB_{t−1} is smaller than the Euclidean distance threshold; keypoints with TRUE status are called tracking-keypoints. The rest are called failed tracking-keypoints.
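A minimal sketch of this forward–backward check with OpenCV's pyramidal KLT is given below; the error threshold is an assumed placeholder and the function name is ours.

```python
import cv2
import numpy as np

def forward_backward_track(prev_gray, curr_gray, points, fb_thresh=2.0):
    """Track points forward and backward with KLT and keep only those whose
    forward-backward error is below fb_thresh (an illustrative threshold)."""
    pts = np.float32(points).reshape(-1, 1, 2)
    fwd, st_f, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    bwd, st_b, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, fwd, None)
    fb_err = np.linalg.norm(pts - bwd, axis=2).reshape(-1)     # e_FB per keypoint
    status = (st_f.reshape(-1) == 1) & (st_b.reshape(-1) == 1) & (fb_err < fb_thresh)
    return fwd.reshape(-1, 2), status   # tracking-keypoints and their TRUE/FALSE status
```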

Model of AKT

When calculating the homography between the initial keypoints and the current keypoints with the traditional AKAZE algorithm, robust statistical methods such as RANSAC and LMEDS are usually adopted. However, when the number of outliers is too large, homography estimation gives poor results. In this paper, we therefore put forward a tracking model called AKT, which fundamentally eliminates false matching-keypoints and reduces the proportion of outliers, effectively solving the problem of inaccurate parameter estimation.

The diagram in Fig. 3 demonstrates how the AKT algorithm fuses the matching-keypoints and the tracking-keypoints. The collection V_a is composed of the matching-keypoints P_ai^t(x, y) in the t'th frame that correspond to keypoints in the object template, obtained by the AKAZE matching algorithm; these matching-keypoints are represented by black circles in Fig. 3. The collection V_k is composed of the tracking-keypoints P_ki^t(x, y) in the t'th frame that correspond to keypoints in the object template, obtained by the KLT algorithm; these tracking-keypoints are represented by gray circles in Fig. 3. There is a one-to-one correspondence between matching-keypoints and tracking-keypoints. Keypoints enclosed by the curve are the credible keypoints in the t'th frame, which contribute to calculating the object's location, scale, and rotation. The remaining keypoints are outliers and are deleted. The credible keypoints are obtained by fusing matching-keypoints and tracking-keypoints, and their collection is V.

Sort the Euclidean distances l_i^t between the i'th pair of matching- and tracking-keypoints in the t'th frame in descending order. Experiments show that the optimal maximum allowable deviation threshold l_Th^t lies at the 0.26 position of the distance sequence, because enough credible keypoints are then retained while the obvious false matching-keypoints are removed; in other words, the 74% of pairs with the smallest deviations are treated as valid matches. Take the keypoint P_i^t(x, y) as the center of a square patch M_i^t whose width and height are a; M_i denotes the corresponding patch in the object template. The degree of similarity between two patches is defined as

α(M_i, M_i^t) = 0.5 [β_NCC(M_i, M_i^t) + 1], (4)

where β_NCC is the normalized cross-correlation coefficient. Let α_Th be the minimum allowed similarity threshold. The set V of credible keypoints is then composed of three parts: (1) when the Euclidean distance between the i'th pair of matching- and tracking-keypoints satisfies l_i^t ≤ l_Th^t, the keypoints P_ai^t(x, y) ∈ V; (2) when l_i^t > l_Th^t, either the AKAZE matching or the KLT tracking may have caused the excessively large deviation, so mistakenly rejected credible keypoints can be recovered by checking similarity, namely if α(M_i, M_i^t) > α_Th for the matching-keypoint, then P_ai^t(x, y) ∈ V; and (3) if α(M_i, M_i^t) > α_Th for the tracking-keypoint, then P_ki^t(x, y) ∈ V.
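The three fusion cases can be sketched as follows; the patch size a, the deviation threshold l_th, and the similarity threshold alpha_th are placeholders, and the NCC is computed with OpenCV's normalized correlation coefficient as an illustrative choice for β_NCC.

```python
import cv2
import numpy as np

def patch(gray, pt, a=15):
    """Square patch of side a (illustrative size) centered on pt, clipped at borders."""
    x, y = int(round(pt[0])), int(round(pt[1]))
    h, w = gray.shape[:2]
    r = a // 2
    return gray[max(0, y - r):min(h, y + r + 1), max(0, x - r):min(w, x + r + 1)]

def similarity(patch_a, patch_b):
    """alpha = 0.5 * (beta_NCC + 1), Eq. (4); patches must have the same size."""
    if patch_a.shape != patch_b.shape or patch_a.size == 0:
        return 0.0
    beta = cv2.matchTemplate(patch_a, patch_b, cv2.TM_CCOEFF_NORMED)[0, 0]
    return 0.5 * (float(beta) + 1.0)

def fuse_keypoints(template_gray, frame_gray, tpl_pts, match_pts, track_pts,
                   l_th, alpha_th=0.7):
    """Keep the matching-keypoint when the pair deviation is small (case 1);
    otherwise admit whichever keypoint is similar enough to the template (cases 2-3)."""
    credible = []
    for p0, pa, pk in zip(tpl_pts, match_pts, track_pts):
        dev = np.hypot(pa[0] - pk[0], pa[1] - pk[1])            # deviation l_i^t
        if dev <= l_th:
            credible.append(pa)
            continue
        if similarity(patch(template_gray, p0), patch(frame_gray, pa)) > alpha_th:
            credible.append(pa)
        if similarity(patch(template_gray, p0), patch(frame_gray, pk)) > alpha_th:
            credible.append(pk)
    return credible
```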

Bounding Box

The traditional way to calculate the homography is a statistical method such as RANSAC or LMEDS. However, experiments show that homography estimation gives poor results for nonplanar objects, even when the keypoint association is performed correctly.27 In this paper, the median method is therefore put forward to compute the object's location, scale, and rotation in each frame.

As shown in Fig. 4, P_center(x, y) and P_center^t(x, y) represent the centers of the initial template and of the object's bounding box in the t'th frame, respectively. P_i(x, y) and P_i^t(x, y) represent the credible keypoints of the initial template and those in the t'th frame. θ_n and θ_n^t represent the angle between the i'th and (i+1)'th keypoints in the initial template and the corresponding angle in the t'th frame, and d_n and d_n^t, respectively, represent the Euclidean distance between the keypoints in the initial template and that in the t'th frame. With the following equations, the relative change of position, scale, and rotation angle can be calculated:

d_center^t(x, y) = median(P_i^t(x, y) − P_i(x, y)), i ∈ [1, N], (5)

s_center^t = median(d_n^t / d_n), n ∈ [1, (N−1)!], (6)

θ_center^t = median(θ_n^t − θ_n), n ∈ [1, (N−1)!], (7)

where median denotes the function that computes the median. Let the four vertex coordinates of the initial tracking box be P_ri(x, y), i = [1, 4], and let their offsets relative to the center of the initial tracking box be P_di(x, y), i = [1, 4]. In the t'th frame, the vertex coordinates of the tracking box can then be obtained by the following equations:

P_center^t(x, y) = P_center(x, y) + d_center^t(x, y), (8)

x_rotate^t = cos θ_center^t · x_Pdi − sin θ_center^t · y_Pdi, (9)

y_rotate^t = cos θ_center^t · y_Pdi + sin θ_center^t · x_Pdi, (10)

P_ri^t(x, y) = P_center^t(x, y) + s_center^t · P_rotate^t(x_rotate^t, y_rotate^t), i = [1, 4], (11)

where x_rotate^t and y_rotate^t, respectively, represent the x- and y-coordinates after rotation, and P_ri^t(x, y) are the four vertex coordinates of the tracking box in the t'th frame. The tracking box B = (b_1, b_2, …, b_n) of each frame can be obtained through the above calculation.
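A numpy sketch of the median computation and the box update is shown below; it takes all keypoint pairs for the scale and rotation medians, which is one reasonable reading of Eqs. (6) and (7), and all names are ours, not the paper's.

```python
import numpy as np
from itertools import combinations

def update_box(tpl_pts, cur_pts, tpl_center, tpl_vertices):
    """Median displacement, scale, and rotation (Eqs. (5)-(7)) and the rotated,
    scaled box vertices (Eqs. (8)-(11)). Pairing over all keypoint combinations
    is an illustrative choice."""
    tpl_pts = np.asarray(tpl_pts, float)
    cur_pts = np.asarray(cur_pts, float)
    d_center = np.median(cur_pts - tpl_pts, axis=0)                   # Eq. (5)

    ratios, dthetas = [], []
    for i, j in combinations(range(len(tpl_pts)), 2):
        v0, v1 = tpl_pts[j] - tpl_pts[i], cur_pts[j] - cur_pts[i]
        d0, d1 = np.linalg.norm(v0), np.linalg.norm(v1)
        if d0 > 0:
            ratios.append(d1 / d0)
        dthetas.append(np.arctan2(v1[1], v1[0]) - np.arctan2(v0[1], v0[0]))
    scale = np.median(ratios)                                          # Eq. (6)
    theta = np.median(dthetas)                                         # Eq. (7)

    center = np.asarray(tpl_center, float) + d_center                  # Eq. (8)
    c, s = np.cos(theta), np.sin(theta)
    verts = []
    for v in np.asarray(tpl_vertices, float):
        dx, dy = v - np.asarray(tpl_center, float)                     # offset P_di
        rotated = np.array([c * dx - s * dy, c * dy + s * dx])         # Eqs. (9)-(10)
        verts.append(center + scale * rotated)                         # Eq. (11)
    return center, scale, theta, np.array(verts)
```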

Fig. 4: The median method to get the object's location, scale, and rotation.

Algorithm Procedure

Given a sequence of images I_1, …, I_n and an initializing region b_1 in I_1, our aim in each frame of the sequence is to recover the box of the object of interest. The steps of the AKT algorithm (Algorithm 1) are as follows:

Algorithm 1: Fusing AKAZE-KLT tracking.

We evaluated the proposed AKT tracking algorithm, based on fusing AKAZE and KLT, using sequences supplied by Ref. 28 with challenging factors including partial occlusion, drastic illumination changes, nonrigid deformation, background clutter, and motion blur. We compared the proposed AKT tracker with seven state-of-the-art methods: tracking-learning-detection (TLD),14 the compressive tracker (CT),29 the context tracker (CXT),30 color-based probabilistic tracking (CPF),31 structured output tracking with kernels (Struck),32 the multiple instance learning tracker (MIL),33 and the circulant structure of tracking with kernels (CSK).34 All data in the experimental results and the quantitative evaluation are based on the unified dataset and the same initial state conditions. Since our algorithm focuses primarily on the challenges of partial occlusion, deformation, rotation, and scale variation, in the following discussions we include only eight of the videos that mainly contain these challenges and neglect the others. Additionally, the precision and success rate results are based on 22 videos, of which the well-tracked ones are shown in Fig. 5 and Table 1. The experimental environment is Visual Studio 2013 with OpenCV 3.1.0, on a machine with a 2.00-GHz dual-core processor, a 64-bit operating system, and 32 GB of installed memory.

Fig. 5: The tracking results of the AKT algorithm on different sequences: (a) FaceOcc1, (b) FaceOcc2, (c) Jogging1, (d) Jogging2, (e) Mhyang, (f) Sylvester, (g) Walking, and (h) Walking2.

Table 1: The CLE and average frames per second (pixel/fps).

A range of measures is available in previous research for assessing the performance of tracking algorithms quantitatively. Many authors employ the center-error measure, which expresses the distance between the centroid of the algorithmic output and the centroid of the ground truth. This measure is only a rough assessment of localization and, since it is not bounded, makes the comparison of results obtained on different sequences difficult. We therefore also employed the widely used overlap measure

o(b_T, b_GT) = |b_T ∩ b_GT| / |b_T ∪ b_GT|, (12)

where b_T is the tracker output, b_GT is the manually annotated bounding box, ∩ denotes the intersection of the two boxes, and ∪ denotes their union. The overlap rate, bounded between 0 and 1, is a better indicator of per-frame success.35
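For axis-aligned boxes stored as (x, y, w, h), the overlap of Eq. (12) can be computed as in the sketch below; the box format is our assumption.

```python
def overlap(box_t, box_gt):
    """Overlap o(b_T, b_GT) of Eq. (12) for axis-aligned boxes given as (x, y, w, h)."""
    xt, yt, wt, ht = box_t
    xg, yg, wg, hg = box_gt
    iw = max(0.0, min(xt + wt, xg + wg) - max(xt, xg))   # width of the intersection
    ih = max(0.0, min(yt + ht, yg + hg) - max(yt, yg))   # height of the intersection
    inter = iw * ih
    union = wt * ht + wg * hg - inter
    return inter / union if union > 0 else 0.0
```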

Since the rotation is not considered in the ground truth of the benchmarks, it is excluded in the overlap comparisons between our results and the benchmarks.

Accuracy Comparison of Methods for Tracking

The tracking performance of the AKT algorithm on different datasets28 is shown in Fig. 5. Sequences (a) and (b) mainly contain partial occlusion. Sequences (c) and (d) mainly contain deformation. Sequences (e) and (f) mainly contain in-plane and out-of-plane rotation. Sequences (g) and (h) mainly contain scale variation. The results show that, across these different situations, the AKT algorithm tracks the object accurately and is very robust.

Although the AKT algorithm shows good tracking results in these videos, there are still some challenges that are hard to deal with. Since the AKT algorithm is based on keypoints, when the object’s appearance is smooth or the texture is not rich, it may struggle, as shown in Fig. 6(a). Also, when the object’s appearance is almost or totally changed, the tracking box may drift. For example, the initial object is the face, but when the person turns around, it is hard to track because of the changed appearance, as shown in Fig. 6(b).

Fig. 6: The AKT algorithm suffers from textureless objects and changed appearance: (a) the tracking box is given falsely because of too few keypoints and (b) the tracking box drifts because of the changed appearance.

Performance Comparison of Methods for Tracking

The center location error (CLE) and average frames per second (fps) of the AKT algorithm and the other seven tracking algorithms are shown in Table 1 (bold fonts indicate the best or second-best performance); the results of the other seven tracking methods on the different sequences come from Ref. 26. Table 1 shows that, over the eight datasets, the frame rate of the AKT algorithm is 77.9 fps, demonstrating high real-time performance (its average fps ranks in the top two 7 times), and that it achieves high tracking accuracy with an average CLE of 11.9 pixels (its average CLE ranks in the top two 5 times); overall, its tracking performance is better than that of the other seven methods.

The CLE is defined as the average Euclidean distance between the center locations of the tracking boxes produced by our method and the manually labeled ground truths; the average CLE over all the frames of one sequence summarizes the overall performance on that sequence. The precision plot shows the percentage of frames whose estimated location is within a given threshold distance T_th of the ground truth, as shown in Fig. 7(a). The results show that the precision of AKT tracking is higher than that of the other algorithms and similar to Struck.

Fig. 7: (a) Precision and (b) success rate.

To measure the success rate on a sequence of frames, we count the number of successful frames whose overlap o is larger than a given threshold T_th. The success plot shows the ratio of successful frames as the threshold varies from 0 to 1, as shown in Fig. 7(b). The results show that the AKT algorithm is superior to the other algorithms.
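Both curves can be generated from per-frame center errors and overlaps as sketched below; the threshold grids are illustrative choices, not the benchmark's exact settings.

```python
import numpy as np

def precision_and_success(centers_t, centers_gt, overlaps,
                          dist_thresholds=np.arange(0, 51),
                          ov_thresholds=np.linspace(0.0, 1.0, 21)):
    """Precision plot: fraction of frames whose center error is within each distance
    threshold. Success plot: fraction of frames whose overlap o exceeds each threshold."""
    errs = np.linalg.norm(np.asarray(centers_t, float) -
                          np.asarray(centers_gt, float), axis=1)
    ov = np.asarray(overlaps, float)
    precision = np.array([(errs <= t).mean() for t in dist_thresholds])
    success = np.array([(ov > t).mean() for t in ov_thresholds])
    return precision, success
```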

Error Comparison of Methods for Homography Estimation

In order to evaluate the different methods for homography estimation, we developed our own dataset because the data supplied by Ref. 26 do not include rotation. We randomly selected a total of 200 frames as original frames and then transformed them using the affine model shown in Eq. (13):

[x′, y′]^T = s [cos α, −sin α; sin α, cos α] [x, y]^T + [dx, dy]^T, (13)

where [x, y]^T is the coordinate of a point in the original frame, [x′, y′]^T is the coordinate of the corresponding point in the transformed frame, and s, α, and [dx, dy]^T, respectively, represent the scale, rotation, and displacement of the affine model. After the transformation, we obtain a dataset composed of original frames and transformed frames with known affine homography.
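One way to generate such a transformed frame from Eq. (13) with OpenCV is sketched below; the parameter values passed by the caller are arbitrary examples, not the paper's settings.

```python
import cv2
import numpy as np

def affine_transform(frame, s, alpha_deg, dx, dy):
    """Apply the affine model of Eq. (13): rotate by alpha, scale by s, shift by (dx, dy)."""
    alpha = np.deg2rad(alpha_deg)
    A = s * np.array([[np.cos(alpha), -np.sin(alpha)],
                      [np.sin(alpha),  np.cos(alpha)]])
    M = np.hstack([A, [[dx], [dy]]]).astype(np.float32)   # 2x3 affine matrix
    h, w = frame.shape[:2]
    return cv2.warpAffine(frame, M, (w, h))

# Example: scale 1.1, rotate 15 deg, shift (5, -3) pixels
# transformed = affine_transform(original, 1.1, 15.0, 5.0, -3.0)
```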

Then, under the condition that the keypoints of the original frames and those of the transformed frames are the same, we calculate the errors of displacement (pixel), scale (1), and rotation (deg) to obtain the error figures (LMEDS in red, RANSAC in blue, MEDIAN in green), as shown in Fig. 8. The independent variable of the error figures is the frame index, and the dependent variable is the error.

Fig. 8: Comparison of methods for homography estimation: (a) similarly accurate results for homography estimation, (b) LMEDS and RANSAC give poor results while MEDIAN gives a good result, (c) errors of x-coordinate displacement, (d) errors of y-coordinate displacement, (e) errors of scale, and (f) errors of rotation.

The average error (AE) is used as the first evaluation criterion, as shown in Table 2. Some estimates contain obvious noise caused by gross estimation errors, so to better compare the methods for homography estimation, we use the average error without noise (AEN) as the second evaluation criterion. Based on the error figures, we set 100 pixels as the location noise threshold, 10 as the scale noise threshold, and 150 deg as the rotation noise threshold. The lower the AE and AEN, the better the performance of the homography estimation method; the smaller the difference between AE and AEN, the more stable the method. The experimental results therefore show not only that the median method is more stable, having no apparent noise, but also that its AE and AEN values are lower than those of the traditional statistical methods.

Table 2: The AE and AEN of center location, scale, and rotation (pixel/1/deg).

Selection of Threshold for Tracking Results

The ratio of the number of inliers to the total number of matching-keypoints is called the inlier ratio (IR). The larger the IR, the better the estimation of homographies. We require the location error of two corresponding keypoints to be less than 2.5 pixels, i.e., ‖F_b − H(F_a)‖ < 2.5, where H is the true homography between the frames, F_a is the location of keypoint a in the original frame, and F_b is the location of the corresponding keypoint b in the transformed frame. A keypoint meeting this condition is called an inlier. To find the threshold for better tracking, we again use the dataset introduced in Sec. 4.3, with the total number of frames changed to 2000. We calculate the IR for these corresponding frames and obtain a mean IR of 0.74, as shown in Fig. 9. Therefore, we set the optimal value l_Th^t for tracking according to the mean IR to avoid outliers.
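The IR computation under a known homography can be sketched as follows; the point arrays and the 2.5-pixel tolerance follow the definition above, and the function name is ours.

```python
import numpy as np

def inlier_ratio(pts_a, pts_b, H, tol=2.5):
    """Fraction of correspondences with ||F_b - H(F_a)|| < tol pixels (the IR)."""
    pts_a = np.asarray(pts_a, float)
    pts_b = np.asarray(pts_b, float)
    ones = np.ones((len(pts_a), 1))
    proj = np.hstack([pts_a, ones]) @ H.T      # project F_a through the homography
    proj = proj[:, :2] / proj[:, 2:3]          # back to inhomogeneous coordinates
    err = np.linalg.norm(pts_b - proj, axis=1)
    return float((err < tol).mean())
```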

In this paper, the AKT algorithm is put forward to reduce the excess of outliers arising in the traditional AKAZE matching-and-tracking algorithm and to solve the problems caused by poor homography estimates produced by statistical methods. Experimental results on different datasets show that the AKT algorithm can deal with challenges such as partial occlusion, deformation, scale variation, rotation, and background clutter, showing high real-time performance and accuracy. However, since the tracking method is based on keypoints, the effectiveness of the AKT algorithm may be reduced when the object's appearance is smooth and its texture is not rich. In future work, we will address these problems.

This work was supported by the National Natural Science Foundation of China (Grant No. 61471194), the Science and Technology on Electro-optic Control Laboratory and Aeronautical Science Foundation of China (Grant No. 20135152049), the CASC (China Aerospace Science and Technology Corporation) Aerospace Science and Technology Innovation Foundation Project, the Fundamental Research Funds for the Central Universities, and the Nanjing University of Aeronautics and Astronautics Graduate School Innovation Base (Laboratory) Open Foundation Program (Grant No. kfjj20151505).

References

1. Yilmaz A., Javed O., and Shah M., "Object tracking: a survey," ACM Comput. Surv. 38(4), 13 (2006).
2. Cannons K., "A review of visual tracking," Technical Report CSE-2008-07, Department of Computer Science and Engineering, York University, Toronto, Canada (2008).
3. Maggio E. and Cavallaro A., Video Tracking: Theory and Practice, Wiley, Hoboken, New Jersey (2011).
4. Lee T. K. et al., "Reliable tracking algorithm for multiple reference frame motion estimation," J. Electron. Imaging 20(3), 033003 (2011).
5. Smeulders A. W. et al., "Visual tracking: an experimental survey," IEEE Trans. Pattern Anal. Mach. Intell. 36(7), 1442–1468 (2014).
6. Junhua Y. et al., "Real-time tracking of targets with complex state based on ICT algorithm," J. Huazhong Univ. Sci. Technol. (Natural Sci. Ed.) 43(3), 107–112 (2015).
7. Saffari A. et al., "On-line random forests," in IEEE 12th Int. Conf. on Computer Vision Workshops, pp. 1393–1400 (2009).
8. Babenko B., Yang M. H., and Belongie S., "Robust object tracking with online multiple instance learning," IEEE Trans. Pattern Anal. Mach. Intell. 33(8), 1619–1632 (2011).
9. Sand P. and Teller S., "Particle video: long-range motion estimation using point trajectories," Int. J. Comput. Vision 80(1), 72–91 (2008).
10. Nebehay G. and Pflugfelder R., "Consensus-based matching and tracking of keypoints for object tracking," in IEEE Winter Conf. on Applications of Computer Vision, pp. 862–869, IEEE (2014).
11. Bibby C. and Reid I., "Robust real-time visual tracking using pixel-wise posteriors," in European Conf. on Computer Vision (2008).
12. Bibby C. and Reid I., "Real-time tracking of multiple occluding objects using level sets," in IEEE Conf. on Computer Vision and Pattern Recognition, pp. 1307–1314 (2010).
13. Brox T. et al., "High accuracy optical flow estimation based on a theory for warping," in European Conf. on Computer Vision, pp. 25–36 (2004).
14. Kalal Z., Mikolajczyk K., and Matas J., "Tracking-learning-detection," IEEE Trans. Pattern Anal. Mach. Intell. 34(7), 1409–1422 (2012).
15. Ramanan D., Forsyth D. A., and Zisserman A., "Tracking people by learning their appearance," IEEE Trans. Pattern Anal. Mach. Intell. 29(1), 65–81 (2007).
16. Buehler P. et al., "Long term arm and hand tracking for continuous sign language TV broadcasts," in British Machine Vision Conf. (2008).
17. Adam A., Rivlin E., and Shimshoni I., "Robust fragments-based tracking using the integral histogram," in IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, pp. 798–805, IEEE (2006).
18. Nejhum S. M. S., Ho J., and Yang M. H., "Online visual tracking with histograms and articulating blocks," Comput. Vision Image Understanding 114(8), 901–914 (2010).
19. Lowe D. G., "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vision 60(2), 91–110 (2004).
20. Bay H., Tuytelaars T., and Van Gool L., "SURF: speeded up robust features," in European Conf. on Computer Vision, pp. 404–417, Springer, Berlin Heidelberg (2006).
21. Rublee E. et al., "ORB: an efficient alternative to SIFT or SURF," in Int. Conf. on Computer Vision, pp. 2564–2571 (2011).
22. Alcantarilla P., Nuevo J., and Bartoli A., "Fast explicit diffusion for accelerated features in nonlinear scale spaces," in Proc. British Machine Vision Conf., pp. 1–11 (2013).
23. Grewenig S., Weickert J., and Bruhn A., "From box filtering to fast explicit diffusion," in Pattern Recognition, pp. 533–542, Springer, Berlin Heidelberg (2010).
24. Yang X. and Cheng K. T., "LDB: an ultra-fast feature for scalable augmented reality on mobile devices," in IEEE Int. Symp. on Mixed and Augmented Reality, pp. 49–57, IEEE (2012).
25. Alcantarilla P. F., Bartoli A., and Davison A. J., "KAZE features," in European Conf. on Computer Vision, pp. 214–227, Springer, Berlin Heidelberg (2012).
26. Kalal Z., Mikolajczyk K., and Matas J., "Forward-backward error: automatic detection of tracking failures," in 20th Int. Conf. on Pattern Recognition, pp. 2756–2759, IEEE (2010).
27. Nebehay G. and Pflugfelder R., "Consensus-based matching and tracking of keypoints for object tracking," in IEEE Winter Conf. on Applications of Computer Vision, pp. 862–869 (2014).
28. Wu Y., Lim J., and Yang M. H., "Online object tracking: a benchmark," in Computer Vision and Pattern Recognition, pp. 2411–2418 (2013).
29. Zhang K., Zhang L., and Yang M. H., "Real-time compressive tracking," in European Conf. on Computer Vision, pp. 864–877, Springer, Firenze, Italy (2012).
30. Dinh T. B., Vo N., and Medioni G., "Context tracker: exploring supporters and distracters in unconstrained environments," in IEEE Conf. on Computer Vision and Pattern Recognition, pp. 1177–1184 (2011).
31. Pérez P. et al., "Color-based probabilistic tracking," in European Conf. on Computer Vision, pp. 661–675, Springer, Berlin Heidelberg (2002).
32. Hare S., Saffari A., and Torr P. H., "Struck: structured output tracking with kernels," in Int. Conf. on Computer Vision, pp. 263–270 (2011).
33. Babenko B., Yang M. H., and Belongie S., "Visual tracking with online multiple instance learning," in IEEE Conf. on Computer Vision and Pattern Recognition, pp. 983–990 (2009).
34. Henriques J. F. et al., "Exploiting the circulant structure of tracking-by-detection with kernels," in European Conf. on Computer Vision, pp. 702–715, Springer, Berlin Heidelberg (2012).
35. Hemery B., Laurent H., and Rosenberger C., "Comparative study of metrics for evaluation of object localisation by bounding boxes," in Fourth Int. Conf. on Image and Graphics, pp. 459–464, IEEE (2007).

Junhua Yan is an assistant professor at Nanjing University of Aeronautics and Astronautics and a visiting researcher at the Science and Technology on Electro-Optic Control Laboratory. She received her BSc, MSc, and PhD degrees from Nanjing University of Aeronautics and Astronautics in 1993, 2001, and 2004, respectively. She is the author of more than 30 journal papers and holds 5 patents. Her current research interests include multisource information fusion and target detection, tracking, and recognition.

Zhigang Wang received his BSc degree from Nanjing University of Aeronautics and Astronautics in 2013. He is now an MSc candidate at Nanjing University of Aeronautics and Astronautics. His main research direction is object detection and tracking.

Shunfei Wang received his BSc degree from Nanjing University of Aeronautics and Astronautics in 2014. He is now an MSc candidate at Nanjing University of Aeronautics and Astronautics. His main research direction is object detection and tracking.

© 2016 The Authors
