Shreyamsha Kumar, B. K. ORCID: https://orcid.org/0000-0002-3781-0635, Swamy, M.N.S. ORCID: https://orcid.org/0000-0002-3989-5476 and Ahmad, M. Omair ORCID: https://orcid.org/0000-0002-2924-6659 (2018) Visual Tracking Based on Correlation Filter and Robust Coding in Bilateral 2DPCA Subspace. IEEE Access, 6 (1). pp. 73052-73067. ISSN 2169-3536
Official URL: https://doi.org/10.1109/ACCESS.2018.2881723
Abstract
The success of correlation filters in visual tracking has attracted much attention in computer vision owing to their high efficiency and performance. However, they are not equipped with a mechanism to cope with challenging situations such as scale variations, out-of-view targets, and camera motion. To deal with such situations, a collaborative tracking scheme based on discriminative and generative models is proposed. Instead of finding all the affine motion parameters of the target through the combined likelihood of these models, correlation filters, based on the discriminative model, are used to find the position of the target, whereas 2D robust coding in a bilateral 2DPCA subspace, based on the generative model, is used to find the remaining affine motion parameters. Further, a 2D robust coding distance is proposed to differentiate the candidate samples from the subspace and is used to compute the observation likelihood in the generative model. In addition, a robust occlusion map is generated from the weights obtained during the residual minimization, and a novel update mechanism of the appearance model is proposed for both the correlation filters and the bilateral 2DPCA subspace. The proposed method is evaluated on the challenging image sequences in the OTB-50, VOT2016, and UAV20L benchmark datasets, and its performance is compared with that of state-of-the-art tracking algorithms. In contrast to OTB-50 and VOT2016, the UAV20L dataset contains long-duration sequences with additional challenges introduced by both camera motion and viewpoints in three dimensions. Quantitative and qualitative performance evaluations on the three benchmark datasets demonstrate that the proposed tracking algorithm outperforms the state-of-the-art methods.
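The localization step described above relies on a discriminative correlation filter. The minimal Python sketch below illustrates only that generic step (a MOSSE-style filter in the spirit of Bolme et al. [31]), not the authors' full tracker: the bilateral 2DPCA subspace, robust coding, occlusion map, and model-update mechanism are omitted, and all function names and parameters (e.g. `gaussian_response`, `sigma`, `lam`) are hypothetical choices made here for illustration.

```python
import numpy as np

def gaussian_response(shape, sigma=2.0):
    # Desired correlation output: a 2-D Gaussian peaked at (0, 0) with circular wrap,
    # so the peak of the actual response directly encodes the target shift.
    h, w = shape
    ys = np.minimum(np.arange(h), h - np.arange(h))[:, None]
    xs = np.minimum(np.arange(w), w - np.arange(w))[None, :]
    return np.exp(-(ys ** 2 + xs ** 2) / (2.0 * sigma ** 2))

def train_filter(patch, response, lam=1e-2):
    # Closed-form MOSSE-style filter in the Fourier domain:
    # H* = (G . conj(F)) / (F . conj(F) + lam), with lam a small regularizer.
    F = np.fft.fft2(patch)
    G = np.fft.fft2(response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def locate(H_conj, patch):
    # Correlate a new search patch with the filter and return the peak offset,
    # interpreted as the target displacement (circular-shift convention).
    resp = np.real(np.fft.ifft2(H_conj * np.fft.fft2(patch)))
    h, w = resp.shape
    dy, dx = np.unravel_index(np.argmax(resp), resp.shape)
    dy = dy - h if dy > h // 2 else dy
    dx = dx - w if dx > w // 2 else dx
    return dy, dx

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    template = rng.standard_normal((64, 64))                  # stand-in for a grayscale target patch
    H_conj = train_filter(template, gaussian_response(template.shape))
    shifted = np.roll(template, shift=(3, -5), axis=(0, 1))   # simulated target motion
    print(locate(H_conj, shifted))                            # expected: (3, -5)
```

In a full tracker this filter would only supply the translation estimate; the remaining affine parameters and the appearance update would come from the generative (bilateral 2DPCA) component described in the abstract.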
Divisions: Concordia University > Gina Cody School of Engineering and Computer Science > Electrical and Computer Engineering
Item Type: Article
Refereed: Yes
Authors: Shreyamsha Kumar, B. K. and Swamy, M.N.S. and Ahmad, M. Omair
Journal or Publication: IEEE Access
Date: 19 December 2018
Funders:
Digital Object Identifier (DOI): 10.1109/ACCESS.2018.2881723
Keywords: Visual tracking, weighted least squares, principal component analysis (PCA), bilateral 2DPCA (B2DPCA), occlusion map, correlation filters.
ID Code: 984781
Deposited By: M. OMAIR AHMAD
Deposited On: 21 Dec 2018 18:07
Last Modified: 21 Dec 2018 18:07
References:
[1] A. Yilmaz, O. Javed, and M. Shah, "Object tracking: A survey,'' ACM Comput. Surv., vol. 38, no. 4, pp. 1-45, 2006.
[2] H. Yang, L. Shao, F. Zheng, L. Wang, and Z. Song, "Recent advances and trends in visual tracking: A review,'' Neurocomputing, vol. 74, no. 18, pp. 3823-3831, Nov. 2011.
[3] D. Wang, H. Lu, and M.-H. Yang, "Online object tracking with sparse prototypes,'' IEEE Trans. Image Process., vol. 22, no. 1, pp. 314-325, Jan. 2013.
[4] Y. Wu, J. Lim, and M.-H. Yang, "Online object tracking: A benchmark,'' in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2013, pp. 2411-2418.
[5] D. Comaniciu, V. Ramesh, and P. Meer, "Kernel-based object tracking,'' IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 5, pp. 564-577, May 2003.
[6] X. Mei and H. Ling, "Robust visual tracking using L1-minimization,'' in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Sep./Oct. 2009, pp. 1436-1443.
[7] C. Bao, Y. Wu, H. Ling, and H. Ji, "Real time robust L1-tracker using accelerated proximal gradient approach,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recogn. (CVPR), Jun. 2012, pp. 1830-1837.
[8] B. K. S. Kumar, M. N. S. Swamy, and M. O. Ahmad, "Structural local DCT sparse appearance model for visual tracking,'' in Proc. IEEE Int. Symp. Circuits Syst. (ISCAS), May 2015, pp. 1194-1197.
[9] X. Jia, H. Lu, and M.-H. Yang, "Visual tracking via adaptive structural local sparse appearance model,'' in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2012, pp. 1822-1829.
[10] A. Adam, E. Rivlin, and I. Shimshoni, "Robust fragments-based tracking using the integral histogram,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., New York, NY, USA, Jun. 2006, pp. 798-805.
[11] J. Yang, R. Xu, J. Cui, and Z. Ding, "Robust visual tracking using adaptive local appearance model for smart transportation,'' Multimedia Tools Appl., vol. 75, no. 24, pp. 17487-17500, Dec. 2016.
[12] B. K. S. Kumar, M. N. S. Swamy, and M. O. Ahmad, "Visual tracking using structural local DCT sparse appearance model with occlusion detection,'' Multimedia Tools Appl., pp. 1-24, 2018.
[13] D. A. Ross, J. Lim, R.-S. Lin, and M.-H. Yang, "Incremental learning for robust visual tracking,'' Int. J. Comput. Vis., vol. 77, no. 1-3, pp. 125-141, 2008.
[14] B. K. S. Kumar, M. N. S. Swamy, and M. O. Ahmad, "Weighted residual minimization in PCA subspace for visual tracking,'' in Proc. IEEE Int. Symp. Circuits Syst. (ISCAS), May 2016, pp. 986-989.
[15] H. Wang, H. Ge, and S. Zhang, "Object tracking via 2DPCA and L2-regularization,'' J. Elect. Comput. Eng., vol. 2016, Jul. 2016, Art. no. 7975951.
[16] D. Wang and H. Lu, "Object tracking via 2DPCA and L1-regularization,'' IEEE Signal Process. Lett., vol. 19, no. 11, pp. 711-714, Nov. 2012.
[17] B. K. S. Kumar, M. N. S. Swamy, and M. O. Ahmad, "Visual tracking via bilateral 2DPCA and robust coding,'' in Proc. IEEE Can. Conf. Elect. Comput. Eng. (CCECE), May 2016, pp. 1-4.
[18] P. Qu, "Visual tracking with fragments-based PCA sparse representation,'' Int. J. Signal Process., Image Process. Pattern Recognit., vol. 7, no. 2, pp. 23-34, Feb. 2014.
[19] M. Sun, D. Du, H. Lu, and L. Zhang, "Visual tracking with a structured local model,'' in Proc. IEEE Int. Conf. Image Process. (ICIP), Sep. 2015, pp. 2855-2859.
[20] B. Babenko, M.-H. Yang, and S. Belongie, "Visual tracking with online multiple instance learning,'' in Proc. CVPR, Jun. 2009, pp. 983-990.
[21] H. Grabner, C. Leistner, and H. Bischof, "Semi-supervised on-line boosting for robust tracking,'' in Proc. Eur. Conf. Comput. Vis. (ECCV), Oct. 2008, pp. 234-247.
[22] F. Wang, J. Zhang, Q. Guo, P. Liu, and D. Tu, "Robust visual tracking via discriminative structural sparse feature,'' in Proc. Chin. Conf. Image Graph. Technol., Jun. 2015, pp. 438-446.
[23] W. Zhong, H. Lu, and M.-H. Yang, "Robust object tracking via sparse collaborative appearance model,'' IEEE Trans. Image Process., vol. 23, no. 5, pp. 2356-2368, May 2014.
[24] C. Xie, J. Tan, P. Chen, J. Zhang, and L. He, "Collaborative object tracking model with local sparse representation,'' J. Vis. Commun. Image Represent., vol. 25, no. 2, pp. 423-434, 2014.
[25] B. Zhuang, L. Wang, and H. Lu, "Visual tracking via shallow and deep collaborative model,'' Neurocomputing, vol. 218, pp. 61-71, Dec. 2016.
[26] H. Zhang, F. Tao, and G. Yang, "Robust visual tracking based on structured sparse representation model,'' Multimedia Tools Appl., vol. 74, no. 3, pp. 1021-1043, 2015.
[27] C. Ma, J.-B. Huang, X. Yang, and M.-H. Yang, "Hierarchical convolutional features for visual tracking,'' in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Dec. 2015, pp. 3074-3082.
[28] Y. Qi, S. Zhang, L. Qin, H. Yao, Q. Huang, J. Lim, and M.-H. Yang, "Hedged deep tracking,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2016, pp. 4303-4311.
[29] M. Danelljan, A. Robinson, F. S. Khan, and M. Felsberg, "Beyond correlation filters: Learning continuous convolution operators for visual tracking,'' in Proc. Eur. Conf. Comput. Vis. (ECCV), Oct. 2016, pp. 472-488.
[30] H. Nam and B. Han, "Learning multi-domain convolutional neural networks for visual tracking,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2016, pp. 4293-4302.
[31] D. S. Bolme, J. R. Beveridge, B. A. Draper, and Y. M. Lui, "Visual object tracking using adaptive correlation filters,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2010, pp. 2544-2550.
[32] J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, "Exploiting the circulant structure of tracking-by-detection with kernels,'' in Proc. Eur. Conf. Comput. Vis. (ECCV), Oct. 2012, pp. 702-715.
[33] J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, "High-speed tracking with kernelized correlation filters,'' IEEE Trans. Pattern Anal. Mach. Intell., vol. 37, no. 3, pp. 583-596, Mar. 2015.
[34] M. Danelljan, F. S. Khan, M. Felsberg, and J. van de Weijer, "Adaptive color attributes for real-time visual tracking,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recogn. (CVPR), Jun. 2014, pp. 1090-1097.
[35] M. Danelljan, G. Häger, F. Khan, and M. Felsberg, "Accurate scale estimation for robust visual tracking,'' in Proc. Brit. Mach. Vis. Conf. (BMVC), Sep. 2014, pp. 1-11.
[36] T. Zhang, A. Bibi, and B. Ghanem, "In defense of sparse tracking: Circulant sparse tracker,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2016, pp. 3880-3888.
[37] T. Wang, I. Y. H. Gu, and P. Shi, "Object tracking using incremental 2D-PCA learning and ML estimation,'' in Proc. IEEE Int. Conf. Acoust., Speech Signal Process. (ICASSP), Apr. 2007, pp. I-933-I-936.
[38] H. Kong, L. Wang, E. K. Teoh, X. Li, J.-G. Wang, and R. Venkateswarlu, "Generalized 2D principal component analysis for face image representation and recognition,'' Neural Netw., vol. 18, nos. 5-6, pp. 585-594, Jul. 2005.
[39] M.-X. Jiang, M. Li, and H.-Y. Wang, "Visual object tracking based on 2DPCA and ML,'' Math. Problems Eng., vol. 2013, May 2013, Art. no. 404978.
[40] M. Yang, L. Zhang, J. Yang, and D. Zhang, "Robust sparse coding for face recognition,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recogn. (CVPR), Jun. 2011, pp. 625-632.
[41] J. Yan and M. Tong, "Weighted sparse coding residual minimization for visual tracking,'' in Proc. Vis. Commun. Image Process. (VCIP), Nov. 2011, pp. 1-4.
[42] M. Jiang, H. Wang, and B. Wang, "Robust visual tracking based on maximum likelihood estimation,'' Int. J. Digit. Content Technol. Appl., vol. 6, no. 22, pp. 467-474, Dec. 2012.
[43] S. A. Siena, "Improving the design and use of correlation filters in visual tracking,'' Ph.D. dissertation, Dept. Elect. Comput. Eng., Carnegie Mellon Univ., Pittsburgh, PA, USA, 2017.
[44] M. Isard and A. Blake, "CONDENSATION: Conditional density propagation for visual tracking,'' Int. J. Comput. Vis., vol. 29, no. 1, pp. 5-28, Aug. 1998.
[45] M. J. Black and A. D. Jepson, "EigenTracking: Robust matching and tracking of articulated objects using a view-based representation,'' Int. J. Comput. Vis., vol. 26, no. 1, pp. 63-84, 1998.
[46] D. Wang, H. Lu, and M.-H. Yang, "Robust visual tracking via least softthreshold squares,'' IEEE Trans. Circuits Syst. Video Technol., vol. 26, no. 9, pp. 1709-1721, Sep. 2016.
[47] M. Kristan, A. Leonardis, J. Matas, M. Felsberg, and R. Pflugfelder, "The visual object tracking VOT2016 challenge results,'' in Proc. Eur. Conf. Comput. Vis. (ECCV), Oct. 2016, pp. 777-823.
[48] M. Mueller, N. Smith, and B. Ghanem, "A benchmark and simulator for UAV tracking,'' in Proc. Eur. Conf. Comput. Vis. (ECCV), Oct. 2016, pp. 445-461.
[49] Y. Sui, Y. Tang, L. Zhang, and G. Wang, "Visual tracking via subspace learning: A discriminative approach,'' Int. J. Comput. Vis., vol. 126, no. 5, pp. 515-536, 2018.
[50] D. Wang, H. Lu, and C. Bo, "Visual tracking via weighted local cosine similarity,'' IEEE Trans. Cybern., vol. 45, no. 9, pp. 1838-1850, Sep. 2015.
[51] D. Wang, H. Lu, Z. Xiao, and M.-H. Yang, "Inverse sparse tracker with a locally weighted distance metric,'' IEEE Trans. Image Process., vol. 24, no. 9, pp. 2646-2657, Sep. 2015.
[52] D. Wang, H. Lu, and C. Bo, "Fast and robust object tracking via probability continuous outlier model,'' IEEE Trans. Image Process., vol. 24, no. 12, pp. 5166-5176, Dec. 2015.
[53] N. Wang and D.-Y. Yeung, "Learning a deep compact image representation for visual tracking,'' in Proc. Adv. Neural Inf. Process. Syst., 2013, pp. 809-817.
[54] Y. Sui, G. Wang, L. Zhang, and M.-H. Yang, "Exploiting spatial-temporal locality of tracking via structured dictionary learning,'' IEEE Trans. Image Process., vol. 27, no. 3, pp. 1282-1296, Mar. 2018.
[55] Y. Sui and L. Zhang, "Visual tracking via locally structured Gaussian process regression,'' IEEE Signal Process. Lett., vol. 22, no. 9, pp. 1331-1335, Sep. 2015.
[56] Y. Sui, Y. Tang, and L. Zhang, "Discriminative low-rank tracking,'' in Proc. IEEE Int. Conf. Comput. Vis., Dec. 2015, pp. 3002-3010.