
Visual Tracking Algorithms using Different Object Representation Schemes

Bidare Kantharajappa, Shreyamsha Kumar (ORCID: https://orcid.org/0000-0002-3781-0635) (2019) Visual Tracking Algorithms using Different Object Representation Schemes. PhD thesis, Concordia University.

Bidare Kantharajappa_PhD_F2019.pdf - Accepted Version (PDF, 30MB)
Available under License Spectrum Terms of Access.

Abstract

Visual tracking, one of the fundamental and most challenging areas in computer vision, has attracted much attention in the research community during the past decade due to its broad range of real-life applications. Even after three decades of research, it remains a challenging problem because of the complexities involved in searching for the target under intrinsic and extrinsic variations of the object's appearance. Existing trackers fail when the appearance of the object varies considerably or when the object undergoes severe occlusion, scale change, in-plane or out-of-plane rotation, motion blur, fast motion, out-of-view motion or illumination variation, whether individually or simultaneously. For reliable and improved tracking performance, these appearance variations must be handled carefully, so that the appearance model adapts to the intrinsic variations while remaining robust to the extrinsic ones. The objective of this thesis is to develop visual object tracking algorithms that address the deficiencies of existing algorithms and enhance the tracking performance, by investigating different object representation schemes for modeling the object appearance and by devising mechanisms to update the observation models.

First, a tracking algorithm is proposed that is based on a global appearance model using robust coding and on its collaboration with a local model. A global PCA subspace is used to model the global appearance of the object, and the optimum PCA basis coefficients and the global weight matrix are estimated by developing an iteratively reweighted robust coding (IRRC) technique. This global model is then combined with the local model to exploit their individual merits. Global and local robust coding distances are introduced to find the candidate sample whose appearance is most similar to that of the sample reconstructed from the subspace, and these distances are used to define the observation likelihood. A robust occlusion map generation scheme and a mechanism to update both the global and local observation models are also developed. Quantitative and qualitative performance evaluations on OTB-50 and VOT2016, two popular benchmark datasets, demonstrate that the proposed algorithm with histogram of oriented gradients (HOG) features generally performs better than the state-of-the-art methods considered. Despite this good performance, the tracking still needs improvement for some of the challenging attributes of OTB-50 and VOT2016.
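To make the IRRC step concrete, the following is a minimal numerical sketch of iteratively reweighted robust coding in a PCA subspace: the basis coefficients are re-estimated by weighted least squares while the per-pixel weights are driven by the residuals. The Huber-style weight rule, the MAD scale estimate, and the 0.5 occlusion threshold below are illustrative assumptions, not the weight function or thresholds derived in the thesis.

```python
import numpy as np

def irrc_pca(y, U, mu, n_iters=5, eps=1e-8):
    """Sketch of IRRC: alternate a weighted least-squares estimate of
    the PCA coefficients with a residual-driven update of the
    per-pixel weights (the diagonal of the weight matrix W)."""
    y, U, mu = (np.asarray(a, dtype=float) for a in (y, U, mu))
    w = np.ones_like(y)                       # per-pixel weights
    for _ in range(n_iters):
        # c = (U^T W U)^{-1} U^T W (y - mu), with W = diag(w)
        Uw = U * w[:, None]
        c = np.linalg.solve(Uw.T @ U + eps * np.eye(U.shape[1]),
                            Uw.T @ (y - mu))
        e = y - mu - U @ c                    # reconstruction residual
        s = 1.4826 * np.median(np.abs(e)) + eps     # robust (MAD) scale
        w = np.minimum(1.0, s / (np.abs(e) + eps))  # down-weight outliers
    occ_map = w < 0.5           # pixels judged unreliable / occluded
    dist = np.sum(w * e**2)     # one choice of robust coding distance
    return c, w, occ_map, dist
```

Pixels that end up with small weights can be flagged in an occlusion map, and a weighted residual norm of this kind plays the role of the robust coding distance inside the observation likelihood.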

A second tracking algorithm is therefore developed to improve the performance for the above-mentioned challenging attributes. The algorithm is designed around a structural local 2DDCT sparse appearance model and an occlusion handling mechanism. In the structural local 2DDCT sparse appearance model, the energy compaction property of the transform is exploited to reduce the size of the dictionary as well as that of the candidate samples in the object representation, so that the computational cost of the l_1-minimization can be reduced. This strategy is in contrast to that of the existing models, which use raw pixels. A procedure is presented for reconstructing the holistic image from the overlapped local patches obtained from the dictionary and the sparse codes, and the reconstructed holistic image is then used for robust occlusion detection and occlusion map generation. The occlusion map so obtained is used to develop a novel observation model update mechanism that avoids model degradation. A patch occlusion ratio is employed in the calculation of the confidence score to improve the tracking performance. Quantitative and qualitative performance evaluations on the two above-mentioned benchmark datasets demonstrate that this second tracking algorithm generally performs better than several state-of-the-art methods as well as the first proposed method. Despite its improved performance, there remain some challenging attributes of OTB-50 and VOT2016 for which the performance needs to be improved.
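As a rough illustration of this algorithm's two main computational steps, the sketch below first truncates the 2D-DCT of a local patch to its low-frequency coefficients (the energy-compaction step that shrinks both the dictionary columns and the candidate features) and then solves the l_1-coding problem with a basic ISTA loop. The truncation size k, the penalty lam, and the use of ISTA as the solver are assumptions made for illustration; the thesis's own dictionary construction and minimization details are not reproduced here.

```python
import numpy as np
from scipy.fft import dctn

def patch_features(patch, k=4):
    """Keep only the k x k low-frequency 2D-DCT coefficients of a
    local patch (energy compaction), shrinking the representation
    from patch.size values down to k*k."""
    return dctn(np.asarray(patch, dtype=float), norm='ortho')[:k, :k].ravel()

def sparse_code(x, D, lam=0.01, n_iters=200):
    """Basic ISTA loop for min_a 0.5*||x - D a||^2 + lam*||a||_1,
    where the columns of D are truncated-DCT features of the
    overlapped local patches forming the dictionary."""
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    for _ in range(n_iters):
        z = a - D.T @ (D @ a - x) / L    # gradient step on the data term
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a
```

A holistic image can then be rebuilt by inverse-transforming the patch reconstructions D @ a and averaging the overlapped regions, with large patch reconstruction errors signalling occlusion.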

Finally, a third tracking algorithm is proposed by developing a scheme for collaboration between discriminative and generative appearance models. The discriminative model is used to estimate the position of the target, and a new generative model is used to find the remaining affine parameters of the target. In the generative model, robust coding is extended to two dimensions and employed in the bilateral two-dimensional PCA (2DPCA) reconstruction procedure, with an IRRC technique developed to handle non-Gaussian or non-Laplacian residuals. A 2D robust coding distance is introduced to differentiate the candidate sample from the one reconstructed from the subspace, and it is used to compute the observation likelihood in the generative model. A method of generating a robust occlusion map from the weights obtained during IRRC is developed, together with a novel mechanism for updating the observation models of both the kernelized correlation filters and the bilateral 2DPCA subspace. Quantitative and qualitative performance evaluations on the two datasets demonstrate that this algorithm with HOG features generally outperforms the state-of-the-art methods and the other two proposed algorithms for most of the challenging attributes.
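The generative step of this third algorithm can be pictured with the following sketch of 2D robust coding in a bilateral 2DPCA subspace: a candidate image is reconstructed through left and right projection matrices, and pixels with large 2D residuals are iteratively down-weighted. The Gaussian-style weight rule and the in-painting of down-weighted pixels with the current reconstruction are illustrative assumptions, not the exact IRRC update of the thesis.

```python
import numpy as np

def robust_b2dpca(X, U, V, n_iters=5, eps=1e-8):
    """Sketch of 2D robust coding in a bilateral 2DPCA subspace:
    X (m x n) is reconstructed through the left and right projection
    matrices U (m x p) and V (n x q), while pixels with large 2D
    residuals are iteratively down-weighted."""
    X = np.asarray(X, dtype=float)
    W = np.ones_like(X)                  # per-pixel weight matrix
    Xr = X.copy()
    for _ in range(n_iters):
        C = U.T @ Xr @ V                 # bilateral 2DPCA coefficients
        Xhat = U @ C @ V.T               # reconstruction
        E = X - Xhat                     # 2D residual
        s = 1.4826 * np.median(np.abs(E)) + eps
        W = np.exp(-(E / s) ** 2)        # Gaussian-style robust weights
        Xr = W * X + (1.0 - W) * Xhat    # replace unreliable pixels
    dist = np.sum(W * E ** 2)            # 2D robust coding distance (one choice)
    return C, W, dist
```

Thresholding W yields an occlusion map, and the observation likelihood can be taken proportional to exp(-dist); the target position itself comes from the discriminative correlation-filter model.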

Divisions:Concordia University > Gina Cody School of Engineering and Computer Science > Electrical and Computer Engineering
Concordia University > Research Units > Centre for Signal Processing and Communications
Item Type:Thesis (PhD)
Authors:Bidare Kantharajappa, Shreyamsha Kumar
Institution:Concordia University
Degree Name:Ph.D.
Program:Electrical and Computer Engineering
Date:5 June 2019
Thesis Supervisor(s):Swamy, M.N.S. and Ahmad, M. Omair
Keywords:Visual tracking, Particle filters, Weighted least squares, Principal component analysis (PCA), Global PCA, Robust coding, non-Gaussian residuals, non-Laplacian residuals, Iteratively reweighted robust coding, Robust coding distance, Local PCA, Occlusion map, Local 2DDCT sparse appearance model, Overlapped local patches, Holistic image reconstruction, Reconstruction error, Correlation filters, Bilateral 2DPCA (B2DPCA), 2D robust coding, 2D robust coding distance.
ID Code:985643
Deposited By: SHREYAMSHA BIDARE KANTHARAJAPPA
Deposited On:14 Nov 2019 18:16
Last Modified:14 Nov 2019 18:16
Additional Information:Funders: 1. Natural Sciences and Engineering Research Council of Canada (NSERC) 2. Regroupement Stratégique en Microsystèmes du Québec (ReSMiQ) 3. Ministère de l’Éducation, de l’Enseignement Supérieur et de la Recherche (MEESR) du Québec

