Deep visual tracking: Review and experimental comparison

Cited by: 414
Authors
Li, Peixia [1 ]
Wang, Dong [1 ]
Wang, Lijun [1 ]
Lu, Huchuan [1 ]
Affiliations
[1] Dalian Univ Technol, Fac Elect Informat & Elect Engn, Sch Informat & Commun Engn, Dalian, Peoples R China
Keywords
Visual tracking; Deep learning; CNN; RNN; Pre-training; Online learning; OBJECT TRACKING; NEURAL-NETWORKS;
DOI
10.1016/j.patcog.2017.11.007
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Recently, deep learning has achieved great success in visual tracking. The goal of this paper is to review the state-of-the-art tracking methods based on deep learning. First, we introduce the background of deep visual tracking, including the fundamental concepts of visual tracking and related deep learning algorithms. Second, we categorize the existing deep-learning-based trackers into three classes according to network structure, network function and network training. For each class, we explain its perspective on network design and analyze the representative papers in that category. Then, we conduct extensive experiments to compare the representative methods on the popular OTB-100, TC-128 and VOT2015 benchmarks. Based on our observations, we conclude that: (1) The usage of the convolutional neural network (CNN) model could significantly improve the tracking performance. (2) The trackers using the CNN model to distinguish the tracked object from its surrounding background could obtain more accurate results, while those using the CNN model for template matching are usually faster. (3) The trackers with deep features perform much better than those with low-level hand-crafted features. (4) Deep features from different convolutional layers have different characteristics, and their effective combination usually results in a more robust tracker. (5) The deep visual trackers using end-to-end networks usually perform better than the trackers merely using feature extraction networks. (6) For visual tracking, the most suitable network training method is to pre-train networks with video information and online fine-tune them with subsequent observations. Finally, we summarize our manuscript, highlight our insights, and point out further trends for deep visual tracking. (C) 2017 Elsevier Ltd. All rights reserved.
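The template-matching trackers discussed in conclusion (2) score how well a fixed target template matches each location of a search region, then take the best-scoring offset. As a minimal illustration only (not the paper's implementation), the sketch below runs normalized cross-correlation over raw 2-D arrays; in the trackers surveyed, CNN feature maps would stand in for these arrays, and the correlation is typically a single fast convolution, which is why such trackers tend to be fast.

```python
import numpy as np

def ncc_response(template, search_region):
    """Slide the template over the search region and return a map of
    normalized cross-correlation scores (higher = better match)."""
    th, tw = template.shape
    sh, sw = search_region.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t) + 1e-8
    response = np.zeros((sh - th + 1, sw - tw + 1))
    for i in range(response.shape[0]):
        for j in range(response.shape[1]):
            patch = search_region[i:i + th, j:j + tw]
            p = patch - patch.mean()
            # Cosine similarity between zero-mean template and patch.
            response[i, j] = (t * p).sum() / (t_norm * (np.linalg.norm(p) + 1e-8))
    return response

def track(template, search_region):
    """Return the (row, col) offset of the best match in the search region."""
    response = ncc_response(template, search_region)
    return np.unravel_index(response.argmax(), response.shape)
```

A quick usage check: cut a patch out of a random "frame" and verify the tracker locates it at its known offset.

```python
rng = np.random.default_rng(0)
search = rng.standard_normal((32, 32))
template = search[10:18, 5:13].copy()
print(track(template, search))  # the true offset, (10, 5)
```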
Pages: 323-338
Page count: 16