Two-Stage Learning to Predict Human Eye Fixations via SDAEs

Cited by: 114
Authors
Han, Junwei [1 ]
Zhang, Dingwen [1 ]
Wen, Shifeng [1 ]
Guo, Lei [1 ]
Liu, Tianming [2 ]
Li, Xuelong [3 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Automat, Xian 710072, Peoples R China
[2] Univ Georgia, Dept Comp Sci, Athens, GA 30602 USA
[3] Chinese Acad Sci, Xian Inst Opt & Precis Mech, State Key Lab Transient Opt & Photon, Ctr OPT IMagery Anal & Learning, Xian 710119, Peoples R China
Funding
US National Science Foundation; Specialized Research Fund for the Doctoral Program of Higher Education, Ministry of Education of China;
Keywords
Deep networks; eye fixation prediction; saliency detection; stacked denoising autoencoders (SDAEs); VISUAL SALIENCY; OBJECT DETECTION; RETRIEVAL; ATTENTION; AUTOENCODERS; FRAMEWORK; MODEL;
DOI
10.1109/TCYB.2015.2404432
CLC number
TP [Automation and Computer Technology];
Discipline code
0812;
Abstract
Saliency detection models aiming to quantitatively predict human eye-attended locations in the visual field have been receiving increasing research interest in recent years. Unlike traditional methods that rely on hand-designed features and contrast inference mechanisms, this paper proposes a novel framework to learn saliency detection models from raw image data using deep networks. The proposed framework mainly consists of two learning stages. At the first learning stage, we develop a stacked denoising autoencoder (SDAE) model to learn robust, representative features from raw image data in an unsupervised manner. The second learning stage aims to jointly learn optimal mechanisms to capture the intrinsic mutual patterns as the feature contrast and to integrate them for final saliency prediction. Taking as input pairs of a center patch and its surrounding patches, represented by the features learned at the first stage, an SDAE network is trained under the supervision of eye fixation labels, achieving both contrast inference and contrast integration simultaneously. Experiments on three publicly available eye tracking benchmarks and comparisons with 16 state-of-the-art approaches demonstrate the effectiveness of the proposed framework.
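The abstract's first stage, greedy unsupervised pretraining of stacked denoising autoencoders, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the class and function names (`DenoisingAutoencoder`, `pretrain_stack`), the toy data, and all hyperparameters are illustrative assumptions; only the general technique (masking corruption, tied-weight reconstruction, layerwise stacking) follows the SDAE literature.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DenoisingAutoencoder:
    """One SDAE layer: reconstruct the clean input from a corrupted copy."""

    def __init__(self, n_in, n_hidden, corruption=0.3, lr=0.5):
        self.W = rng.normal(0.0, 0.1, (n_in, n_hidden))  # tied encoder/decoder weights
        self.b = np.zeros(n_hidden)  # hidden bias
        self.c = np.zeros(n_in)      # visible (reconstruction) bias
        self.corruption = corruption
        self.lr = lr

    def encode(self, x):
        return sigmoid(x @ self.W + self.b)

    def train_step(self, x):
        n = x.shape[0]
        # Masking noise: zero out a random fraction of each input vector.
        x_tilde = x * (rng.random(x.shape) > self.corruption)
        h = self.encode(x_tilde)
        x_hat = sigmoid(h @ self.W.T + self.c)
        # Backprop of the squared reconstruction error through the tied decoder.
        d_out = (x_hat - x) * x_hat * (1.0 - x_hat)
        d_hid = (d_out @ self.W) * h * (1.0 - h)
        self.W -= self.lr * (x_tilde.T @ d_hid + d_out.T @ h) / n
        self.b -= self.lr * d_hid.mean(axis=0)
        self.c -= self.lr * d_out.mean(axis=0)
        return float(np.mean((x_hat - x) ** 2))

def pretrain_stack(X, layer_sizes, epochs=300):
    """Greedy layerwise pretraining: each layer is trained on the codes
    produced by the layer below, then frozen (the unsupervised first stage)."""
    layers, codes = [], X
    for n_hidden in layer_sizes:
        dae = DenoisingAutoencoder(codes.shape[1], n_hidden)
        for _ in range(epochs):
            dae.train_step(codes)
        layers.append(dae)
        codes = dae.encode(codes)
    return layers, codes

# Toy data with redundant structure (groups of identical columns),
# standing in for raw image patches.
X = np.repeat(rng.integers(0, 2, (200, 4)), 4, axis=1).astype(float)
layers, features = pretrain_stack(X, [12, 8])
print(features.shape)  # (200, 8)
```

In the paper's second stage, a further SDAE would be trained with supervision from eye-fixation labels on center/surround patch pairs encoded by `features`; that supervised step is omitted here.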
Pages: 487-498
Page count: 12
Cited references (60 total)
[21] Han, J.; Zhou, P.; Zhang, D.; Cheng, G.; Guo, L.; Liu, Z.; Bu, S.; Wu, J. Efficient, simultaneous detection of multi-class geospatial targets based on visual saliency modeling and discriminative learning of sparse coding. ISPRS Journal of Photogrammetry and Remote Sensing, 2014, 89: 37-48.
[22] Han, J.; He, S.; Qian, X.; Wang, D.; Guo, L.; Liu, T. An object-oriented visual saliency detection framework based on sparse coding representations. IEEE Transactions on Circuits and Systems for Video Technology, 2013, 23(12): 2009-2021.
[23] Han, J.W.; Ngan, K.N.; Li, M.J.; Zhang, H.H. Unsupervised extraction of visual attention objects in color images. IEEE Transactions on Circuits and Systems for Video Technology, 2006, 16(1): 141-145.
[24] Harel, J. Advances in Neural Information Processing Systems, 2007: 545. DOI: 10.7551/mitpress/7503.003.0073.
[25] He, K.M. Lecture Notes in Computer Science, 2014, 8691: 346. DOI: 10.1007/978-3-319-10578-9_23; arXiv:1406.4729.
[26] Hinton, G.E. Advances in Neural Information Processing Systems, 1993: 3.
[27] Hinton, G.E.; Osindero, S.; Teh, Y.-W. A fast learning algorithm for deep belief nets. Neural Computation, 2006, 18(7): 1527-1554.
[28] Hou, W.; Gao, X.; Tao, D.; Li, X. Blind image quality assessment via deep learning. IEEE Transactions on Neural Networks and Learning Systems, 2015, 26(6): 1275-1286.
[29] Hou, X. Proceedings of Advances in Neural Information Processing Systems, 2008: 681.
[30] Hou, X. IEEE Conference on Computer Vision and Pattern Recognition, 2007: 1. DOI: 10.1109/CVPR.2007.383267.