Multimodal Deep Autoencoder for Human Pose Recovery

Cited by: 524
Authors
Hong, Chaoqun [1 ]
Yu, Jun [2 ]
Wan, Jian [2 ]
Tao, Dacheng [3 ,4 ]
Wang, Meng [5 ]
Affiliations
[1] Xiamen Univ Technol, Coll Comp & Informat Engn, Xiamen 361024, Peoples R China
[2] Hangzhou Dianzi Univ, Sch Comp Sci, Hangzhou 310018, Zhejiang, Peoples R China
[3] Univ Technol Sydney, Ctr Quantum Computat & Intelligent Syst, Ultimo, NSW 2007, Australia
[4] Univ Technol Sydney, Fac Engn & Informat Technol, Ultimo, NSW 2007, Australia
[5] Hefei Univ Technol, Sch Comp Sci & Informat Engn, Hefei 230009, Peoples R China
Funding
National Natural Science Foundation of China; Australian Research Council
Keywords
Human pose recovery; deep learning; multi-modal learning; hypergraph; back propagation; 3D human pose; recognition; tracking; points
DOI
10.1109/TIP.2015.2487860
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Code
140502 [Artificial Intelligence]
Abstract
Video-based human pose recovery is usually conducted by retrieving relevant poses using image features. In the retrieval process, most traditional methods assume the mapping between 2D images and 3D poses to be linear. However, the relationship is inherently non-linear, which limits the recovery performance of these methods. In this paper, we propose a novel pose recovery method that learns a non-linear mapping with a multi-layered deep neural network. It is based on feature extraction with multimodal fusion and back-propagation deep learning. In multimodal fusion, we construct a hypergraph Laplacian with low-rank representation, and obtain a unified feature description by standard eigen-decomposition of the hypergraph Laplacian matrix. In back-propagation deep learning, we learn a non-linear mapping from 2D images to 3D poses with parameter fine-tuning. Experimental results on three data sets show that the recovery error is reduced by 20%-25%, which demonstrates the effectiveness of the proposed method.
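The multimodal fusion step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a k-nearest-neighbor hyperedge construction and the standard normalized hypergraph Laplacian (Zhou et al.), and it omits the low-rank representation and the back-propagation deep learning stage; the function names (`hypergraph_laplacian`, `fuse_features`) and all parameters are hypothetical.

```python
import numpy as np

def hypergraph_laplacian(H, w=None):
    """Normalized hypergraph Laplacian L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}.
    H: (n_vertices, n_edges) incidence matrix; w: optional hyperedge weights."""
    n, m = H.shape
    w = np.ones(m) if w is None else w
    dv = H @ w                       # vertex degrees
    de = H.sum(axis=0)               # hyperedge degrees
    Dv_isqrt = np.diag(1.0 / np.sqrt(dv))
    De_inv = np.diag(1.0 / de)
    Theta = Dv_isqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_isqrt
    return np.eye(n) - Theta

def fuse_features(feature_views, k_neighbors=3, dim=2):
    """Fuse multiple feature modalities: build one hypergraph per view
    (each sample spawns a hyperedge over itself and its k nearest neighbors),
    sum the per-view Laplacians, and embed samples via the eigenvectors
    of the combined Laplacian with the smallest non-trivial eigenvalues."""
    n = feature_views[0].shape[0]
    L = np.zeros((n, n))
    for X in feature_views:
        D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # pairwise distances
        H = np.zeros((n, n))
        for i in range(n):
            nbrs = np.argsort(D[i])[:k_neighbors + 1]  # self + k neighbors
            H[nbrs, i] = 1.0
        L += hypergraph_laplacian(H)
    evals, evecs = np.linalg.eigh(L)
    return evecs[:, 1:dim + 1]       # skip the near-constant first eigenvector

# Toy usage: two synthetic feature modalities for 20 samples.
rng = np.random.default_rng(0)
view1 = rng.normal(size=(20, 16))    # e.g. a HOG-like descriptor
view2 = rng.normal(size=(20, 8))     # e.g. a shape-context-like descriptor
Z = fuse_features([view1, view2])
print(Z.shape)                       # (20, 2) unified feature description
```

In the paper's pipeline, the rows of such a unified embedding would then feed the multi-layered network that is fine-tuned by back-propagation to map image features to 3D poses.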
Pages: 5659-5670
Page count: 12
Cited References
43 in total
[1] Agarwal A, Triggs B. Recovering 3D human pose from monocular images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(1): 44-58.
[2] [Anonymous], 2005, Proc. CVPR IEEE.
[3] [Anonymous], 2010, ICML, June 21-24.
[4] [Anonymous], 2008, Proc. ICML. DOI 10.1145/1390156.1390294.
[5] [Anonymous], 2012, Tech. Univ. Denmark.
[6] Belongie S, Malik J, Puzicha J. Shape matching and object recognition using shape contexts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002, 24(4): 509-522.
[7] Bengio Y, 2006, Advances in Neural Information Processing Systems, 19. DOI 10.7551/MITPRESS/7503.003.0024.
[8] Bengio Y. Learning Deep Architectures for AI. Foundations and Trends in Machine Learning, 2009, 2(1): 1-127.
[9] Bo L, Sminchisescu C. Twin Gaussian Processes for Structured Prediction. International Journal of Computer Vision, 2010, 87(1-2): 28-52.
[10] Brand M, 1999, ICCV, 2: 1237. DOI 10.1109/ICCV.1999.790422.