2D-LPP: A two-dimensional extension of locality preserving projections

Cited by: 144
Authors
Chen, Sibao
Zhao, Haifeng
Kong, Min
Luo, Bin [1 ]
Affiliations
[1] Anhui Univ, Minist Educ, Key Lab Intelligent Comp & Signal Proc, Hefei 230039, Peoples R China
[2] Univ Sci & Technol China, Dept Elect Engn & Informat Sci, Anhui USTC Iflytek Lab, Hefei 230027, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
locality preserving projection (LPP); two-dimensional projection; linear projection; dimensionality reduction;
DOI
10.1016/j.neucom.2006.10.032
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
We consider the problem of locality preserving projections (LPP) in the two-dimensional sense. LPP was recently proposed for dimensionality reduction; it can detect the intrinsic manifold structure of data and preserve local information. When matrix data such as images are concerned, they are usually vectorized before the LPP algorithm is applied to find the intrinsic manifold structure. Because the dimension of vectorized matrix data is usually very high, LPP often cannot be implemented due to the singularity of the resulting matrices. In this paper, we propose a method called two-dimensional locality preserving projections (2D-LPP) for image recognition, which operates directly on 2D image matrices rather than on the 1D vectors used by conventional LPP. Through an algebraic derivation, we show that 2D-LPP is related to two other linear projection methods that also operate directly on image matrices: 2D-PCA and 2D-LDA. 2D-PCA and 2D-LDA preserve the Euclidean structure of image space, whereas 2D-LPP finds an embedding that preserves local information and detects the intrinsic image manifold structure. To evaluate the performance of 2D-LPP, several experiments are conducted on the ORL face database, the Yale face database and a digit dataset. The high recognition rates and speed show that 2D-LPP achieves better performance than 2D-PCA and 2D-LDA. Experiments even show that applying PCA after 2D-LPP achieves higher recognition rates than LPP at the same feature-space dimension. (c) 2006 Elsevier B.V. All rights reserved.
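The abstract describes projecting 2D image matrices directly while preserving a locality graph. A minimal sketch of that idea, assuming a heat-kernel k-nearest-neighbour affinity graph and a small diagonal regularizer; the function name `two_d_lpp`, the parameters `k` and `t`, and the eigensolver details are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np
from scipy.linalg import eigh

def two_d_lpp(images, d=3, k=5, t=None):
    """Illustrative 2D-LPP sketch: images has shape (N, m, n); returns an
    (n, d) projection W so each image projects as Y_i = X_i @ W."""
    N, m, n = images.shape
    flat = images.reshape(N, -1)
    # pairwise Euclidean distances for the k-nearest-neighbour graph
    dist = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=2)
    if t is None:                       # heat-kernel width heuristic (assumption)
        t = np.mean(dist[dist > 0] ** 2)
    S = np.zeros((N, N))
    for i in range(N):
        nbrs = np.argsort(dist[i])[1:k + 1]
        S[i, nbrs] = np.exp(-dist[i, nbrs] ** 2 / t)
    S = np.maximum(S, S.T)              # symmetrize the affinity matrix
    deg = S.sum(axis=1)
    # n x n scatter matrices built by right-multiplying image matrices,
    # the 2D analogue of LPP's X^T D X and X^T S X
    SD = np.zeros((n, n))               # sum_i D_ii X_i^T X_i
    SW = np.zeros((n, n))               # sum_{i,j} S_ij X_i^T X_j
    for i in range(N):
        SD += deg[i] * images[i].T @ images[i]
        for j in range(N):
            if S[i, j] > 0.0:
                SW += S[i, j] * images[i].T @ images[j]
    L = SD - SW                         # Laplacian-style scatter matrix
    # locality is preserved by the smallest generalized eigenvectors;
    # the tiny ridge keeps the right-hand matrix positive definite
    vals, vecs = eigh(L, SD + 1e-8 * np.eye(n))
    return vecs[:, :d]
```

Each image is then reduced from m x n to m x d via `Y_i = images[i] @ W`, and recognition can proceed by nearest-neighbour matching on the projected matrices, which is why the method avoids the high-dimensional vectorization (and the resulting matrix singularity) that hampers conventional LPP.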
Pages: 912 - 921
Page count: 10