Robust Principal Component Analysis?

Cited by: 5301
Authors
Candes, Emmanuel J. [1 ,2 ]
Li, Xiaodong [1 ,2 ]
Ma, Yi [3 ,4 ]
Wright, John [4 ]
Affiliations
[1] Stanford Univ, Dept Math, Stanford, CA 94305 USA
[2] Stanford Univ, Dept Stat, Stanford, CA 94305 USA
[3] Univ Illinois, Dept Elect & Comp Engn, Coordinated Sci Lab 145, Urbana, IL 61801 USA
[4] Microsoft Res Asia, Visual Comp Grp, Beijing 100080, Peoples R China
Funding
U.S. National Science Foundation;
Keywords
Algorithms; Theory; Principal components; robustness vis-à-vis outliers; nuclear-norm minimization; ℓ1-norm minimization; duality; low-rank matrices; sparsity; video surveillance; MATRIX COMPLETION; APPROXIMATION; ALGORITHMS;
DOI
10.1145/1970392.1970395
Chinese Library Classification (CLC)
TP3 [computing technology, computer technology];
Discipline code
0812;
Abstract
This article is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
Pages: 37