For most large underdetermined systems of equations, the minimal l1-norm near-solution approximates the sparsest near-solution

Cited by: 354
Author
Donoho, David L. [1]
Affiliation
[1] Stanford Univ, Dept Stat, Stanford, CA 94305 USA
DOI
10.1002/cpa.20131
CLC number
O29 [Applied Mathematics]
Discipline code
070104
Abstract
We consider inexact linear equations y ≈ Φx, where y is a given vector in ℝⁿ, Φ is a given n × m matrix, and we wish to find x_{0,ε} as sparse as possible while obeying ‖y − Φx_{0,ε}‖₂ ≤ ε. In general, this requires combinatorial optimization and so is considered intractable. On the other hand, the ℓ1-minimization problem min ‖x‖₁ subject to ‖y − Φx‖₂ ≤ ε is convex and is considered tractable. We show that for most Φ, if the optimally sparse approximation x_{0,ε} is sufficiently sparse, then the solution x_{1,ε} of the ℓ1-minimization problem is a good approximation to x_{0,ε}. We suppose that the columns of Φ are normalized to unit ℓ2-norm, and we place uniform measure on such Φ. We study the underdetermined case where m ~ τn and τ > 1, and prove the existence of ρ = ρ(τ) > 0 and C = C(ρ, τ) so that for large n and for all Φ except a negligible fraction, the following approximate sparse solution property of Φ holds: for every y having an approximation ‖y − Φx₀‖₂ < ε by a coefficient vector x₀ ∈ ℝᵐ with fewer than ρ·n nonzeros, ‖x_{1,ε} − x₀‖₂ ≤ C·ε. This has two implications. First, for most Φ, whenever the combinatorial optimization result x_{0,ε} would be very sparse, x_{1,ε} is a good approximation to x_{0,ε}. Second, suppose we are given noisy data obeying y = Φx₀ + z, where the unknown x₀ is known to be sparse and the noise obeys ‖z‖₂ ≤ ε. For most Φ, noise-tolerant ℓ1-minimization will stably recover x₀ from y in the presence of noise z. We also study the barely determined case m = n and reach parallel conclusions by slightly different arguments.
Proof techniques include the use of almost-spherical sections in Banach space theory and concentration of measure for eigenvalues of random matrices. © 2006 Wiley Periodicals, Inc.
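The phenomenon described in the abstract can be sketched numerically: draw a random n × m matrix Φ with unit ℓ2-norm columns, a sparse x₀, noisy data y = Φx₀ + z, and recover x₀ by ℓ1-minimization. As an assumption not taken from the paper, the sketch below replaces the constrained problem min ‖x‖₁ s.t. ‖y − Φx‖₂ ≤ ε by its standard penalized (LASSO) surrogate and solves it with plain ISTA; the sizes n, m, the sparsity level k, and the penalty lam are illustrative choices only.

```python
# Sketch: stable sparse recovery by noise-tolerant l1-minimization.
# Phi is a random n x m matrix with columns normalized to unit l2-norm,
# x0 is k-sparse, and y = Phi @ x0 + z with small noise z. We minimize
# 0.5*||y - Phi x||_2^2 + lam*||x||_1 (a penalized surrogate for the
# constrained l1 problem) via ISTA with soft-thresholding.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 200, 5                 # underdetermined: m ~ tau*n, tau = 2
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)    # unit l2-norm columns

x0 = np.zeros(m)                      # sparse ground truth with k nonzeros
support = rng.choice(m, size=k, replace=False)
x0[support] = rng.choice([-1.0, 1.0], size=k)

z = rng.standard_normal(n)
z *= 0.01 / np.linalg.norm(z)         # noise scaled so ||z||_2 = 0.01
y = Phi @ x0 + z

lam = 0.02                            # illustrative penalty level
L = np.linalg.norm(Phi, 2) ** 2       # Lipschitz constant of the gradient
x = np.zeros(m)
for _ in range(3000):                 # ISTA iterations
    grad = Phi.T @ (Phi @ x - y)
    u = x - grad / L
    x = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)  # soft threshold

rel_err = np.linalg.norm(x - x0) / np.linalg.norm(x0)
print(f"relative recovery error: {rel_err:.4f}")
```

With k much smaller than n, the recovered x is close to x₀ up to an error on the order of the noise level plus the soft-thresholding bias, mirroring the ‖x_{1,ε} − x₀‖₂ ≤ C·ε bound of the theorem.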
Pages: 907–934 (28 pages)