Iteratively Reweighted Least Squares Minimization for Sparse Recovery

Cited by: 971
Authors
Daubechies, Ingrid [1 ,2 ]
Devore, Ronald [3 ]
Fornasier, Massimo [4 ]
Guentuerk, C. Sinan [5 ]
Affiliations
[1] Princeton Univ, Dept Math, Princeton, NJ 08544 USA
[2] Princeton Univ, Program Appl & Computat Math, Princeton, NJ 08544 USA
[3] Univ S Carolina, Dept Math, Columbia, SC 29208 USA
[4] Austrian Acad Sci, Johann Radon Inst Computat & Appl Math, A-4040 Linz, Austria
[5] Courant Inst, New York, NY 10012 USA
Funding
US National Science Foundation
Keywords
SIGNAL RECOVERY; UNCERTAINTY PRINCIPLES; RECONSTRUCTION; POLYTOPES;
DOI
10.1002/cpa.20303
Chinese Library Classification
O29 [Applied Mathematics]
Discipline Code
070104
Abstract
Under certain conditions (known as the restricted isometry property, or RIP) on the $m \times N$ matrix $\Phi$ (where $m < N$), vectors $x \in \mathbb{R}^N$ that are sparse (i.e., have most of their entries equal to 0) can be recovered exactly from $y := \Phi x$ even though $\Phi^{-1}(y)$ is typically an $(N-m)$-dimensional hyperplane; in addition, $x$ is then equal to the element of $\Phi^{-1}(y)$ of minimal $\ell_1$-norm. This minimal element can be identified via linear programming algorithms. We study an alternative method of determining $x$, as the limit of an iteratively reweighted least squares (IRLS) algorithm. The main step of this IRLS finds, for a given weight vector $w$, the element in $\Phi^{-1}(y)$ with smallest $\ell_2(w)$-norm. If $x^{(n)}$ is the solution at iteration step $n$, then the new weight $w^{(n)}$ is defined by $w_i^{(n)} := \bigl[\,|x_i^{(n)}|^2 + \epsilon_n^2\,\bigr]^{-1/2}$, $i = 1, \dots, N$, for a decreasing sequence of adaptively defined $\epsilon_n$; this updated weight is then used to obtain $x^{(n+1)}$, and the process is repeated. We prove that when $\Phi$ satisfies the RIP conditions, the sequence $x^{(n)}$ converges for all $y$, regardless of whether $\Phi^{-1}(y)$ contains a sparse vector. If there is a sparse vector in $\Phi^{-1}(y)$, then the limit is this sparse vector, and when $x^{(n)}$ is sufficiently close to the limit, the remaining steps of the algorithm converge exponentially fast (linear convergence in the terminology of numerical optimization). The same algorithm with the "heavier" weight $w_i^{(n)} = \bigl[\,|x_i^{(n)}|^2 + \epsilon_n^2\,\bigr]^{\tau/2 - 1}$, $i = 1, \dots, N$, where $0 < \tau < 1$, can recover sparse solutions as well; more importantly, we show its local convergence is superlinear and approaches a quadratic rate as $\tau$ approaches 0. (C) 2009 Wiley Periodicals, Inc.
Pages: 1 / 38
Page count: 38