Iterative Reweighted l1 and l2 Methods for Finding Sparse Solutions

Cited by: 354
Authors
Wipf, David [1]
Nagarajan, Srikantan [1]
Affiliation
[1] Univ Calif San Francisco, Biomagnet Imaging Lab, San Francisco, CA 94143 USA
Keywords
Compressive sensing (CS); iterative reweighting algorithms; sparse representations; underdetermined inverse problems; REPRESENTATIONS; MINIMIZATION; ALGORITHMS; SELECTION;
DOI
10.1109/JSTSP.2010.2042413
Chinese Library Classification (CLC)
TM (Electrical technology); TN (Electronics and communication technology);
Discipline codes
0808; 0809;
Abstract
A variety of practical methods have recently been introduced for finding maximally sparse representations from overcomplete dictionaries, a central computational task in compressive sensing applications as well as numerous others. Many of the underlying algorithms rely on iterative reweighting schemes that produce more focal estimates as optimization progresses. Two such variants are iterative reweighted l(1) and l(2) minimization; however, some properties related to convergence and sparse estimation, as well as possible generalizations, are still not clearly understood or fully exploited. In this paper, we make the distinction between separable and non-separable iterative reweighting algorithms. The vast majority of existing methods are separable, meaning the weighting of a given coefficient at each iteration is only a function of that individual coefficient from the previous iteration (as opposed to dependency on all coefficients). We examine two such separable reweighting schemes: an l(2) method from Chartrand and Yin (2008) and an l(1) approach from Candes et al. (2008), elaborating on convergence results and explicit connections between them. We then explore an interesting non-separable alternative that can be implemented via either l(2) or l(1) reweighting and maintains several desirable properties relevant to sparse recovery despite a highly non-convex underlying cost function. For example, in the context of canonical sparse estimation problems, we prove uniform superiority of this method over the minimum l(1) solution in that, 1) it can never do worse when implemented with reweighted l(1), and 2) for any dictionary and sparsity profile, there will always exist cases where it does better. These results challenge the prevailing reliance on strictly convex (and separable) penalty functions for finding sparse solutions. We then derive a new non-separable variant with similar properties that exhibits further performance improvements in empirical tests.
Finally, we address natural extensions to group sparsity problems and non-negative sparse coding.
Pages: 317-329
Page count: 13
References
31 in total
[1]  
[Anonymous], 2006, Advances in Neural Information Processing Systems
[2]  
[Anonymous], 1969, Nonlinear programming: a unified approach
[3]  
[Anonymous], 2006, Journal of the Royal Statistical Society, Series B
[4]  
Boyd S., 2004, CONVEX OPTIMIZATION, 1st ed., DOI 10.1017/CBO9780511804441
[5]   On the Uniqueness of Nonnegative Sparse Solutions to Underdetermined Systems of Equations [J].
Bruckstein, Alfred M. ;
Elad, Michael ;
Zibulevsky, Michael .
IEEE TRANSACTIONS ON INFORMATION THEORY, 2008, 54 (11) :4813-4820
[6]   Enhancing Sparsity by Reweighted l1 Minimization [J].
Candes, Emmanuel J. ;
Wakin, Michael B. ;
Boyd, Stephen P. .
JOURNAL OF FOURIER ANALYSIS AND APPLICATIONS, 2008, 14 (5-6) :877-905
[7]   Iteratively reweighted algorithms for compressive sensing [J].
Chartrand, Rick ;
Yin, Wotao .
2008 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, VOLS 1-12, 2008, :3869-+
[8]   Iteratively Reweighted Least Squares Minimization for Sparse Recovery [J].
Daubechies, Ingrid ;
Devore, Ronald ;
Fornasier, Massimo ;
Guentuerk, C. Sinan .
COMMUNICATIONS ON PURE AND APPLIED MATHEMATICS, 2010, 63 (01) :1-38
[9]  
Davies M., 2008, IRISA Tech. Rep. 1899
[10]  
Donoho D., 2004, For most large underdetermined systems of linear equations the minimal l1-norm near-solution approximates the sparsest near-solution