Analysis of sparse representation and blind source separation

Cited by: 307
Authors
Li, YQ [1]
Cichocki, A
Amari, S
Affiliations
[1] Lab Adv Brain Signal Proc, Wako, Saitama 3510198, Japan
[2] RIKEN, Brain Sci Inst, Lab Math Neurosci, Wako, Saitama 3510198, Japan
[3] S China Univ Technol, Automat Sci & Engn Inst, Guangzhou, Peoples R China
[4] Warsaw Univ Technol, Dept Elect Engn, Warsaw, Poland
DOI
10.1162/089976604773717586
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
In this letter, we analyze a two-stage cluster-then-l1-optimization approach for sparse representation of a data matrix, which is also a promising approach for blind source separation (BSS) when fewer sensors than sources are present. First, sparse representation (factorization) of a data matrix is discussed. For a given overcomplete basis matrix, the corresponding sparse solution (coefficient matrix) with minimum l1 norm is unique with probability one and can be obtained with a standard linear programming algorithm. The equivalence of the l1-norm solution and the l0-norm solution is also analyzed within a probabilistic framework: if the obtained l1-norm solution is sufficiently sparse, it equals the l0-norm solution with high probability. Furthermore, the l1-norm solution is robust to noise, whereas the l0-norm solution is not, showing that the l1 norm is a good sparsity measure. These results can be used as a recoverability analysis of BSS, as discussed. The basis matrix in this article is estimated using a clustering algorithm followed by normalization, in which the matrix columns are the cluster centers of normalized data column vectors. Zibulevsky, Pearlmutter, Bofill, and Kisilev (2000) used this kind of two-stage approach in underdetermined BSS. Our recoverability analysis shows that this approach can deal with the situation in which the sources overlap to some degree in the analyzed domain and with the case in which the number of sources is unknown. It is also robust to additive noise and to estimation error in the mixing matrix. Finally, four simulation examples and an EEG data analysis example are presented to illustrate the algorithm's utility and demonstrate its performance.
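The two-stage procedure summarized above can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' implementation: it assumes scikit-learn's KMeans for the clustering stage and SciPy's linprog for the per-column l1 minimization, it assumes the mixture matrix X is already in a domain where the sources are sparse, and the names X, n_sources, A_hat, and S_hat are chosen for the example.

# Illustrative sketch of the two-stage cluster-then-l1-optimization approach.
# Not the authors' code: the clustering and LP solvers are assumed stand-ins.
import numpy as np
from sklearn.cluster import KMeans        # stage 1: clustering (assumed solver)
from scipy.optimize import linprog        # stage 2: l1 minimization via linear programming

def estimate_mixing_matrix(X, n_sources):
    """Stage 1: cluster the normalized data columns; the cluster centers form the basis matrix."""
    cols = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)  # unit-length columns
    cols = cols * np.sign(cols[0, :] + 1e-12)   # fold +a and -a onto one half-space
    centers = KMeans(n_clusters=n_sources, n_init=10).fit(cols.T).cluster_centers_
    A_hat = centers.T
    return A_hat / np.linalg.norm(A_hat, axis=0, keepdims=True)    # normalized columns

def l1_sparse_coefficients(A, X):
    """Stage 2: for each column x, solve min ||s||_1 subject to A s = x (basis pursuit),
    via the standard LP reformulation s = u - v with u, v >= 0."""
    m, n = A.shape
    S = np.zeros((n, X.shape[1]))
    c = np.ones(2 * n)                    # objective: sum(u) + sum(v) = ||s||_1
    A_eq = np.hstack([A, -A])             # equality constraint: A u - A v = x
    for t in range(X.shape[1]):
        res = linprog(c, A_eq=A_eq, b_eq=X[:, t], bounds=(0, None), method="highs")
        if res.success:
            S[:, t] = res.x[:n] - res.x[n:]
    return S

# Usage: X is the m x T mixture matrix (m sensors, T samples), with m < n_sources.
# A_hat = estimate_mixing_matrix(X, n_sources); S_hat = l1_sparse_coefficients(A_hat, X)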
Pages: 1193-1234
Number of pages: 42
References
16 in total
[1] Cichocki A, Amari S. Adaptive Blind Signal and Image Processing: Learning Algorithms and Applications. Wiley, 2002.
[2] Bofill P, Zibulevsky M. Underdetermined blind source separation using sparse representations. Signal Processing, 2001, 81(11): 2353-2362.
[3] Chen SSB, Donoho DL, Saunders MA. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 1998, 20(1): 33-61.
[4] Kreutz-Delgado K, et al. Dictionary learning algorithms for sparse representation. Neural Computation, 2003, 15: 349. DOI: 10.1162/089976603762552951.
[5] Donoho DL, Elad M. Optimally sparse representation in general (nonorthogonal) dictionaries via l1 minimization. Proceedings of the National Academy of Sciences of the USA, 2003, 100(5): 2197-2202.
[6] Girolami M. A variational method for learning sparse and overcomplete representations. Neural Computation, 2001, 13(11): 2517-2532.
[7] Jang GJ. Advances in Neural Information Processing Systems, 2003, 15.
[8] Lee DD, Seung HS. Learning the parts of objects by non-negative matrix factorization. Nature, 1999, 401(6755): 788-791.
[9] Lee TW, Lewicki MS, Girolami M, Sejnowski TJ. Blind source separation of more sources than mixtures using overcomplete representations. IEEE Signal Processing Letters, 1999, 6(4): 87-90.
[10] Lewicki MS, Sejnowski TJ. Learning overcomplete representations. Neural Computation, 2000, 12(2): 337-365.