Multitask Compressive Sensing

Cited by: 461
Authors
Ji, Shihao [1]
Dunson, David [2]
Carin, Lawrence [1]
Affiliations
[1] Duke Univ, Dept Elect & Comp Engn, Durham, NC 27708 USA
[2] Duke Univ, Inst Stat & Decis Sci, Durham, NC 27708 USA
Keywords
Compressive sensing (CS); hierarchical Bayesian modeling; multitask learning; relevance vector machine (RVM); simultaneous sparse approximation; multiple tasks; information; algorithms; efficient
DOI
10.1109/TSP.2008.2005866
CLC classification
TM (electrical technology); TN (electronic and communication technology)
Discipline codes
0808; 0809
Abstract
Compressive sensing (CS) is a framework whereby one performs N nonadaptive measurements to constitute a vector v ∈ ℝ^N, with v used to recover an approximation û ∈ ℝ^M to a desired signal u ∈ ℝ^M, with N ≪ M; this is performed under the assumption that u is sparse in the basis represented by the matrix Ψ ∈ ℝ^{M×M}. It has been demonstrated that, with appropriate design of the compressive measurements used to define v, the decompressive mapping v → û may be performed with error ‖u − û‖₂² having asymptotic properties analogous to those of the best adaptive transform-coding algorithm applied in the basis Ψ. The mapping v → û constitutes an inverse problem, often solved using ℓ₁ regularization or related techniques. In most previous research, if L > 1 sets of compressive measurements {v_i}, i = 1, …, L, are performed, each of the associated {û_i}, i = 1, …, L, is recovered one at a time, independently. In many applications the L "tasks" defined by the mappings v_i → û_i are not statistically independent, and it may be possible to improve the performance of the inversion if statistical interrelationships are exploited. In this paper, we address this problem within a multitask-learning setting, wherein the mapping v_i → û_i for each task corresponds to inferring the parameters (here, wavelet coefficients) associated with the desired signal u_i, and a shared prior is placed across all of the L tasks. Under this hierarchical Bayesian modeling, data from all L tasks contribute toward inferring a posterior on the hyperparameters, and once the shared prior is thereby inferred, the data from each of the L individual tasks are then employed to estimate the task-dependent wavelet coefficients.
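The single-task CS inversion described above can be sketched in a few lines. This is a minimal illustration, not code from the paper: it assumes Ψ = I (so u itself is sparse), uses a random Gaussian measurement matrix, and solves the ℓ₁-regularized inverse problem with plain ISTA (iterative soft-thresholding), a generic solver rather than the Bayesian approach the paper develops. All sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper): M-dim signal, N << M measurements.
M, N, K = 256, 96, 8
u = np.zeros(M)
u[rng.choice(M, size=K, replace=False)] = rng.standard_normal(K)  # K-sparse in Psi = I

Phi = rng.standard_normal((N, M)) / np.sqrt(N)  # random nonadaptive measurement matrix
v = Phi @ u                                     # the N compressive measurements

# l1-regularized inversion: min_u 0.5*||v - Phi u||_2^2 + lam*||u||_1,
# solved with ISTA (gradient step followed by soft-thresholding).
lam = 0.01
step = 1.0 / np.linalg.norm(Phi, 2) ** 2        # step <= 1 / Lipschitz const of gradient
u_hat = np.zeros(M)
for _ in range(3000):
    grad_pt = u_hat + step * (Phi.T @ (v - Phi @ u_hat))                      # gradient step
    u_hat = np.sign(grad_pt) * np.maximum(np.abs(grad_pt) - step * lam, 0.0)  # shrink

rel_err = np.linalg.norm(u - u_hat) / np.linalg.norm(u)  # small when recovery succeeds
```

In this noiseless, well-conditioned regime (N = 96 measurements for K = 8 nonzeros), the ℓ₁ solution recovers u to within the small bias introduced by the regularization weight.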
An empirical Bayesian procedure for the estimation of hyperparameters is considered; two fast inference algorithms extending the relevance vector machine (RVM) are developed. Example results on several data sets demonstrate the effectiveness and robustness of the proposed algorithms.
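The shared-prior idea can also be sketched concretely. The toy below is a simplified stand-in, not the paper's fast RVM-based algorithms: it uses EM-style automatic-relevance-determination (ARD) updates in which a single precision hyperparameter per coefficient is inferred by pooling posterior statistics across all L tasks, so tasks that share a sparsity pattern reinforce one another. The noise precision beta is assumed known, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes (not from the paper): L tasks sharing a sparsity pattern.
M, N, K, L = 64, 32, 5, 4
support = rng.choice(M, size=K, replace=False)
U = np.zeros((L, M))
U[:, support] = rng.standard_normal((L, K))     # task-dependent coefficients

beta = 1e4                                      # known noise precision (assumption)
Phis = [rng.standard_normal((N, M)) / np.sqrt(N) for _ in range(L)]
Vs = [Phis[i] @ U[i] + rng.standard_normal(N) / np.sqrt(beta) for i in range(L)]

# Shared ARD prior: one precision alpha_j per coefficient, updated by pooling
# per-task posterior statistics (EM-style; a stand-in for the paper's fast
# RVM-based inference).
alpha = np.ones(M)
for _ in range(200):
    mus, sig_diags = [], []
    for i in range(L):
        Sigma = np.linalg.inv(beta * Phis[i].T @ Phis[i] + np.diag(alpha))
        mus.append(beta * Sigma @ Phis[i].T @ Vs[i])   # per-task posterior mean
        sig_diags.append(np.diag(Sigma))
    mus, sig_diags = np.array(mus), np.array(sig_diags)
    # Pooled hyperparameter update; clipped for numerical stability.
    alpha = np.minimum(L / (mus ** 2 + sig_diags).sum(axis=0), 1e8)

U_hat = mus                                     # task-dependent coefficient estimates
err = np.linalg.norm(U - U_hat) / np.linalg.norm(U)
```

Because the hyperparameters are shared, evidence from every task drives the off-support precisions upward, pruning those coefficients in all tasks at once; each task's own data then determine its surviving coefficient values.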
Pages: 92-106
Page count: 15