Sparse multinomial logistic regression: Fast algorithms and generalization bounds

Cited by: 611
Authors
Krishnapuram, B
Carin, L
Figueiredo, MAT
Hartemink, AJ
Affiliations
[1] Siemens Med Solut USA Inc, Comp Aided Diag & Therapy Grp, Malvern, PA 19355 USA
[2] Duke Univ, Dept Elect Engn, Durham, NC 27708 USA
[3] Inst Super Tecn, Dept Elect & Comp Engn, Inst Telecommun, P-1049001 Lisbon, Portugal
[4] Duke Univ, Dept Comp Sci, Durham, NC 27708 USA
Funding
US National Science Foundation;
Keywords
supervised learning; classification; sparsity; Bayesian inference; multinomial logistic regression; bound optimization; expectation maximization (EM); learning theory; generalization bounds;
DOI
10.1109/TPAMI.2005.127
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Recently developed methods for learning sparse classifiers are among the state-of-the-art in supervised learning. These methods learn classifiers that incorporate weighted sums of basis functions with sparsity-promoting priors encouraging the weight estimates to be either significantly large or exactly zero. From a learning-theoretic perspective, these methods control the capacity of the learned classifier by minimizing the number of basis functions used, resulting in better generalization. This paper presents three contributions related to learning sparse classifiers. First, we introduce a true multiclass formulation based on multinomial logistic regression. Second, by combining a bound optimization approach with a component-wise update procedure, we derive fast exact algorithms for learning sparse multiclass classifiers that scale favorably in both the number of training samples and the feature dimensionality, making them applicable even to large data sets in high-dimensional feature spaces. To the best of our knowledge, these are the first algorithms to perform exact multinomial logistic regression with a sparsity-promoting prior. Third, we show how nontrivial generalization bounds can be derived for our classifier in the binary case. Experimental results on standard benchmark data sets attest to the accuracy, sparsity, and efficiency of the proposed methods.
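The bound-optimization idea described in the abstract can be sketched in a few lines of NumPy. The snippet below is a minimal illustration, not the authors' exact component-wise algorithm: it majorizes the multinomial negative log-likelihood using Böhning's (1992) fixed quadratic bound on the Hessian and applies a soft-thresholding (proximal) step corresponding to a Laplacian (l1) prior, which drives many weight estimates exactly to zero. The function name sparse_mlr, the penalty weight lam, and the iteration count are illustrative choices, not names from the paper.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: the proximal map of the l1 (Laplacian-prior) penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_mlr(X, y, lam=1.0, n_iter=200):
    """l1-penalized multinomial logistic regression via bound optimization (sketch).

    X : (n, d) design matrix of basis-function values
    y : (n,) integer class labels in {0, ..., m-1}
    """
    n, d = X.shape
    m = int(y.max()) + 1
    Y = np.eye(m)[y]                    # one-hot labels, shape (n, m)
    W = np.zeros((d, m))                # weight matrix to be sparsified
    # Bohning's bound: the multinomial Hessian is dominated by
    # (1/2)(I - 11^T/m) (x) X^T X, so L below is a valid Lipschitz
    # constant for the gradient and 1/L a safe step size.
    L = 0.5 * np.linalg.eigvalsh(X.T @ X).max()
    for _ in range(n_iter):
        logits = X @ W
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)             # class probabilities
        grad = X.T @ (P - Y)                          # gradient of neg. log-likelihood
        W = soft_threshold(W - grad / L, lam / L)     # majorize, then shrink
    return W
```

Calling, say, W = sparse_mlr(X, y, lam=5.0) on standardized features typically leaves only a small fraction of the entries of W nonzero; those exact zeros are what control the capacity of the learned classifier in the learning-theoretic sense the abstract describes.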
Pages: 957-968
Page count: 12