Adaptive sparseness for supervised learning

Cited by: 352
Authors
Figueiredo, MAT [1]
Affiliations
[1] Univ Tecn Lisboa, Inst Telecommun, Inst Super Tecn, P-1049001 Lisbon, Portugal
[2] Univ Tecn Lisboa, Dept Elect & Comp Engn, Inst Super Tecn, P-1049001 Lisbon, Portugal
Keywords
supervised learning; classification; regression; sparseness; feature selection; kernel methods; expectation-maximization algorithm;
DOI
10.1109/TPAMI.2003.1227989
CLC classification code
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The goal of supervised learning is to infer a functional mapping based on a set of training examples. To achieve good generalization, it is necessary to control the "complexity" of the learned function. In Bayesian approaches, this is done by adopting a prior for the parameters of the function being learned. We propose a Bayesian approach to supervised learning, which leads to sparse solutions; that is, in which irrelevant parameters are automatically set exactly to zero. Other ways to obtain sparse classifiers (such as Laplacian priors, support vector machines) involve (hyper)parameters which control the degree of sparseness of the resulting classifiers; these parameters have to be somehow adjusted/estimated from the training data. In contrast, our approach does not involve any (hyper)parameters to be adjusted or estimated. This is achieved by a hierarchical-Bayes interpretation of the Laplacian prior, which is then modified by the adoption of a Jeffreys' noninformative hyperprior. Implementation is carried out by an expectation-maximization (EM) algorithm. Experiments with several benchmark data sets show that the proposed approach yields state-of-the-art performance. In particular, our method outperforms SVMs and performs competitively with the best alternative techniques, although it involves no tuning or adjustment of sparseness-controlling hyperparameters.
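The abstract only sketches the mechanism: each weight w_i gets a zero-mean Gaussian prior with its own variance tau_i, the Jeffreys noninformative hyperprior p(tau_i) ∝ 1/tau_i replaces a sparseness-controlling hyperparameter, and EM treats the tau_i as missing data, which gives E[1/tau_i | w_i] = 1/w_i^2 and a reweighted ridge-like M-step that drives irrelevant weights exactly to zero. The following NumPy code is a minimal sketch of that iteration for the linear-regression case only; the function name, the noise variance sigma2, the iteration count, the pruning tolerance, and the initialization are illustrative assumptions rather than values from the paper.

```python
import numpy as np


def adaptive_sparse_regression(H, y, sigma2=1.0, n_iter=50, prune_tol=1e-6):
    """Minimal EM sketch of adaptive sparseness for linear regression.

    Hierarchical model (as described in the abstract):
        w_i | tau_i ~ N(0, tau_i),   p(tau_i) proportional to 1/tau_i  (Jeffreys hyperprior)
    E-step:  E[1/tau_i | w_i] = 1 / w_i**2
    M-step, written with V = diag(|w_i|) so it stays well defined as weights shrink to zero:
        w <- V (sigma2 * I + V H^T H V)^{-1} V H^T y

    sigma2, n_iter, prune_tol, and the initialization are illustrative choices.
    """
    n, p = H.shape
    HtH, Hty = H.T @ H, H.T @ y
    # ridge-like starting point
    w = np.linalg.solve(HtH + sigma2 * np.eye(p), Hty)
    for _ in range(n_iter):
        V = np.diag(np.abs(w))              # V = U^{-1/2}, with U = diag(E[1/tau_i]) = diag(1/w_i^2)
        A = sigma2 * np.eye(p) + V @ HtH @ V
        w = V @ np.linalg.solve(A, V @ Hty)  # reweighted ridge-style EM update
        w[np.abs(w) < prune_tol] = 0.0       # near-zero weights are set exactly to zero
    return w


# Toy usage: 50 samples, 20 features, only 3 of them relevant.
rng = np.random.default_rng(0)
H = rng.standard_normal((50, 20))
w_true = np.zeros(20)
w_true[[1, 5, 12]] = [2.0, -1.5, 1.0]
y = H @ w_true + 0.1 * rng.standard_normal(50)
print(adaptive_sparse_regression(H, y, sigma2=0.01))
```

On typical sparse toy problems like the one above, the update shrinks the irrelevant coefficients to exactly zero within a few dozen iterations, with no sparseness hyperparameter to tune; the classification case described in the abstract requires an additional EM layer (e.g., a probit link) not shown in this sketch.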
Pages: 1150-1159
Number of pages: 10