Model selection for support vector machine classification

Cited by: 163
Authors
Gold, C [1 ]
Sollich, P [2]
Institutions
[1] CALTECH, Pasadena, CA 91125 USA
[2] Kings Coll London, Dept Math, London WC2R 2LS, England
Keywords
support vector machines; classification; model selection; probabilistic methods; Bayesian evidence
DOI
10.1016/S0925-2312(03)00375-8
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
We address the problem of model selection for Support Vector Machine (SVM) classification. For a fixed functional form of the kernel, model selection amounts to tuning kernel parameters and the slack penalty coefficient C. We begin by reviewing a recently developed probabilistic framework for SVM classification. An extension to the case of SVMs with quadratic slack penalties is given and a simple approximation for the evidence is derived, which can be used as a criterion for model selection. We also derive the exact gradients of the evidence in terms of posterior averages and describe how they can be estimated numerically using Hybrid Monte Carlo techniques. Though computationally demanding, the resulting gradient ascent algorithm is a useful baseline tool for probabilistic SVM model selection, since it can locate maxima of the exact (unapproximated) evidence. We then perform extensive experiments on several benchmark data sets. The aim of these experiments is to compare the performance of probabilistic model selection criteria with alternatives based on estimates of the test error, namely the so-called "span estimate" and Wahba's Generalized Approximate Cross-Validation (GACV) error. We find that all the "simple" model selection criteria (Laplace evidence approximations, and the span and GACV error estimates) exhibit multiple local optima with respect to the hyperparameters. While some of these optima give performance that is competitive with results from other approaches in the literature, a significant fraction leads to rather higher test errors. The results for the evidence gradient ascent method show that the exact evidence also exhibits local optima, but these give test errors which are much less variable and consistently lower than for the simpler model selection criteria. (C) 2003 Elsevier B.V. All rights reserved.
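For a concrete starting point, the sketch below sets up the model-selection task the abstract describes: tuning the slack penalty C and an RBF kernel parameter for an SVM classifier. It uses plain cross-validated error as the selection criterion purely for illustration; the paper's own criteria (Laplace evidence approximations, the span estimate, GACV, and gradient ascent on the exact evidence) are not implemented here, and the synthetic dataset and grid values are arbitrary choices, not taken from the paper.

    # Minimal sketch of SVM model selection: choose the slack penalty C and
    # the RBF kernel parameter gamma (inverse squared kernel width).
    # Selection criterion here is 5-fold cross-validated accuracy, a simple
    # stand-in for the evidence- and error-estimate-based criteria studied
    # in the paper.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    # Synthetic binary classification data (placeholder for the paper's
    # benchmark data sets).
    X, y = make_classification(n_samples=300, n_features=10, random_state=0)

    # Hyperparameter grid over C and gamma; these are the quantities that
    # "model selection" tunes for a fixed kernel form.
    param_grid = {"C": np.logspace(-2, 3, 6), "gamma": np.logspace(-3, 1, 5)}

    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
    search.fit(X, y)

    print("selected hyperparameters:", search.best_params_)
    print("cross-validated accuracy: %.3f" % search.best_score_)

Note that a grid over (C, gamma) makes the multiple-local-optima issue the paper reports easy to visualize: plotting the criterion over the grid typically reveals several competing maxima.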
Pages: 221-249
Page count: 29