Pruning error minimization in least squares support vector machines

Cited by: 150
Authors
de Kruif, BJ [1 ]
de Vries, TJA [1 ]
Affiliations
[1] Univ Twente, Drebbel Inst Mechatron, NL-7500 AE Enschede, Netherlands
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2003, Vol. 14, No. 3
Keywords
function approximation; pruning; regression; support vector machine (SVM);
DOI
10.1109/TNN.2003.810597
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The support vector machine (SVM) is a method for classification and for function approximation. This method commonly makes use of an epsilon-insensitive cost function, meaning that errors smaller than epsilon remain unpunished. As an alternative, a least squares support vector machine (LSSVM) uses a quadratic cost function. When the LSSVM method is used for function approximation, a nonsparse solution is obtained. Sparseness is imposed by pruning, i.e., recursively solving the approximation problem and subsequently omitting data that had a small error in the previous pass. However, a small approximation error in the previous pass does not reliably predict what the error will be after the sample has been omitted. In this paper, a procedure is introduced that selects from a data set the training sample that will introduce the smallest approximation error when it is omitted. It is shown that this pruning scheme outperforms the standard one.
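To illustrate the selection rule the abstract describes (omit the training sample whose removal introduces the smallest approximation error, rather than the sample with the smallest current error), here is a minimal NumPy sketch. It is a naive brute-force stand-in, not the paper's method: the paper derives an efficient criterion, whereas this sketch simply retrains once per candidate. All function names, the RBF kernel, and the parameter values `gamma` and `sigma` are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=0.2):
    # Gaussian RBF kernel between row-sample matrices X (n,d) and Y (m,d)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=0.2):
    # Solve the LSSVM regression system:
    #   [0   1^T          ] [b    ]   [0]
    #   [1   K + I/gamma  ] [alpha] = [y]
    n = len(X)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, dual weights alpha

def lssvm_predict(Xtr, b, alpha, Xte, sigma=0.2):
    return rbf_kernel(Xte, Xtr, sigma) @ alpha + b

def prune_one(X, y, gamma=100.0, sigma=0.2):
    # Brute-force version of the selection rule: for every candidate,
    # retrain without it and measure the error over the FULL data set,
    # then drop the sample whose omission hurts the fit least.
    best_i, best_err = None, np.inf
    for i in range(len(X)):
        keep = np.arange(len(X)) != i
        b, a = lssvm_fit(X[keep], y[keep], gamma, sigma)
        err = np.mean((lssvm_predict(X[keep], b, a, X, sigma) - y) ** 2)
        if err < best_err:
            best_i, best_err = i, err
    return best_i, best_err
```

Note the contrast with the standard scheme, which would drop the sample with the smallest current residual `alpha_i / gamma`; the loop above instead evaluates the error that actually results after each candidate is omitted, at the cost of one extra linear solve per candidate.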
Pages: 696-702
Page count: 7