Training a support vector machine in the primal

Cited by: 488
Author
Chapelle, Olivier [1 ]
Affiliation
[1] Max Planck Inst Biol Cybernet, D-72076 Tubingen, Germany
DOI
10.1162/neco.2007.19.5.1155
CLC number (Chinese Library Classification)
TP18 [artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Most literature on support vector machines (SVMs) concentrates on the dual optimization problem. In this letter, we point out that the primal problem can also be solved efficiently for both linear and nonlinear SVMs and that there is no reason for ignoring this possibility. On the contrary, from the primal point of view, new families of algorithms for large-scale SVM training can be investigated.
Pages: 1155-1178
Number of pages: 24