Empirical measure of multiclass generalization performance: The K-winner machine case

Times Cited: 7
Authors:
Ridella, S. [1]
Zunino, R. [1]
Affiliation:
[1] Univ Genoa, Dept Biophys & Elect Engn, I-16145 Genoa, Italy
Source:
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2001, Vol. 12, No. 6
Keywords:
generalization theory; K-winner machine; VC-dimension; vector quantization
DOI:
10.1109/72.963791
Chinese Library Classification:
TP18 [Artificial Intelligence Theory]
Discipline Codes:
081104; 0812; 0835; 1405
Abstract:
Combining the K-winner machine (KWM) model with empirical measurements of a classifier's Vapnik-Chervonenkis (VC) dimension gives two major results. First, analytical derivations refine the theory that characterizes the generalization performance of binary classifiers. Second, a straightforward extension of the theoretical framework yields bounds on the generalization error for multiclass problems.
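For context, the classical VC-style bound from Vapnik's framework (a standard background result, not necessarily the exact expression derived in this paper) relates the expected risk of a binary classifier to its empirical risk, the VC-dimension $h$, and the sample size $\ell$: with probability at least $1-\eta$,

\[
R(\alpha) \;\le\; R_{\mathrm{emp}}(\alpha) \;+\; \sqrt{\frac{h\left(\ln\frac{2\ell}{h}+1\right) - \ln\frac{\eta}{4}}{\ell}} ,
\]

where $R(\alpha)$ is the expected error and $R_{\mathrm{emp}}(\alpha)$ the training error of the classifier indexed by $\alpha$. Substituting an empirically measured estimate of $h$ into such a bound is the kind of combination the abstract refers to.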
Pages: 1525-1529
Page count: 5
References (8):
[1] [Anonymous], 1982, Estimation of Dependences Based on Empirical Data.
[2] Ben-Hur, A., 2000, Advances in Neural Information Processing Systems (NIPS).
[3] Kohonen, T., 1989, Self-Organization and Associative Memory, 3rd ed.
[4] Ridella, S., Rovetta, S., Zunino, R. K-winner machines for pattern classification. IEEE Transactions on Neural Networks, 2001, 12(2): 371-385.
[5] Shao, X.H., Cherkassky, V., Li, W. Measuring the VC-dimension using optimized experimental design. Neural Computation, 2000, 12(8): 1969-1986.
[6] Vapnik, V., Levin, E., LeCun, Y. Measuring the VC-dimension of a learning machine. Neural Computation, 1994, 6(5): 851-876.
[7] Vapnik, V., 1999, The Nature of Statistical Learning Theory.
[8] Vapnik, V., 1998, Statistical Learning Theory, Vol. 1, p. 2.