ADAPTIVE NEAREST NEIGHBOR PATTERN-CLASSIFICATION

Cited by: 95
Authors:
GEVA, S
SITTE, J
Affiliation:
[1] Faculty of Information Technology, Queensland University of Technology, Brisbane, Queensland, 4001
Source:
IEEE TRANSACTIONS ON NEURAL NETWORKS | 1991 / Vol. 2 / No. 2
DOI: 10.1109/72.80344
Chinese Library Classification: TP18 [Artificial intelligence theory]
Discipline classification codes: 081104; 0812; 0835; 1405
Abstract
We describe a variant of nearest neighbor pattern classification (NN) [1] and of supervised learning by learning vector quantization (LVQ) [2], [3]. The decision surface mapping method, which we call DSM, is a fast supervised learning algorithm and a member of the LVQ family of algorithms. A relatively small number of prototypes is selected from a training set of correctly classified samples. The training set is then used to adapt these prototypes so that they map the decision surface separating the classes. This algorithm is compared with NN pattern classification, learning vector quantization (LVQ1) [2], and a two-layer perceptron trained by error backpropagation [4]. When the class boundaries are sharply defined (i.e., there is no classification error in the training set), the DSM algorithm outperforms these methods with respect to error rates, learning rates, and the number of prototypes required to describe class boundaries.
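The misclassification-driven prototype adaptation the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' published pseudocode: it assumes a Euclidean metric and a single learning rate `alpha`, and the rule shown (on a misclassified sample, push the nearest wrong-class prototype away and pull the nearest correct-class prototype toward the sample) is one common LVQ-style reading of "adapting prototypes to map the decision surface":

```python
import numpy as np

def dsm_update(prototypes, labels, x, y, alpha=0.3):
    """One illustrative DSM-style step: adapt prototypes only when
    the sample (x, y) is misclassified by its nearest prototype.

    prototypes : (k, d) array of prototype vectors
    labels     : (k,) array of prototype class labels
    x, y       : training sample and its true class
    """
    d = np.linalg.norm(prototypes - x, axis=1)
    nearest = np.argmin(d)
    if labels[nearest] == y:
        return prototypes  # correctly classified: no change
    # push the nearest (wrong-class) prototype away from x
    prototypes[nearest] -= alpha * (x - prototypes[nearest])
    # pull the nearest prototype of the correct class toward x
    same = np.where(labels == y)[0]
    winner = same[np.argmin(d[same])]
    prototypes[winner] += alpha * (x - prototypes[winner])
    return prototypes
```

Because updates occur only on errors, prototypes settle near the class boundary rather than at class centroids, which is consistent with the paper's claim that few prototypes suffice to describe sharply defined boundaries.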
Pages: 318-322
Page count: 5
References
(5 records)
[1] COVER, T. M.; HART, P. E. Nearest neighbor pattern classification [J]. IEEE Transactions on Information Theory, 1967, 13(1): 21+.
[2] HART, P. E. The condensed nearest neighbor rule [J]. IEEE Transactions on Information Theory, 1968, 14(3): 515+.
[3] KOHONEN, T. In: Advanced Neural Computers, 1990: 137.
[4] KOHONEN, T. Self-Organization and Associative Memory, 1988: 199.
[5] RUMELHART, D. E. Parallel Distributed Processing, Vol. 1: 318.