An incremental training method for the probabilistic RBF network

Cited by: 33
Authors
Constantinopoulos, Constantinos [1]
Likas, Aristidis [1]
Affiliation
[1] Univ Ioannina, Dept Comp Sci, GR-45110 Ioannina, Greece
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2006 / Vol. 17 / No. 4
Keywords
classification; decision boundary; mixture models; neural networks; probabilistic modeling; radial basis function networks
DOI
10.1109/TNN.2006.875982
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes (China)
081104; 0812; 0835; 1405
Abstract
The probabilistic radial basis function (PRBF) network constitutes a probabilistic version of the RBF network for classification that extends the typical mixture model approach to classification by allowing the sharing of mixture components among all classes. The typical learning method of PRBF for a classification task employs the expectation-maximization (EM) algorithm and depends strongly on the initial parameter values. In this paper, we propose a technique for incremental training of the PRBF network for classification. The proposed algorithm starts with a single component and incrementally adds more components at appropriate positions in the data space. The addition of a new component is based on criteria for detecting a region in the data space that is crucial for the classification task. After the addition of all components, the algorithm splits every component of the network into subcomponents, each one corresponding to a different class. Experimental results using several well-known classification data sets indicate that the incremental method provides solutions of superior classification performance compared to the hierarchical PRBF training method. We also conducted comparative experiments with the support vector machines method and present the obtained results along with a qualitative comparison of the two approaches.
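The component-sharing idea the abstract describes can be sketched as follows. This is a minimal, illustrative toy (not the authors' incremental/EM procedure): all classes share the same Gaussian components, and only the per-class mixing weights differ. The data, component positions, and the crude responsibility-based weight estimate are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two well-separated 2-D classes.
X0 = rng.normal([-2.0, 0.0], 1.0, size=(50, 2))
X1 = rng.normal([2.0, 0.0], 1.0, size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# Shared spherical Gaussian components (illustrative fixed means, common variance).
means = np.array([[-2.0, 0.0], [0.0, 0.0], [2.0, 0.0]])
var = 1.0

def comp_dens(Xq):
    # N(x; mu_j, var * I) for every point/component pair -> shape (n, J)
    d2 = ((Xq[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * var)) / (2 * np.pi * var)

# Class-specific mixing weights over the SAME components (the sharing idea);
# here estimated crudely from mean component responsibilities within each class.
dens = comp_dens(X)
resp = dens / dens.sum(axis=1, keepdims=True)
pi = np.vstack([resp[y == k].mean(axis=0) for k in (0, 1)])   # (K, J)
prior = np.array([(y == k).mean() for k in (0, 1)])           # class priors

def predict(Xq):
    # p(x | k) = sum_j pi_kj N(x; mu_j); Bayes' rule picks the class.
    like = comp_dens(Xq) @ pi.T          # (n, K)
    return (like * prior).argmax(axis=1)

acc = (predict(X) == y).mean()
```

In the actual PRBF training, both the component parameters and the weights are fitted by EM, and the incremental method of this paper decides where to add components; the sketch only shows why sharing components across classes still yields class-specific densities.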
Pages: 966-974
Page count: 9