A Structure-Adaptive Hybrid RBF-BP Classifier with an Optimized Learning Strategy

Times Cited: 9
Authors
Wen, Hui [1 ]
Xie, Weixin [1 ]
Pei, Jihong [1 ]
Affiliations
[1] Shenzhen Univ, ATR Key Lab Natl Def, Shenzhen 518060, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
DIFFERENTIAL EVOLUTION; NEURAL-NETWORK; DESIGN; ALGORITHM; SCHEME;
DOI
10.1371/journal.pone.0164719
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences]
Discipline Classification Codes
070301 [Inorganic Chemistry]; 070403 [Astrophysics]; 070507 [Natural Resources and Territorial Spatial Planning]; 090105 [Crop Production Systems and Ecological Engineering]
Abstract
This paper presents a structure-adaptive hybrid RBF-BP (SAHRBF-BP) classifier with an optimized learning strategy. SAHRBF-BP consists of a structure-adaptive RBF network cascaded with a BP network: the number of RBF hidden nodes adapts to the distribution of the sample space, the adaptive RBF network performs nonlinear kernel mapping, and the BP network performs nonlinear classification. The optimized learning strategy proceeds in three stages. First, a potential function is introduced into the training sample space to adaptively determine the number of initial RBF hidden nodes and their parameters, and a heterogeneous-sample repulsive force is designed to further optimize the parameters of each generated RBF hidden node; the resulting structure-adaptive RBF network then performs adaptive nonlinear mapping of the sample space. Next, the number of adaptively generated RBF hidden nodes determines the number of BP input nodes, and the overall SAHRBF-BP classifier is assembled. Finally, different training sample sets are used to train the BP network parameters in SAHRBF-BP. Experiments on a range of data sets show the superiority of SAHRBF-BP over competing algorithms; in particular, on most low-dimensional data sets with large sample counts, SAHRBF-BP outperforms other algorithms for training single-hidden-layer feedforward networks (SLFNs).
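The potential-function step described in the abstract — scoring each training sample by the density of samples around it and spawning RBF hidden nodes at high-potential points — resembles classical subtractive clustering. The sketch below is an illustrative Python reconstruction of that idea, not the authors' exact procedure; the radii `ra`, `rb` and the stopping threshold `eps` are assumed hyperparameters.

```python
import numpy as np

def potential_rbf_centers(X, ra=1.0, rb=1.5, eps=0.15):
    """Pick RBF centers by a potential-function (subtractive-clustering-style) rule.

    Each sample's potential is the summed Gaussian influence of all samples;
    the highest-potential sample becomes a center, its neighborhood's
    potential is suppressed, and the process repeats until the remaining
    maximum potential falls below a fraction `eps` of the initial maximum.
    """
    n = len(X)
    # pairwise squared distances, shape (n, n)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    P = np.exp(-d2 / ra**2).sum(axis=1)       # potential of each sample
    P0 = P.max()                              # initial reference potential
    centers = []
    while True:
        k = int(P.argmax())
        if P[k] < eps * P0 or len(centers) >= n:
            break
        centers.append(X[k])
        # suppress potential near the chosen center so the next center
        # is drawn from a different region of the sample space
        P = P - P[k] * np.exp(-d2[k] / rb**2)
    return np.array(centers)
```

On two well-separated clusters this yields roughly one center per cluster, so the number of hidden nodes emerges from the data distribution rather than being fixed in advance — which is the structural adaptivity the abstract claims.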
Pages: 41