Comparison of particle swarm optimization and backpropagation as training algorithms for neural networks

Cited by: 293
Authors
Gudise, VG [1 ]
Venayagamoorthy, GK [1 ]
Affiliation
[1] Univ Missouri, Dept Elect & Comp Engn, Rolla, MO 65401 USA
Source
PROCEEDINGS OF THE 2003 IEEE SWARM INTELLIGENCE SYMPOSIUM (SIS 03) | 2003
DOI
10.1109/SIS.2003.1202255
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Particle swarm optimization (PSO), motivated by the social behavior of organisms, builds on existing evolutionary algorithms for the optimization of continuous nonlinear functions. Backpropagation (BP) is the algorithm most commonly used for neural network training, and choosing an appropriate training algorithm is an important decision. This paper presents a comparative study of the computational requirements of PSO and BP as training algorithms for neural networks. Results for a feedforward neural network learning a nonlinear function show that the network weights converge faster with PSO than with the BP algorithm.
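The comparison described in the abstract rests on the idea that PSO can treat a network's entire weight vector as a particle's position and minimize the training error directly, with no gradients. A minimal pure-Python sketch of that idea follows; the network size (1-3-1), the swarm parameters (inertia 0.7, cognitive and social coefficients 1.5), and the target function y = x² are illustrative assumptions, not the settings used in the paper.

```python
import math
import random

random.seed(0)

# Tiny feedforward net: 1 input, 3 hidden tanh units, 1 linear output.
# Flat weight-vector layout: [w_ih (3), b_h (3), w_ho (3), b_o (1)] -> 10 dims.
DIM = 10

def forward(w, x):
    h = [math.tanh(w[i] * x + w[3 + i]) for i in range(3)]
    return sum(w[6 + i] * h[i] for i in range(3)) + w[9]

# Training data: sample the nonlinear target y = x^2 on [-1, 1].
data = [(k / 10.0, (k / 10.0) ** 2) for k in range(-10, 11)]

def mse(w):
    return sum((forward(w, x) - y) ** 2 for x, y in data) / len(data)

def pso_train(n_particles=30, iters=200, inertia=0.7, c1=1.5, c2=1.5):
    # Each particle's position IS a candidate weight vector for the net.
    pos = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(n_particles)]
    vel = [[0.0] * DIM for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_err = [mse(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_err[i])
    gbest, gbest_err = pbest[g][:], pbest_err[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(DIM):
                r1, r2 = random.random(), random.random()
                # Standard PSO velocity update: inertia + pull toward
                # the particle's own best and the swarm's global best.
                vel[i][d] = (inertia * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            err = mse(pos[i])
            if err < pbest_err[i]:
                pbest[i], pbest_err[i] = pos[i][:], err
                if err < gbest_err:
                    gbest, gbest_err = pos[i][:], err
    return gbest, gbest_err

weights, err = pso_train()
print(f"final training MSE: {err:.5f}")
```

Because the fitness function here is the same mean-squared error that BP would minimize, the two approaches are directly comparable on computational cost, which is the comparison the paper makes.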
Pages: 110-117
Page count: 8