Evolutionary artificial neural networks by multi-dimensional particle swarm optimization

Cited by: 197
Authors
Kiranyaz, Serkan [1 ]
Ince, Turker [2 ]
Yildirim, Alper [3 ]
Gabbouj, Moncef [1 ]
Affiliations
[1] Tampere Univ Technol, Signal Proc Dept, FIN-33101 Tampere, Finland
[2] Izmir Univ Econ, Izmir, Turkey
[3] TUBITAK, Ankara, Turkey
Keywords
Particle swarm optimization; Multi-dimensional search; Evolutionary artificial neural networks; Multi-layer perceptrons
DOI
10.1016/j.neunet.2009.05.013
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
In this paper, we propose a novel technique for the automatic design of Artificial Neural Networks (ANNs) by evolving toward the optimal network configuration(s) within an architecture space. It is entirely based on a multi-dimensional Particle Swarm Optimization (MD PSO) technique, which re-forms the native structure of swarm particles so that they can make inter-dimensional passes with a dedicated dimensional PSO process. Therefore, in a multi-dimensional search space where the optimum dimension is unknown, swarm particles can seek both positional and dimensional optima. This removes the necessity of setting a fixed dimension a priori, a common drawback of the family of swarm optimizers. With a proper encoding of network configurations and parameters into particles, MD PSO can then seek the positional optimum in the error space and the dimensional optimum in the architecture space. The optimum dimension converged upon at the end of an MD PSO process corresponds to a unique ANN configuration, whose network parameters (connections, weights, and biases) can then be resolved from the positional optimum reached in that dimension. In addition, the proposed technique generates a ranked list of network configurations, from best to worst. This is a crucial piece of information, indicating which configurations can serve as alternatives to the best one and which should not be used at all for a particular problem. In this study, the architecture space is defined over feed-forward, fully-connected ANNs, so that conventional techniques such as back-propagation and other evolutionary methods in this field can be used for comparison. The proposed technique is applied to the most challenging synthetic problems to test its optimality in evolving networks, and to benchmark problems to test its generalization capability and to make comparative evaluations against several competing techniques. The experimental results show that MD PSO generally evolves to optimum or near-optimum networks and has a superior generalization capability. Furthermore, MD PSO naturally favors a low-dimensional solution when it performs competitively with a higher-dimensional counterpart, and this native tendency steers the evolution process toward compact network configurations in the architecture space rather than complex ones, as long as optimality prevails. (C) 2009 Elsevier Ltd. All rights reserved.
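The following Python sketch illustrates the two coupled searches the abstract describes: a standard positional PSO update carried out within each particle's current dimension, and a dimensional PSO update that lets particles make inter-dimensional passes. It is a minimal illustration under generic assumptions; the function and parameter names (`mdpso`, `fitness`, the inertia and acceleration constants) are ours for illustration, not the authors' reference implementation, and the paper's full formulation and particle encoding for the ANN architecture space differ in detail.

```python
# Minimal sketch of multi-dimensional PSO (MD PSO). Illustrative only:
# constants and structure are assumptions, not the paper's exact algorithm.
import random

def mdpso(fitness, d_min, d_max, n_particles=20, n_iters=200,
          x_min=-1.0, x_max=1.0, w=0.72, c1=1.49, c2=1.49):
    """Seek both the optimum dimension d* in [d_min, d_max] and the optimum
    position in that dimension. fitness(x) scores a position vector x of
    length len(x) (lower is better)."""
    dims = range(d_min, d_max + 1)
    particles = []
    for _ in range(n_particles):
        # Each particle keeps a position, velocity, and personal best *per
        # dimension*, plus a current dimension and a dimensional velocity
        # that drives its inter-dimensional passes.
        p = {
            'x': {d: [random.uniform(x_min, x_max) for _ in range(d)] for d in dims},
            'v': {d: [0.0] * d for d in dims},
            'pbest_f': {d: float('inf') for d in dims},
            'd': random.randint(d_min, d_max), 'vd': 0.0,
            'dbest': None, 'dbest_f': float('inf'),  # best dimension seen so far
        }
        p['pbest'] = {d: list(p['x'][d]) for d in dims}
        particles.append(p)
    gbest = {d: None for d in dims}             # global best position per dimension
    gbest_f = {d: float('inf') for d in dims}
    best_d = d_min                              # dimension of the best score overall

    for _ in range(n_iters):
        # Evaluate all particles in their current dimensions.
        for p in particles:
            d = p['d']
            f = fitness(p['x'][d])
            if f < p['pbest_f'][d]:
                p['pbest_f'][d], p['pbest'][d] = f, list(p['x'][d])
            if f < p['dbest_f']:
                p['dbest_f'], p['dbest'] = f, d
            if f < gbest_f[d]:
                gbest_f[d], gbest[d] = f, list(p['x'][d])
                if f < gbest_f[best_d]:
                    best_d = d
        for p in particles:
            d = p['d']
            # Positional PSO update within the particle's current dimension.
            for j in range(d):
                r1, r2 = random.random(), random.random()
                p['v'][d][j] = (w * p['v'][d][j]
                                + c1 * r1 * (p['pbest'][d][j] - p['x'][d][j])
                                + c2 * r2 * (gbest[d][j] - p['x'][d][j]))
                p['x'][d][j] = min(max(p['x'][d][j] + p['v'][d][j], x_min), x_max)
            # Dimensional PSO update: drift toward the particle's own best
            # dimension and the globally best dimension.
            r1, r2 = random.random(), random.random()
            p['vd'] = (w * p['vd']
                       + c1 * r1 * (p['dbest'] - d)
                       + c2 * r2 * (best_d - d))
            p['d'] = min(max(int(round(d + p['vd'])), d_min), d_max)
    return best_d, gbest[best_d], gbest_f[best_d]
```

As a usage illustration, one could let d index the number of hidden units of a single-hidden-layer MLP and decode each position vector into that network's weights and biases; the converged dimension then plays the role the abstract assigns to the optimum ANN configuration, while the per-dimension global bests yield the ranked list of alternative configurations.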
Pages: 1448-1462
Number of pages: 15