Leap-frog is a robust algorithm for training neural networks

Cited by: 10
Authors
Holm, JEW [1]
Botha, EC [1]
Affiliations
[1] Univ Pretoria, Dept Elect & Elect Engn, ZA-0002 Pretoria, South Africa
Keywords
DOI
10.1088/0954-898X/10/1/001
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Optimization of perceptron neural network classifiers requires a robust optimization algorithm. In general, the best network is selected after a number of optimization trials. An effective optimization algorithm generates good weight-vector solutions in a few trial runs owing to its inherent ability to escape local minima, whereas a less effective algorithm requires a larger number of trial runs. Repetitive training and testing is a tedious process, so an effective algorithm is desirable to reduce training time and improve the quality of the set of available weight-vector solutions. We present leap-frog as a robust optimization algorithm for training neural networks. In this paper the dynamic principles of leap-frog are described, together with experiments that show its ability to generate reliable weight-vector solutions. Performance histograms are used to compare leap-frog with a variable-metric method, a conjugate-gradient method with modified restarts, and a constrained-momentum-based algorithm. Results indicate that leap-frog achieves lower classification error than the other three algorithms on two distinctly different test problems.
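The "dynamic principles" mentioned in the abstract refer to leap-frog's formulation of minimization as particle dynamics: the error surface is treated as a potential field in which a unit-mass particle moves, the equations of motion are integrated with the leap-frog scheme, and kinetic energy is dissipated so the particle settles into a minimum rather than oscillating indefinitely. The sketch below illustrates this idea under stated assumptions; the function name leapfrog_minimize, the fixed step size dt, and the simple velocity-damping rule are illustrative stand-ins, not the interference strategy of the algorithm described in the paper.

```python
import numpy as np

def leapfrog_minimize(grad, x0, dt=0.1, max_iter=5000, tol=1e-6):
    """Minimal sketch of a leap-frog dynamic minimizer.

    Assumes the loss acts as a potential field: a unit-mass particle
    accelerates along a = -grad(x) and its trajectory is integrated
    with the leap-frog scheme.  The damping rule below (averaging the
    velocity whenever the particle starts to move uphill) is a
    simplified stand-in for the paper's interference strategy.
    """
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)                   # particle starts at rest
    for _ in range(max_iter):
        a = -grad(x)                       # acceleration from the potential
        if np.linalg.norm(a) < tol:        # near-zero gradient: at a minimum
            break
        v_new = v + a * dt                 # leap-frog velocity update
        if np.linalg.norm(v_new) <= np.linalg.norm(v):
            # Moving uphill: dissipate kinetic energy so the particle
            # settles instead of oscillating through the minimum.
            v_new = 0.25 * (v + v_new)
        x = x + v_new * dt                 # leap-frog position update
        v = v_new
    return x

# Example: minimize the quadratic bowl f(x) = ||x||^2, whose gradient is 2x.
x_min = leapfrog_minimize(lambda x: 2.0 * x, x0=[3.0, -2.0])
```

For network training, grad would return the gradient of the training error with respect to the flattened weight vector; the momentum carried along the trajectory is what lets the particle coast through shallow local minima that trap plain gradient descent.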
Pages: 1-13
Page count: 13