Deterministic nonmonotone strategies for effective training of multilayer perceptrons

Cited: 26
Authors
Plagianakos, VP [1]
Magoulas, GD
Vrahatis, MN
Affiliations
[1] Univ Patras, Dept Math, GR-26110 Patras, Greece
[2] Univ Patras, UP Artificial Intelligence Res Ctr, GR-26110 Patras, Greece
[3] Brunel Univ, Dept Informat Syst & Comp, Uxbridge UB8 3PH, Middx, England
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2002, Vol. 13, No. 6
Keywords
adaptive learning rate algorithms; backpropagation (BP) algorithm; multilayer perceptrons (MLPs); nonmonotone minimization; unconstrained minimization;
DOI
10.1109/TNN.2002.804225
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
In this paper, we present deterministic nonmonotone learning strategies for multilayer perceptrons (MLPs), i.e., deterministic training algorithms in which error function values are allowed to increase at some epochs. To this end, we argue that the current error function value must satisfy a nonmonotone criterion with respect to the maximum error function value of the M previous epochs, and we propose a subprocedure to compute M dynamically. The nonmonotone strategy can be incorporated into any batch training algorithm and provides fast, stable, and reliable learning. Experimental results on different classes of problems show that this approach improves the convergence speed and success percentage of first-order training algorithms and alleviates the need to fine-tune problem-dependent heuristic parameters.
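As a rough illustration of the idea described in the abstract, the sketch below implements a Grippo-Lampariello-Lucidi-style nonmonotone acceptance test inside a toy batch steepest-descent loop. This is an assumed form of the criterion, not the paper's exact algorithm: the function names (nonmonotone_ok, train), the fixed window M, the sufficient-decrease parameter delta, and the quadratic toy error standing in for an MLP error function are all illustrative choices, and the paper's subprocedure for computing M dynamically is not reproduced.

```python
import numpy as np

def nonmonotone_ok(E_new, E_hist, M, delta, lr, g, d):
    # GLL-style test (assumed form): accept if the new error does not
    # exceed the maximum error of the last M epochs plus a
    # sufficient-decrease term (delta * lr * g.d < 0 for a descent
    # direction d), rather than requiring strict decrease every epoch.
    E_max = max(E_hist[-max(M, 1):])
    return E_new <= E_max + delta * lr * float(g @ d)

def train(E, grad, w0, lr0=0.5, M=5, delta=1e-4, epochs=100):
    # Toy batch steepest-descent loop; M is held fixed here, whereas
    # the paper adapts it dynamically (that subprocedure is omitted).
    w = w0.copy()
    hist = [E(w)]
    for _ in range(epochs):
        g = grad(w)
        d = -g                      # steepest-descent direction
        lr = lr0
        # Backtrack only until the nonmonotone test passes, so the
        # error may rise relative to the immediately preceding epoch.
        while not nonmonotone_ok(E(w + lr * d), hist, M, delta, lr, g, d):
            lr *= 0.5
        w = w + lr * d
        hist.append(E(w))
    return w, hist

if __name__ == "__main__":
    # Quadratic error surface standing in for an MLP error function.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    E = lambda w: 0.5 * float(w @ A @ w)
    grad = lambda w: A @ w
    w_final, hist = train(E, grad, np.array([4.0, -3.0]))
    print("final w:", w_final, "final error:", hist[-1])
```

Comparing against the window maximum rather than the previous epoch's error is what lets the method accept occasional error increases (e.g., to traverse flat or noisy regions of the error surface) while still bounding divergence over any M consecutive epochs.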
Pages: 1268-1284
Page count: 17