A SCALED CONJUGATE-GRADIENT ALGORITHM FOR FAST SUPERVISED LEARNING

Cited by: 2770
Authors
MOLLER, MF
Institution
Keywords
FEEDFORWARD NEURAL NETWORK; SUPERVISED LEARNING; OPTIMIZATION; CONJUGATE GRADIENT ALGORITHMS;
DOI
10.1016/S0893-6080(05)80056-5
CLC number
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A supervised learning algorithm (Scaled Conjugate Gradient, SCG) is introduced. The performance of SCG is benchmarked against that of the standard backpropagation algorithm (BP) (Rumelhart, Hinton, & Williams, 1986), the conjugate gradient algorithm with line search (CGL) (Johansson, Dowla, & Goodman, 1990), and the one-step Broyden-Fletcher-Goldfarb-Shanno memoryless quasi-Newton algorithm (BFGS) (Battiti, 1990). SCG is fully automated, includes no critical user-dependent parameters, and avoids the time-consuming line search that CGL and BFGS perform in each iteration to determine an appropriate step size. Experiments show that SCG is considerably faster than BP, CGL, and BFGS.
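The abstract's central claim is that SCG replaces the per-iteration line search with a closed-form step size: the curvature along the search direction is estimated by a finite difference of the gradient and kept positive by a Levenberg-Marquardt-style scaling term. The following is a loose sketch of that idea on a generic differentiable function, not a faithful reproduction of Møller's algorithm; the constants (0.25, 0.75, the factor-of-4 damping updates) and the restart rule are simplified assumptions, and `scg_sketch` is a hypothetical name.

```python
import numpy as np

def scg_sketch(f, grad, w, iters=500, sigma=1e-4, lam0=1e-6, tol=1e-12):
    """Loose sketch of the scaled-conjugate-gradient idea.

    Curvature along the search direction p is estimated with a finite
    difference of the gradient and scaled by lam so that it stays
    positive, which gives a closed-form step size -- no line search.
    """
    w = np.asarray(w, dtype=float)
    lam = lam0
    g = grad(w)
    p = -g                                      # start with steepest descent
    for _ in range(iters):
        if g @ g < tol:                         # gradient small enough: done
            break
        pnorm2 = p @ p
        eps = sigma / np.sqrt(pnorm2)
        s = (grad(w + eps * p) - g) / eps       # s ~ (Hessian) @ p
        delta = p @ s + lam * pnorm2            # scaled curvature p' H p
        if delta <= 0:                          # force positive curvature
            lam = 2.0 * (lam - delta / pnorm2)
            delta = p @ s + lam * pnorm2        # now guaranteed positive
        mu = -(g @ p)                           # negative directional slope
        alpha = mu / delta                      # closed-form step size
        w_new = w + alpha * p
        # Compare the actual decrease with the quadratic model's
        # prediction to adapt the damping parameter lam.
        Delta = 2.0 * delta * (f(w) - f(w_new)) / mu ** 2
        if Delta < 0.25:
            lam *= 4.0                          # poor model: more damping
        elif Delta > 0.75:
            lam *= 0.5                          # good model: less damping
        if Delta > 0:                           # accept the step
            g_new = grad(w_new)
            beta = (g_new @ g_new - g_new @ g) / (g @ g)  # Polak-Ribiere
            w, g = w_new, g_new
            p = -g + beta * p                   # new conjugate direction
        else:
            p = -g                              # reject: restart direction
    return w
```

On a convex quadratic this reduces to ordinary conjugate gradients (up to the small `lam` perturbation), which is one way to see why no step-size search is needed when the curvature estimate is trustworthy.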
Pages: 525-533
Number of pages: 9
Related papers
19 records in total
  • [1] [Anonymous], 1987, LEARNING INTERNAL RE
  • [2] [Anonymous], 2016, LINEAR NONLINEAR PRO
  • [3] Battiti R., 1990, INT JOINT C NEUR NET, V1, P593
  • [4] Battiti R., 1989, COMPLEX SYSTEMS, V3, P331
  • [5] Battiti R., 1990, INCC 90 PAR INT NEUR, V2, P757
  • [6] Fletcher R, 1975, PRACTICAL METHODS OP
  • [7] GILL PE, 1974, NPL NAC37 DIV NUM AN
  • [8] Hestenes M, 1990, CONJUGATE DIRECTION
  • [9] Hinton G.E., 1989, Connectionist learning procedures, ARTIFICIAL INTELLIGENCE, V40 (1-3), P185
  • [10] Johansson E. M., 1991, International Journal of Neural Systems, V2, P291, DOI 10.1142/S0129065791000261