TRAINING NEURAL NETS THROUGH STOCHASTIC MINIMIZATION

Cited: 13
Authors
BRUNELLI, R
Institution
Keywords
STOCHASTIC OPTIMIZATION; LEARNING ALGORITHMS; BACK PROPAGATION
DOI
10.1016/0893-6080(94)90088-4
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
The revival of multilayer neural networks in the mid-1980s originated from the discovery of the back propagation technique as a feasible training procedure. In spite of its shortcomings, it is probably the most widespread technique for training feedforward nets. In recent years, several deterministic methods more efficient than back propagation have been proposed. In this paper a stochastic minimization algorithm, the iterated adaptive memory stochastic search, is described that does not use gradient information and is found to perform better than back propagation on the encoder and parity problems.
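The abstract only names the algorithm, so as a rough illustration of the general idea (gradient-free stochastic minimization of a network's training error, here on the parity problem), the sketch below trains a small feedforward net with a simple adaptive random search in Python. It is a stand-in under stated assumptions, not the paper's iterated adaptive memory stochastic search: the 3-4-1 network shape, the step-size update rule, and all constants are illustrative choices.

    # Gradient-free stochastic minimization on 3-bit parity.
    # Illustrative adaptive random search; NOT the paper's exact algorithm.
    import numpy as np

    rng = np.random.default_rng(0)

    # 3-bit parity: inputs in {0,1}^3, target is the XOR of the bits.
    X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)], float)
    y = X.sum(axis=1) % 2

    def forward(w, X):
        """2-layer net (3-4-1) with sigmoid units; w is a flat parameter vector."""
        W1 = w[:12].reshape(3, 4)
        b1 = w[12:16]
        W2 = w[16:20].reshape(4, 1)
        b2 = w[20]
        h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
        return 1.0 / (1.0 + np.exp(-(h @ W2).ravel() - b2))

    def loss(w):
        return np.mean((forward(w, X) - y) ** 2)

    w = rng.normal(scale=0.5, size=21)    # current point in parameter space
    best = loss(w)
    step = 1.0                            # adaptive perturbation scale
    for _ in range(20000):
        trial = w + rng.normal(scale=step, size=w.size)  # random perturbation
        f = loss(trial)
        if f < best:                      # accept only improving moves
            w, best = trial, f
            step *= 1.1                   # expand the step on success
        else:
            step = max(step * 0.98, 1e-3) # shrink it on failure
    print(f"final MSE = {best:.4f}")
    print("outputs:", np.round(forward(w, X), 2))

On most seeds this drives the parity training error close to zero without ever computing a gradient, which is the property of the approach that the abstract highlights.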
Pages: 1405-1412
Number of pages: 8