CONVERGENCE OF ONLINE GRADIENT METHOD WITH A PENALTY TERM FOR FEEDFORWARD NEURAL NETWORKS WITH STOCHASTIC INPUTS
Cited by: 2
Authors:
Shao Hongmei (邵红梅)
Wu Wei (吴微)
Li Feng (李峰)
Affiliation:
Department of Applied Mathematics, Dalian University of Technology, Dalian, PRC
Keywords:
Feedforward neural network;
Online gradient algorithm;
Penalty term;
Stochastic input;
Convergence;
Monotonicity;
Boundedness;
DOI:
Not available
CLC number:
O171 [Foundations of Analysis];
Subject classification codes:
0701;
070101;
Abstract:
The online gradient algorithm has been widely used for training feedforward neural networks. In this paper, we prove a weak convergence theorem for an online gradient algorithm with a penalty term, assuming that the training examples are presented in a stochastic order. Both the monotonicity of the error function over the iterations and the boundedness of the weights are guaranteed. We also present a numerical experiment supporting our results.
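The algorithm studied in the abstract can be sketched as follows: online (per-example) gradient descent on a feedforward network, with an L2 penalty (weight decay) added to the error, and examples drawn in a random order each pass. This is a minimal illustrative sketch only; the layer sizes, sigmoid activation, learning rate `eta`, and penalty coefficient `lam` are assumptions, not the paper's exact setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_online(X, y, hidden=4, eta=0.1, lam=1e-3, epochs=50):
    """Online gradient descent with an L2 penalty term for a
    one-hidden-layer feedforward network (illustrative sketch)."""
    n_in = X.shape[1]
    W = rng.normal(scale=0.5, size=(hidden, n_in))   # input -> hidden weights
    v = rng.normal(scale=0.5, size=hidden)           # hidden -> output weights
    for _ in range(epochs):
        # Stochastic inputs: present the examples in a fresh random
        # permutation on every pass through the data.
        for i in rng.permutation(len(X)):
            h = sigmoid(W @ X[i])                    # hidden activations
            out = sigmoid(v @ h)                     # network output
            err = out - y[i]
            # Gradients of 0.5*err^2 + (lam/2)*(||v||^2 + ||W||^2):
            g_out = err * out * (1.0 - out)
            grad_v = g_out * h + lam * v
            grad_W = np.outer(g_out * v * h * (1.0 - h), X[i]) + lam * W
            v -= eta * grad_v                        # online weight updates
            W -= eta * grad_W
    return W, v

# Toy usage on XOR-like data.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])
W, v = train_online(X, y, epochs=2000)
preds = sigmoid(sigmoid(X @ W.T) @ v)
```

The penalty term `lam * w` in each gradient is what keeps the weights bounded during training, which is the boundedness property the paper establishes.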
Pages: 87-96
Page count: 10