STOCHASTIC DYNAMICS OF SUPERVISED LEARNING

Cited by: 16
Authors
HANSEN, LK
PATHRIA, R
SALAMON, P
Affiliations
[1] San Diego State Univ, Dept of Mathematical Sciences, San Diego, CA 92182, USA
[2] Univ of Waterloo, Dept of Physics, Waterloo, ON N2L 3G1, Canada
Source
Journal of Physics A: Mathematical and General | 1993, Vol. 26, No. 1
DOI
10.1088/0305-4470/26/1/011
Chinese Library Classification
O4 [Physics];
Subject Classification Code
0702;
Abstract
The stochastic evolution of adiabatic (slow) backpropagation training of a neural network is discussed, and a Fokker-Planck equation for the post-training distribution function in network space is derived. The distribution we obtain differs from the one given by Radons et al. Studying the character of the post-training distribution, we find that, except under very special circumstances, the distribution will be non-Gibbsian. The validity of the present approach is tested on a simple backpropagation learning system in one dimension, which can also be solved analytically. Implications of the Fokker-Planck approach for general situations are examined in the local linear approximation. Surprisingly, we find that the post-training distribution is isotropic close to its peak, and hence simpler than the corresponding Gibbs distribution.
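As a rough illustration of the setting described in the abstract, the sketch below simulates slow (small-learning-rate) online gradient descent for a one-dimensional linear unit trained on noisy examples and samples the post-training weight distribution over many independent runs. This is a minimal, hypothetical example, not the authors' system or code: the teacher weight W_TRUE, learning rate ETA, noise level SIGMA, and run counts are illustrative assumptions, and the model is a plain least-mean-squares unit.

```python
# Minimal sketch (not the authors' code): slow online gradient descent for a
# one-dimensional linear unit y = w*x trained on noisy examples.
# All constants below are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

W_TRUE = 1.0     # teacher weight generating the training examples (assumed)
ETA = 0.01       # small learning rate, i.e. the adiabatic/slow-training regime (assumed)
SIGMA = 0.5      # amplitude of the output noise on the examples (assumed)
N_RUNS = 2000    # independent training runs used to sample the post-training distribution
N_STEPS = 5000   # online updates per run

# One weight per run; all runs are updated in parallel.
w = rng.normal(size=N_RUNS)
for _ in range(N_STEPS):
    x = rng.normal(size=N_RUNS)                        # input example for each run
    y = W_TRUE * x + SIGMA * rng.normal(size=N_RUNS)   # noisy target
    w += ETA * (y - w * x) * x                         # stochastic gradient step on (y - w*x)^2 / 2

# In the small-ETA (Fokker-Planck) limit the stationary distribution near the
# cost minimum is approximately Gaussian, with a width that shrinks with ETA.
print("mean of post-training weights:", w.mean())
print("std  of post-training weights:", w.std())
```

Histogramming the final weights and varying ETA gives a quick feel for the kind of post-training distribution the paper analyses, without reproducing its analytical results.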
Pages: 63-71
Number of pages: 9
References (8)
[1] Hertz J A, Krogh A, Thorbergsson G I. Phase transitions in simple learning. Journal of Physics A: Mathematical and General, 1989, 22(12): 2133-2150.
[2] Levin E. Proceedings of the IEEE, 1990, 78: 1574.
[3] Levin E. Proceedings of the 2nd Annual Workshop on Computational Learning Theory, 1989: 280.
[4] Radons G. Parallel Processing in Neural Systems and Computers, 1990: 261.
[5] Rumelhart D E. Ency Database Syst, 1986: 45.
[6] Sompolinsky H, Tishby N, Seung H S. Learning from examples in large neural networks. Physical Review Letters, 1990, 65(13): 1683-1686.
[7] Van Kampen N G. Stochastic Processes, 1983.
[8] Widrow B. Adaptive Signal Processing, 1985.