ON FOKKER-PLANCK APPROXIMATIONS OF ON-LINE LEARNING PROCESSES

Cited by: 18
Author
HESKES, T
Affiliations
[1] UNIV ILLINOIS, DEPT PHYS, URBANA, IL 61801
[2] CATHOLIC UNIV NIJMEGEN, DEPT MED PHYS & BIOPHYS, 6525 EZ NIJMEGEN, NETHERLANDS
Source
JOURNAL OF PHYSICS A-MATHEMATICAL AND GENERAL | 1994 / Vol. 27 / No. 15
DOI
10.1088/0305-4470/27/15/015
Chinese Library Classification
O4 [Physics]
Discipline Classification Code
0702
Abstract
There are several ways to describe on-line learning in neural networks. The two major ones are a continuous-time master equation and a discrete-time random-walk equation. The random-walk equation is obtained when the time intervals between subsequent learning steps are fixed; the master equation results when the time intervals are drawn from a Poisson distribution. Following Van Kampen, we give a rigorous expansion of both the master and the random-walk equation in the limit of small learning parameters. The results explain the difference between the Fokker-Planck approaches proposed by Radons et al and Hansen et al. Furthermore, we find that the mathematical validity of these approaches is restricted to local properties of the learning process. Yet Fokker-Planck approaches are often suggested as models to study global properties, such as mean first passage times and stationary solutions. To check their accuracy and usefulness in these situations, we compare simulations of two learning procedures with exactly the same drift vector and diffusion matrix, the only moments that are considered in the Fokker-Planck approximation. The simulations show that the mean first passage times for these two learning procedures diverge rather than converge for small learning parameters. We conclude that Fokker-Planck approaches are not accurate enough to compute global properties of on-line learning processes.
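The abstract's central experiment can be illustrated with a minimal sketch (not taken from the paper; the cost function, learning parameter, and noise models below are illustrative assumptions): two on-line update rules with the same drift and the same diffusion, the only moments a Fokker-Planck approximation retains, can still show very different mean first passage times, because escape over a barrier is sensitive to the higher moments of the per-step noise. Here the "learning rules" are 1D updates on a tilted double-well cost, differing only in the noise shape: Gaussian(0,1) versus Rademacher (+/-1), which share mean 0 and variance 1.

```python
import numpy as np

def drift(w):
    """Negative gradient of the illustrative cost E(w) = w^4/4 - w^2/2 + 0.1*w,
    a tilted double well with a shallow minimum near w = 0.95, a deep minimum
    near w = -1.05, and a barrier top near w = 0.10."""
    return -(w**3 - w + 0.1)

def mean_first_passage_time(noise, eta=0.1, w0=0.95, barrier=0.10,
                            n_trials=20, max_steps=100_000, seed=0):
    """Average number of on-line steps until w, started in the shallow
    minimum, first crosses the barrier top (each trial capped at max_steps)."""
    rng = np.random.default_rng(seed)
    times = []
    for _ in range(n_trials):
        w, t = w0, 0
        while w > barrier and t < max_steps:
            # on-line update with learning parameter eta; drift and the
            # variance of the noise term are identical for both rules
            w += eta * (drift(w) + noise(rng))
            t += 1
        times.append(t)
    return float(np.mean(times))

gaussian = lambda rng: rng.standard_normal()    # unbounded noise
binary = lambda rng: rng.choice([-1.0, 1.0])    # bounded noise, same variance

t_gauss = mean_first_passage_time(gaussian)
t_binary = mean_first_passage_time(binary)
print(f"MFPT, Gaussian noise: {t_gauss:.0f} steps; binary noise: {t_binary:.0f} steps")
```

A Fokker-Planck approximation of either rule keeps only `drift` and the noise variance and so cannot distinguish the two; the paper's point is that the resulting escape-time predictions become unreliable precisely in the small-`eta` regime where the approximation is usually invoked.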
Pages: 5145-5160
Page count: 16
References
Total: 23
[1]   RELATION BETWEEN MASTER EQUATIONS AND RANDOM WALKS AND THEIR SOLUTIONS [J].
BEDEAUX, D ;
LAKATOS-LINDENBERG, K ;
SHULER, KE .
JOURNAL OF MATHEMATICAL PHYSICS, 1971, 12 (10) :2116-+
[2]  
FINNOFF W, 1992, ADV NEURAL INFORMATI, V5, P459
[3]   ON THE PROBLEM OF LOCAL MINIMA IN BACKPROPAGATION [J].
GORI, M ;
TESI, A .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 1992, 14 (01) :76-86
[4]  
GROSSBERG S, 1969, J STATISTICAL PHYSIC, V48, P105
[5]   STOCHASTIC DYNAMICS OF SUPERVISED LEARNING [J].
HANSEN, LK ;
PATHRIA, R ;
SALAMON, P .
JOURNAL OF PHYSICS A-MATHEMATICAL AND GENERAL, 1993, 26 (01) :63-71
[6]  
HANSON SJ, 1989, ADV NEURAL INFORMATI, V1, P177
[7]  
HESKES T, 1984, P EUROPEAN S ARTIFIC, P223
[8]   LEARNING IN NEURAL NETWORKS WITH LOCAL MINIMA [J].
HESKES, TM ;
SLIJPEN, ETP ;
KAPPEN, B .
PHYSICAL REVIEW A, 1992, 46 (08) :5221-5231
[9]  
HESKES TM, 1993, MATH FDN NEURAL NETW, P199
[10]   SELF-ORGANIZED FORMATION OF TOPOLOGICALLY CORRECT FEATURE MAPS [J].
KOHONEN, T .
BIOLOGICAL CYBERNETICS, 1982, 43 (01) :59-69