Locality of global stochastic interaction in directed acyclic networks

Cited by: 20
Author:
Ay, N [1]
Affiliation:
[1] Max Planck Inst Math Sci, D-04103 Leipzig, Germany
Keywords:
DOI:
10.1162/089976602760805368
Chinese Library Classification:
TP18 [Artificial Intelligence Theory]
Discipline codes:
081104; 0812; 0835; 1405
Abstract:
The hypothesis of invariant maximization of interaction (IMI) is formulated within the setting of random fields. According to this hypothesis, learning processes maximize the stochastic interaction of the neurons subject to constraints. We consider the extrinsic constraint of a fixed input distribution on the periphery of the network; our main intrinsic constraint is a directed acyclic network structure. We state first mathematical results on the strong relation between local information flow and global interaction, in order to investigate whether IMI optimization can be controlled in a completely local way. Furthermore, we discuss how this approach relates to optimization according to Linsker's Infomax principle.
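The stochastic interaction the abstract refers to is commonly quantified as the multi-information of the units' joint distribution, i.e. the sum of the marginal entropies minus the joint entropy. The following sketch (function name and toy distributions are illustrative, not from the paper) computes this quantity for a small random field given as a joint probability table:

```python
import numpy as np

def multi_information(p):
    """Multi-information I(p) = sum_i H(X_i) - H(X_1,...,X_n), in bits.

    `p` is an n-dimensional array whose axis i indexes the states of
    unit X_i; entries are (proportional to) joint probabilities.
    """
    p = np.asarray(p, dtype=float)
    p = p / p.sum()  # normalize to a probability distribution

    def entropy(q):
        q = q[q > 0]  # drop zero-probability states (0 log 0 := 0)
        return -np.sum(q * np.log2(q))

    joint_h = entropy(p.ravel())
    # Marginal of X_i: sum the joint table over all other axes.
    marginal_h = sum(
        entropy(p.sum(axis=tuple(j for j in range(p.ndim) if j != i)))
        for i in range(p.ndim)
    )
    return marginal_h - joint_h

# Two perfectly correlated binary units: interaction of 1 bit.
p_corr = np.array([[0.5, 0.0],
                   [0.0, 0.5]])
# Two independent uniform binary units: zero interaction.
p_ind = np.full((2, 2), 0.25)
```

For independent units the multi-information vanishes, and it grows as the units become statistically coupled, which is the quantity an IMI-style learning process would drive upward subject to the network's constraints.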
Pages: 2959-2980
Page count: 22
References (27 items):
[1] Amari, S. Natural gradient works efficiently in learning. Neural Computation, 1998, 10(2): 251-276.
[2] Amari, S. Information geometry on hierarchy of probability distributions. IEEE Transactions on Information Theory, 2001, 47(5): 1701-1711.
[3] Amari, S., 1985, DIFFERENTIAL GEOMETR
[4] Amari, S., 2000, METHODS INFORMATION
[5] Attneave, F. Some informational aspects of visual perception. Psychological Review, 1954, 61(3): 183-193.
[6] Ay, N. Annals of Probability, 2002, 30: 416.
[7] Ay, N., 2001, UNPUB INFORMATIONG E
[8] Ay, N., 2001, UNPUB DYNAMICAL PROP
[9] Barlow, H. Redundancy reduction revisited. Network: Computation in Neural Systems, 2001, 12(3): 241-253.
[10] Barlow, H. B. Unsupervised learning. Neural Computation, 1989, 1(3): 295-311.