Recurrent Infomax Generates Cell Assemblies, Neuronal Avalanches, and Simple Cell-Like Selectivity

Cited by: 45
Authors
Tanaka, Takuma [1 ]
Kaneko, Takeshi [1 ,2 ]
Aoyagi, Toshio [2 ,3 ]
Affiliations
[1] Kyoto Univ, Grad Sch Med, Dept Morphol Brain Sci, Kyoto 6068501, Japan
[2] Japan Sci & Technol Agcy, CREST, Kawaguchi, Saitama 3320012, Japan
[3] Kyoto Univ, Dept Appl Anal & Complex Dynam Syst, Grad Sch Informat, Kyoto 6068501, Japan
Keywords
SEQUENCES; REPLAY; PROPAGATION; EMERGENCE; NETWORKS; DYNAMICS; SPIKING; STATES; RULE;
DOI
10.1162/neco.2008.03-08-727
Chinese Library Classification (CLC): TP18 [Artificial intelligence theory]
Discipline classification codes: 081104; 0812; 0835; 1405
Abstract
Recently, multineuronal recording has allowed us to observe patterned firing, synchronization, oscillation, and global state transitions in the recurrent networks of central nervous systems. We propose a learning algorithm based on the process of information maximization in a recurrent network, which we call recurrent infomax (RI). RI maximizes information retention and thereby minimizes information loss through time in a network. We find that feeding external inputs derived from photographs of natural scenes into an RI-based model of a recurrent network results in the appearance of Gabor-like selectivity quite similar to that found in simple cells of the primary visual cortex. Without external input, this network exhibits cell assembly-like and synfire chain-like spontaneous activity as well as critical neuronal avalanches. In addition, we find that RI embeds externally presented temporal firing patterns into the network so that it spontaneously reproduces these patterns after learning. RI provides a simple framework for explaining a wide range of phenomena observed in in vivo and in vitro neuronal networks, and it offers a novel, information-theoretic understanding of experimental results on multineuronal activity and plasticity.
Pages: 1038-1067
Number of pages: 30