DEMOCRATIC REINFORCEMENT: A PRINCIPLE FOR BRAIN FUNCTION

Cited by: 41
Authors
STASSINOPOULOS, D
BAK, P
Affiliation
[1] Brookhaven National Laboratory, Upton
Source
PHYSICAL REVIEW E | 1995 / Vol. 51 / No. 05
DOI
10.1103/PhysRevE.51.5033
Chinese Library Classification
O35 [Fluid Mechanics]; O53 [Plasma Physics]
Subject Classification Codes
070204; 080103; 080704
Abstract
We introduce a simple "toy" brain model. The model consists of a set of randomly connected, or layered, integrate-and-fire neurons. Inputs to and outputs from the environment are connected randomly to subsets of neurons. The connections between firing neurons are strengthened or weakened according to whether the action was successful or not. Unlike previous reinforcement-learning algorithms, the feedback from the environment is democratic: it affects all neurons in the same way, irrespective of their position in the network and independently of the output signal. Thus no unrealistic back-propagation or other external computation is needed. This is accomplished by a global threshold regulation which allows the system to self-organize into a highly susceptible, possibly "critical" state with low activity and sparse connections between firing neurons. The low activity permits memory in quiescent areas to be conserved, since only firing neurons are modified when new information is being taught. © 1995 The American Physical Society.
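As a rough illustration of the mechanism the abstract describes, the following minimal Python/NumPy sketch runs a democratic-reinforcement loop: a single scalar success signal is broadcast identically to every synapse between co-firing neurons, and a global threshold is regulated to hold activity low. The network size, the constants DELTA, GAMMA, and TARGET_ACTIVITY, and the toy target pattern are illustrative assumptions, not the parameters of the paper's actual model.

import numpy as np

rng = np.random.default_rng(0)

N = 64                                                # number of neurons (assumed size)
W = rng.random((N, N)) * (rng.random((N, N)) < 0.2)   # sparse random weights (assumed density)
np.fill_diagonal(W, 0.0)                              # no self-connections

inputs = rng.choice(N, 4, replace=False)              # random input subset (assumed size)
outputs = rng.choice(N, 4, replace=False)             # random output subset (assumed size)

theta = 1.0              # global firing threshold, regulated below
DELTA = 0.02             # democratic weight-update step (assumed)
GAMMA = 0.05             # threshold-regulation rate (assumed)
TARGET_ACTIVITY = 0.05   # low mean activity keeps the state susceptible

target_pattern = rng.random(outputs.size) < 0.5       # arbitrary toy task (assumed)

for step in range(200):
    # Integrate-and-fire sweep: input neurons are driven; other neurons
    # fire when their summed synaptic input exceeds the global threshold.
    fired = np.zeros(N, dtype=bool)
    fired[inputs] = True
    for _ in range(5):                                # a few propagation passes
        fired = fired | (W @ fired > theta)

    # Democratic feedback: one scalar reward is broadcast to every synapse
    # between co-firing neurons, independent of a neuron's position in the
    # network and of the output signal; no back-propagated error is used.
    success = np.array_equal(fired[outputs], target_pattern)
    r = 1.0 if success else -1.0
    # Only synapses between firing neurons are modified, so memory stored
    # in quiescent parts of the network is conserved.
    co_firing = np.outer(fired, fired) & (W > 0)
    W[co_firing] = np.clip(W[co_firing] + r * DELTA, 0.0, None)

    # Global threshold regulation: nudge theta so mean activity relaxes
    # toward a low target, favoring a sparse, highly susceptible state.
    theta += GAMMA * (fired.mean() - TARGET_ACTIVITY)

Whether this toy loop actually learns its target depends on details the abstract does not fix (connectivity, step sizes); the point of the sketch is only the structure of the update, namely global position-independent feedback combined with threshold self-regulation.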
Pages: 5033-5039
Number of pages: 7
Related Papers
6 records in total
[1] Amit D.J., Modelling Brain Function: The World of Attractor Neural Networks, (1989)
[2] Hertz J., Krogh A., Palmer R.G., Introduction to the Theory of Neural Computation, (1991)
[3] Barto A.G., Anandan P., IEEE Trans. Syst. Man Cybern., 15, (1985)
[4] Barto A.G., Human Neurobiology, 4, (1985)
[5] Alstrom P., Stassinopoulos D., Phys. Rev. E, 51, (1995)
[6] Bak P., Tang C., Wiesenfeld K., Phys. Rev. Lett., 59, (1987)