Reinforcement learning: A survey

Times cited: 4500
Authors
Kaelbling, LP [1]
Littman, ML [1]
Moore, AW [1]
Affiliation
[1] CARNEGIE MELLON UNIV, PITTSBURGH, PA 15213 USA
Keywords
DOI
10.1613/jair.301
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.
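Several of the abstract's central topics, trial-and-error interaction, the exploration/exploitation trade-off, and learning from delayed reinforcement in a Markov decision process, come together in tabular Q-learning, one of the methods the survey covers. The Python sketch below is illustrative only and is not taken from the paper; the toy chain environment, hyperparameters, and all names are assumptions made for this example.

```python
import random

# Illustrative sketch (not code from the paper): tabular Q-learning with
# epsilon-greedy exploration on a tiny deterministic "chain" MDP.
# The environment, hyperparameters, and names are assumptions for this example.

N_STATES = 5                  # states 0..4; state 4 is terminal
ACTIONS = (-1, +1)            # move left or right along the chain
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# Q[s][a]: estimated discounted return for taking action a in state s
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, move):
    """Chain dynamics: reward 1 only when the terminal state is reached."""
    nxt = min(max(state + move, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for _ in range(500):                       # episodes
    state, done = 0, False
    while not done:
        if random.random() < EPSILON:      # explore: try a random action
            a = random.randrange(len(ACTIONS))
        else:                              # exploit: greedy action, random tie-break
            best = max(Q[state])
            a = random.choice([i for i, v in enumerate(Q[state]) if v == best])
        nxt, reward, done = step(state, ACTIONS[a])
        # one-step temporal-difference update toward the bootstrapped target
        target = reward + (0.0 if done else GAMMA * max(Q[nxt]))
        Q[state][a] += ALPHA * (target - Q[state][a])
        state = nxt

print([[round(v, 2) for v in row] for row in Q])   # learned action values
```

The delayed reward at the end of the chain propagates backward through the bootstrapped targets, so earlier states eventually learn that moving right is better even though no immediate reward is observed there.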
Pages: 237-285
Number of pages: 49