Reinforcement learning based algorithms for average cost Markov Decision Processes

Cited by: 7
Authors
Abdulla, Mohammed Shahid [1 ]
Bhatnagar, Shalabh [1 ]
Affiliations
[1] Indian Inst Sci, Dept Comp Sci & Automat, Bangalore 560012, Karnataka, India
Source
DISCRETE EVENT DYNAMIC SYSTEMS-THEORY AND APPLICATIONS | 2007, Vol. 17, Issue 01
Keywords
actor-critic algorithms; two-timescale stochastic approximation; Markov Decision Processes; policy iteration; simultaneous perturbation stochastic approximation; normalized Hadamard matrices; reinforcement learning; TD-learning
DOI
10.1007/s10626-006-0003-y
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline code
0812 [Computer Science and Technology]
Abstract
This article proposes several two-timescale simulation-based actor-critic algorithms for solving infinite-horizon Markov Decision Processes with finite state space under the average cost criterion. Two of the algorithms are for the compact (non-discrete) action setting, while the rest are for finite action spaces. On the slower timescale, all the algorithms perform a gradient search over the corresponding policy spaces using two different Simultaneous Perturbation Stochastic Approximation (SPSA) gradient estimates. On the faster timescale, the differential cost function corresponding to a given stationary policy is updated, and an additional averaging step is performed for enhanced performance. A proof of convergence to a locally optimal policy is presented. Next, we discuss a memory-efficient implementation that uses a feature-based representation of the state space and performs TD(0) learning along the faster timescale. The TD(0) algorithm does not follow an on-line sampling of states but is observed to perform well in our setting. Numerical experiments using the proposed algorithms are presented on a problem of rate-based flow control, where we consider the model of a single bottleneck node in a continuous-time queueing framework. We show performance comparisons of our algorithms with the two-timescale actor-critic algorithms of Konda and Borkar (1999) and Bhatnagar and Kumar (2004). Our algorithms exhibit more than an order of magnitude better performance than those of Konda and Borkar (1999).
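To illustrate the slow-timescale SPSA policy search the abstract describes, here is a minimal sketch on a toy two-state, two-action average-cost MDP. The toy chain, the Boltzmann policy parameterization, and the constant stepsizes are illustrative assumptions, not the paper's rate-based flow-control model; likewise, the fast-timescale differential-cost critic is collapsed into a plain Monte Carlo estimate of the average cost for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-state, 2-action average-cost MDP (hypothetical example).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # P[s, a, s']: transition probabilities
              [[0.7, 0.3], [0.1, 0.9]]])
C = np.array([[1.0, 0.5],                 # C[s, a]: one-step costs
              [2.0, 0.2]])

def avg_cost(theta, n=4000):
    """Simulate the Boltzmann policy pi(a|s) ~ exp(theta[s, a]) and return
    the empirical long-run average cost (standing in for the paper's
    fast-timescale differential-cost evaluation)."""
    s, total = 0, 0.0
    for _ in range(n):
        p = np.exp(theta[s])
        p /= p.sum()
        a = rng.choice(2, p=p)
        total += C[s, a]
        s = rng.choice(2, p=P[s, a])
    return total / n

# Slow timescale: SPSA gradient search over policy parameters. Each
# iteration perturbs ALL components of theta at once with a random
# +/-1 direction and estimates the full gradient from two simulations.
theta = np.zeros((2, 2))
delta, step = 0.2, 0.5
for _ in range(20):
    d = rng.choice([-1.0, 1.0], size=theta.shape)  # simultaneous perturbation
    diff = avg_cost(theta + delta * d) - avg_cost(theta - delta * d)
    theta -= step * diff / (2.0 * delta) * d       # descend the SPSA estimate
```

The key economy of SPSA, which the paper exploits, is visible here: the two-sided gradient estimate needs only two cost evaluations per iteration regardless of the dimension of theta, since every component is perturbed simultaneously.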
Pages: 23-52
Page count: 30
Cited references
26 records in total
[1]
ALTMAN E, 2001, HDB MARKOV DECISION
[2]
[Anonymous], 1976, Mathematics in Science and Engineering
[3]
Bertsekas D. P., 1996, Neuro-Dynamic Programming, V1st
[4]
Bhatnagar S, Panigrahi JR. Actor-critic algorithms for hierarchical Markov decision processes. AUTOMATICA, 2006, 42(4): 637-644
[5]
Bhatnagar S, 2001, IIE TRANS, V33, P245
[6]
Bhatnagar S, Fu MC, Marcus SI, Fard PJ. Optimal structured feedback policies for ABR flow control using two-timescale SPSA. IEEE-ACM TRANSACTIONS ON NETWORKING, 2001, 9(4): 479-491
[7]
Bhatnagar S, Kumar S. A simultaneous perturbation stochastic approximation-based actor-critic algorithm for Markov decision processes. IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2004, 49(4): 592-598
[8]
BHATNAGAR S, 2005, UNPUB REINFORCEMENT
[9]
BHATNAGAR S, 2003, ACM T MODEL COMPUT S, V13, P180