COMPARING RECENT ASSUMPTIONS FOR THE EXISTENCE OF AVERAGE OPTIMAL STATIONARY POLICIES

Cited by: 37
Authors
CAVAZOS-CADENA, R
SENNOTT, LI
Affiliations
[1] ILLINOIS STATE UNIV,DEPT MATH,NORMAL,IL 61761
[2] UNIV AUTONOMA AGR ANTONIO NARRO,SALTILLO,MEXICO
Keywords
MARKOV DECISION PROCESSES; AVERAGE COST CRITERION; OPTIMAL STATIONARY POLICIES;
DOI
10.1016/0167-6377(92)90059-C
Chinese Library Classification (CLC)
C93 [Management Science]; O22 [Operations Research];
Subject Classification Codes
070105 ; 12 ; 1201 ; 1202 ; 120202 ;
Abstract
We consider discrete-time average-cost Markov decision processes with a countable state space and finite action sets. Conditions recently proposed by Borkar, by Cavazos-Cadena, by Weber and Stidham, and by Sennott for the existence of an expected average cost optimal stationary policy are compared. The conclusion is that the Sennott conditions are the weakest. We also give an example for which the Sennott axioms hold but the others fail.
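For orientation, a minimal sketch of the average cost criterion that all of the compared conditions address; the notation (one-stage cost c, policy pi, discount factor alpha, initial state x) is standard in this literature and is assumed here rather than taken from the record:

% Long-run expected average cost of a policy \pi started at state x
% (standard definition; notation assumed, not drawn from this record).
\[
  J(x,\pi) = \limsup_{n \to \infty} \frac{1}{n}\,
  E_x^{\pi}\!\left[\sum_{t=0}^{n-1} c(X_t, A_t)\right].
\]
% A stationary policy f^* is expected average cost optimal if
\[
  J(x, f^{*}) = \inf_{\pi} J(x, \pi) \qquad \text{for every state } x.
\]
% Conditions of the Sennott type are typically stated in terms of the
% \alpha-discounted value function (vanishing-discount approach):
\[
  V_{\alpha}(x) = \inf_{\pi} E_x^{\pi}\!\left[\sum_{t=0}^{\infty}
  \alpha^{t} c(X_t, A_t)\right], \qquad 0 < \alpha < 1.
\]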
Pages: 33-37
Page count: 5
References
Total: 8
[1] Borkar, V.S. Control of Markov chains with long-run average cost criterion: the dynamic programming equations. SIAM Journal on Control and Optimization, 1989, 27(3): 642-657.
[2] Borkar, V.S. On minimum cost per unit time control of Markov chains. SIAM Journal on Control and Optimization, 1984, 22(6): 965-978.
[3] Cavazos-Cadena, R. Kybernetika, 1989, 25: 145.
[4] Ross, S.M. Introduction to Stochastic Dynamic Programming. 2014.
[5] Sennott, L. Probability in the Engineering and Informational Sciences, 1989, 3: 247.
[8] Weber, R.R.; Stidham, S. Optimal control of service rates in networks of queues. Advances in Applied Probability, 1987, 19(1): 202-218.