Sample path optimality for a Markov optimization problem

Cited by: 6
Authors
Hunt, F. Y. [1]
Affiliations
[1] Natl Inst Stand & Technol, Div Math & Comp Sci, Gaithersburg, MD 20899 USA
Keywords
Markov decision process; stochastic control; Azuma's inequality
DOI
10.1016/j.spa.2004.12.005
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline Codes
020208; 070103; 0714
Abstract
We study a unichain Markov decision process, i.e., a controlled Markov process whose state process under a stationary policy is an ergodic Markov chain. The state and action spaces are assumed to be either finite or countable. When the state process is uniformly ergodic and the immediate cost is bounded, a policy that minimizes the long-term expected average cost also has an nth-stage sample path cost that, with probability one, is asymptotically less than the nth-stage sample path cost under any other non-optimal stationary policy with a larger expected average cost. This is a strengthening, in the Markov model case, of the a.s. asymptotically optimal property frequently discussed in the literature. (c) 2005 Elsevier B.V. All rights reserved.
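A schematic restatement of the sample-path claim may help fix ideas; the notation below (the n-stage sample path cost S_n(f), an average-cost-optimal stationary policy f^*, and a competing stationary policy g with a strictly larger expected average cost) is introduced here only for illustration and is not taken from the paper itself.

\[
S_n(f) \;=\; \sum_{t=0}^{n-1} c\bigl(X_t^{f}, f(X_t^{f})\bigr),
\qquad
\mathbb{P}\Bigl( S_n(f^{*}) \le S_n(g) \ \text{for all sufficiently large } n \Bigr) \;=\; 1 .
\]

Heuristically, uniform ergodicity and the boundedness of the immediate cost c allow a concentration bound such as Azuma's inequality (listed among the keywords) to control the deviation of S_n(f)/n from the expected average cost of f, so the positive gap between the two policies' average costs eventually dominates the fluctuations along almost every sample path.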
Pages: 769-779
Number of pages: 11