Multi-armed bandits with episode context

Cited by: 131
Authors
Rosin, Christopher D. [1]
Affiliation
[1] Parity Comp Inc, San Diego, CA 92121 USA
Keywords
Computational learning theory; Multi-armed bandits; Contextual bandits; UCB; PUCB; Computer Go; Exploration;
DOI
10.1007/s10472-011-9258-6
CLC classification
TP18 [Artificial intelligence theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
A multi-armed bandit episode consists of n trials, each allowing selection of one of K arms, resulting in a payoff drawn from a distribution over [0,1] associated with that arm. We assume contextual side information is available at the start of the episode. This context enables an arm predictor to identify possibly favorable arms, but predictions may be imperfect, so they need to be combined with further exploration during the episode. Our setting is an alternative to classical multi-armed bandits, which provide no contextual side information, and to contextual bandits, which provide new context at each individual trial. Multi-armed bandits with episode context arise naturally, for example in computer Go, where context is used to bias move decisions made by a multi-armed bandit algorithm. The UCB1 algorithm for multi-armed bandits achieves worst-case regret bounded by O(sqrt(K n log(n))). We seek to improve this using episode context, particularly in the case where K is large. Using a predictor that places weight M_i > 0 on arm i, with weights summing to 1, we present the PUCB algorithm, which achieves regret O((1/M_*) sqrt(n log(n))), where M_* is the weight the predictor places on the optimal arm. We illustrate the behavior of PUCB with small simulation experiments, present extensions that provide additional capabilities for PUCB, and describe methods for obtaining suitable predictors for use with PUCB.
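The setting in the abstract can be illustrated with a small simulation. The sketch below is a hedged approximation, not the paper's algorithm: `pucb_select` uses a simplified PUCB-style index in which a UCB1-like exploration bonus is combined with a penalty proportional to 1/M_i, so arms the predictor gives little weight are explored less. The constant `c`, the penalty form, and the untried-arm rule are illustrative choices; the paper's exact index and constants differ.

```python
import math
import random

def pucb_select(counts, means, weights, t, c=1.2):
    """Pick an arm by a simplified PUCB-style index (illustrative only):
    mean payoff plus a UCB-like exploration bonus, minus a penalty
    proportional to 1/M_i that discourages arms the predictor disfavors."""
    def score(i):
        if counts[i] == 0:
            # Untried arms are ranked by predictor weight alone, so the
            # predictor steers the earliest trials (an assumed convention).
            return weights[i]
        bonus = c * math.sqrt(math.log(t) / counts[i])
        penalty = (2.0 / weights[i]) * math.sqrt(math.log(t) / t)
        return means[i] + bonus - penalty
    return max(range(len(counts)), key=score)

def run_episode(payoff_probs, weights, n_trials, seed=0):
    """Simulate one episode of Bernoulli-payoff arms; return per-arm play counts."""
    rng = random.Random(seed)
    K = len(payoff_probs)
    counts, sums = [0] * K, [0.0] * K
    for t in range(1, n_trials + 1):
        means = [sums[i] / counts[i] if counts[i] else 0.0 for i in range(K)]
        arm = pucb_select(counts, means, weights, t)
        reward = 1.0 if rng.random() < payoff_probs[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

# Arm 1 has the highest payoff probability and the predictor favors it (M_1 = 0.6),
# so it should receive the large majority of the episode's trials.
counts = run_episode([0.2, 0.8, 0.3], [0.2, 0.6, 0.2], n_trials=500)
```

Because the penalty term decays like sqrt(log(t)/t), the predictor dominates early trials but its influence vanishes over the episode, leaving ordinary UCB-style exploration to correct an imperfect predictor, which matches the motivation described in the abstract.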
Pages: 203-230
Page count: 28
Related References
31 records total
[1] [Anonymous], 2007, ICML 2007.
[2] Audibert, Jean-Yves; Munos, Remi; Szepesvari, Csaba. Exploration-exploitation tradeoff using variance estimates in multi-armed bandits. Theoretical Computer Science, 2009, 410(19): 1876-1902.
[3] Auer, P. SIAM J Comput, 2003, V32: P48. DOI 10.1137/S0097539701398375.
[4] Auer, P; Cesa-Bianchi, N; Fischer, P. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 2002, 47(2-3): 235-256.
[5] Bouzy, B; Cazenave, T. Computer Go: an AI oriented survey. Artificial Intelligence, 2001, 132(1): 39-103.
[6] Bouzy, B, 2005, IEEE 2005 S COMP INT: P176.
[7] Bouzy, B, 2003, ACG, V263: P159.
[8] Bubeck, S, 2009, 22nd Annual Conference on Learning Theory.
[9] Bubeck, S, 2009, Lect Notes Artif Int, V5809: P23. DOI 10.1007/978-3-642-04414-4_7.
[10] Cai, XD, 2007, Stud Comput Intell, V63: P443.