A MARKOVIAN EXTENSION OF VALIANT'S LEARNING MODEL

Cited by: 8
Authors
ALDOUS, D. [1]
VAZIRANI, U. [1]
Affiliation
[1] UNIV CALIF BERKELEY, DEPT COMP SCI, BERKELEY, CA 94720
Keywords
DOI
10.1006/inco.1995.1037
CLC classification number
TP301 [Theory and methods];
Subject classification code
081202
Abstract
An "Occam algorithm" learning model maintains a tentative hypothesis consistent with past observations and, when a new observation is inconsistent with the current hypothesis, updates to the next-simplest hypothesis consistent with all observations. In previous work, the observations were assumed to be stochastically independent. This paper initiates the study of such models under weaker, Markovian assumptions on the observations. In the special case where the sequence of hypotheses satisfies a monotonicity condition, it is shown that the number of mistakes made in classifying the first t observations is O(√(t log(1/π(i)))), where π(i) is the stationary probability of the initial state i of the Markov chain. (C) 1995 Academic Press, Inc.
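As a purely illustrative reading of the update rule described above, the following Python sketch implements a conservative learner over a hypothetical class of threshold concepts, with observations generated by a simple random walk standing in for the Markov chain. The concept class, the simplicity ordering, and the chain are all assumptions made for this example, not the paper's construction. Because consistency constraints only accumulate, the learner's threshold never decreases, loosely echoing the monotonicity condition mentioned in the abstract.

    import random

    # Illustrative sketch only (not the authors' construction): a conservative
    # "Occam-style" learner that keeps the simplest hypothesis consistent with
    # all labeled observations seen so far and changes hypothesis only after a
    # mistake. The target concept, hypothesis class, and Markov chain below are
    # hypothetical choices made for demonstration.

    STATES = list(range(10))          # observation space {0, ..., 9} (assumed)
    TARGET_THRESHOLD = 6              # true concept: x >= 6 (assumed)

    def target(x):
        return x >= TARGET_THRESHOLD

    # Hypotheses ordered from "simplest" to more complex (an assumed ordering):
    # hypothesis k labels x positive iff x >= k.
    HYPOTHESES = list(range(len(STATES) + 1))

    def consistent(k, history):
        """Check hypothesis k against all labeled observations seen so far."""
        return all((x >= k) == y for x, y in history)

    def markov_step(x):
        """A clipped random walk on STATES, standing in for the Markov source."""
        return max(0, min(len(STATES) - 1, x + random.choice([-1, 1])))

    def run(t_steps=1000, start_state=0):
        history, mistakes = [], 0
        h = 0                          # start with the simplest hypothesis
        x = start_state
        for _ in range(t_steps):
            label = target(x)
            history.append((x, label))
            if (x >= h) != label:
                mistakes += 1
                # Occam-style update: next-simplest hypothesis consistent with
                # everything seen so far (always exists, since the target is
                # itself a threshold hypothesis).
                h = next(k for k in HYPOTHESES if consistent(k, history))
            x = markov_step(x)
        return mistakes

    if __name__ == "__main__":
        random.seed(0)
        print("mistakes in first 1000 observations:", run())

The slowly mixing random walk makes successive observations highly dependent, which is exactly the regime the Markovian analysis addresses; under the paper's monotonicity condition the mistake count grows sublinearly in t, as in the O(√(t log(1/π(i)))) bound quoted above.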
Pages: 181-186
Number of pages: 6