An optimal algorithm for Monte Carlo estimation

Cited by: 159
Authors
Dagum, P [1]
Karp, R
Luby, M
Ross, S
Affiliations
[1] Stanford Univ, Sch Med, Sect Med Informat, Stanford, CA 94305 USA
[2] Int Comp Sci Inst, Berkeley, CA 94704 USA
[3] Univ Calif Berkeley, Div Comp Sci, Berkeley, CA 94704 USA
[4] Univ Calif Berkeley, Dept Ind Engn & Operat Res, Berkeley, CA 94704 USA
Keywords
stopping rule; approximation algorithm; Monte Carlo estimation; sequential estimation; stochastic approximation;
DOI
10.1137/S0097539797315306
Chinese Library Classification (CLC)
TP301 [Theory and Methods];
Discipline code
081202;
Abstract
A typical approach to estimating an unknown quantity mu is to design an experiment that produces a random variable Z distributed in [0, 1] with E[Z] = mu, run this experiment independently a number of times, and use the average of the outcomes as the estimate. In this paper, we consider the case when no a priori information about Z is known except that it is distributed in [0, 1]. We describe an approximation algorithm AA which, given epsilon and delta, when running independent experiments with respect to any Z, produces an estimate that is within a factor 1+epsilon of mu with probability at least 1-delta. We prove that the expected number of experiments run by AA (which depends on Z) is optimal to within a constant factor for every Z.
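The core idea behind AA is a stopping rule: keep drawing independent samples of Z until the running sum first crosses a threshold determined only by epsilon and delta, then divide the threshold by the number of samples drawn. Below is a minimal Python sketch of that stopping-rule phase; the threshold constant follows the stopping-rule theorem of Dagum, Karp, Luby, and Ross, but treat it as illustrative rather than the paper's full three-phase algorithm.

```python
import math
import random

def stopping_rule_estimate(sample, eps, delta):
    """Estimate mu = E[Z] for a [0, 1]-valued random variable Z.

    Draws samples until the running sum first reaches a threshold
    Upsilon1 depending only on eps and delta, then returns
    Upsilon1 / N, where N is the number of samples drawn.
    """
    # Threshold from the stopping-rule theorem (constant illustrative).
    upsilon = 4 * (math.e - 2) * math.log(2 / delta) / eps**2
    upsilon1 = 1 + (1 + eps) * upsilon
    total, n = 0.0, 0
    while total < upsilon1:
        total += sample()  # one independent experiment producing Z in [0, 1]
        n += 1
    return upsilon1 / n

# Example: estimate the mean of a Bernoulli(0.3) experiment.
random.seed(0)
est = stopping_rule_estimate(lambda: float(random.random() < 0.3),
                             eps=0.1, delta=0.05)
```

Note that the sample count is itself random and roughly proportional to 1/mu, which is why the paper measures optimality in terms of the expected number of experiments for each Z.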
Pages: 1484-1496
Page count: 13