Let A be a Las Vegas algorithm, i.e., a randomized algorithm that always produces the correct answer when it stops but whose running time is a random variable. We consider the problem of minimizing the expected time required to obtain an answer from A using strategies that simulate A as follows: run A for a fixed amount of time t_1, then run A independently for a fixed amount of time t_2, and so on. The simulation stops if A completes its execution during any of the runs. Let J = (t_1, t_2, ...) denote such a strategy, and let ℓ(A) = inf_J T(A, J), where T(A, J) is the expected running time of the simulation of A under strategy J and the infimum is taken over all strategies. We describe a simple universal strategy J_univ with the property that, for any algorithm A, T(A, J_univ) = O(ℓ(A) log ℓ(A)). Furthermore, we show that this is the best performance that can be achieved, up to a constant factor, by any universal strategy.
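The abstract does not spell out J_univ, so the following is only an illustrative sketch: it assumes the run-length schedule 1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8, ... commonly associated with this result (the "Luby sequence"), and the routine toy_las_vegas and the helper names are hypothetical stand-ins for A. It shows how a strategy J = (t_1, t_2, ...) simulates A by independent, budget-limited runs.

```python
import random

def luby(i):
    # i-th term (1-indexed) of the schedule 1,1,2,1,1,2,4,1,1,2,1,1,2,4,8,...
    k = 1
    while (1 << k) - 1 < i:               # smallest k with i <= 2^k - 1
        k += 1
    if i == (1 << k) - 1:                 # end of a block: emit 2^(k-1)
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)   # otherwise recurse into the earlier block

def toy_las_vegas(budget, p=0.02):
    # Hypothetical Las Vegas routine: each step finishes with probability p.
    # Returns (answer, steps used) on success, (None, budget) if the budget ran out.
    for step in range(1, budget + 1):
        if random.random() < p:
            return "answer", step
    return None, budget

def simulate(algorithm, strategy):
    # Run `algorithm` repeatedly and independently with budgets strategy(1),
    # strategy(2), ...; stop as soon as some run completes within its budget.
    total = 0
    i = 1
    while True:
        answer, used = algorithm(strategy(i))
        total += used
        if answer is not None:
            return answer, total          # total is one sample of T(A, J)
        i += 1

if __name__ == "__main__":
    answer, steps = simulate(toy_las_vegas, luby)
    print(f"got {answer!r} after {steps} simulated steps")
```

The doubling structure is the point of the schedule: among the first n terms, a run length of 2^k appears roughly n/2^(k+1) times, so every scale of run length is tried at a bounded relative cost, which is what makes an O(ℓ(A) log ℓ(A)) guarantee plausible without knowing the distribution of A's running time.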