A recent innovation in computational learning theory is the statistical query (SQ) model. The advantage of specifying learning algorithms in this model is that SQ algorithms can be simulated in the probably approximately correct (PAC) model, both in the absence and in the presence of noise. However, simulations of SQ algorithms in the PAC model have non-optimal time and sample complexities. In this paper, we introduce a new method for specifying statistical query algorithms based on a type of relative error, and we provide simulations in the noise-free and noise-tolerant PAC models which yield more efficient algorithms. Requests for estimates of statistics in this new model take the following form: "Return an estimate of the statistic within a 1 ± mu factor, or return ⊥, promising that the statistic is less than theta." In addition to showing that this is a very natural language for specifying learning algorithms, we also show that this new specification is polynomially equivalent to standard SQ, and thus known learnability and hardness results for statistical query learning are preserved. We then give highly efficient PAC simulations of relative error SQ algorithms. We show that the learning algorithms obtained by simulating efficient relative error SQ algorithms, both in the absence of noise and in the presence of malicious noise, have roughly optimal sample complexity. We also show that the simulation of efficient relative error SQ algorithms in the presence of classification noise yields learning algorithms at least as efficient as those obtained through standard methods, and in some cases improved, roughly optimal results are achieved. The sample complexities of all of these simulations are based on the d_nu metric, a type of relative error metric useful for quantities which are small or even zero. We show that uniform convergence with respect to the d_nu metric yields "uniform convergence" with respect to (mu, theta) accuracy. Finally, while we show that many specific learning algorithms can be written as highly efficient relative error SQ algorithms, we also show that, in fact, all SQ algorithms can be written efficiently by proving general upper bounds on the complexity of (mu, theta) queries as a function of the accuracy parameter epsilon. As a consequence of this result, we give general upper bounds on the complexity of learning algorithms achieved through the use of relative error SQ algorithms and the simulations described above. (C) 1998 Academic Press.
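
The (mu, theta) query semantics above can be made concrete with a minimal sketch. The Python snippet below is an illustration only, not the paper's simulation: the oracle, given the true value of a statistic, either returns an estimate within a 1 ± mu factor or returns None (standing in for ⊥) under the promise that the statistic is less than theta. The d_nu function assumes the Haussler-style form of the metric, d_nu(r, s) = |r - s| / (nu + r + s); all names here are hypothetical.

    import random

    def d_nu(r, s, nu):
        # Assumed (Haussler-style) form of the d_nu metric:
        # d_nu(r, s) = |r - s| / (nu + r + s). It behaves like relative
        # error but remains well defined when r and s are small or zero.
        return abs(r - s) / (nu + r + s)

    def relative_error_sq_oracle(true_value, mu, theta):
        # Semantics of a (mu, theta) query: either return an estimate
        # within a (1 +/- mu) factor of the true statistic, or return
        # None (standing in for the symbol ⊥), which promises that the
        # statistic is less than theta. When the statistic is below
        # theta the oracle may answer either way; this sketch answers
        # ⊥ whenever it is permitted to.
        if true_value < theta:
            return None
        multiplicative_noise = random.uniform(-mu, mu)
        return true_value * (1.0 + multiplicative_noise)

    # Example usage: query a statistic whose true value is 0.3,
    # demanding a 10% relative-error estimate with threshold 0.05.
    estimate = relative_error_sq_oracle(0.3, mu=0.1, theta=0.05)

One way to read the design: allowing the ⊥ answer means a simulator need not spend samples pinning down statistics that are already known to be small, which is consistent with the efficiency gains and the role of the d_nu metric described in the abstract.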