BAYES FACTORS

Cited by: 12288
Authors
KASS, RE [1 ]
RAFTERY, AE [1 ]
Affiliation
[1] UNIV WASHINGTON, DEPT STAT, SEATTLE, WA 98195 USA
Funding
US National Science Foundation; US National Institutes of Health;
Keywords
BAYESIAN HYPOTHESIS TESTS; BIC; IMPORTANCE SAMPLING; LAPLACE METHOD; MARKOV CHAIN MONTE CARLO; MODEL SELECTION; MONTE CARLO INTEGRATION; POSTERIOR MODEL PROBABILITIES; POSTERIOR ODDS; QUADRATURE; SCHWARZ CRITERION; SENSITIVITY ANALYSIS; STRENGTH OF EVIDENCE;
DOI
10.1080/01621459.1995.10476572
Chinese Library Classification (CLC)
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Subject Classification Codes
020208 ; 070103 ; 0714 ;
Abstract
In a 1935 paper and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this article we review and discuss the uses of Bayes factors in the context of five scientific applications in genetics, sports, ecology, sociology, and psychology. We emphasize the following points:
  • From Jeffreys' Bayesian viewpoint, the purpose of hypothesis testing is to evaluate the evidence in favor of a scientific theory.
  • Bayes factors offer a way of evaluating evidence in favor of a null hypothesis.
  • Bayes factors provide a way of incorporating external information into the evaluation of evidence about a hypothesis.
  • Bayes factors are very general and do not require alternative models to be nested.
  • Several techniques are available for computing Bayes factors, including asymptotic approximations that are easy to compute using the output from standard packages that maximize likelihoods.
  • In "nonstandard" statistical models that do not satisfy common regularity conditions, it can be technically simpler to calculate Bayes factors than to derive non-Bayesian significance tests.
  • The Schwarz criterion (or BIC) gives a rough approximation to the logarithm of the Bayes factor, which is easy to use and does not require evaluation of prior distributions.
  • When one is interested in estimation or prediction, Bayes factors may be converted to weights to be attached to various models, so that a composite estimate or prediction may be obtained that takes account of structural or model uncertainty.
  • Algorithms have been proposed that allow model uncertainty to be taken into account when the class of models initially considered is very large.
  • Bayes factors are useful for guiding an evolutionary model-building process.
  • It is important, and feasible, to assess the sensitivity of conclusions to the prior distributions used.
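The abstract refers to three quantities without displaying them: the Bayes factor itself, its Schwarz (BIC) approximation, and the conversion of marginal likelihoods into model weights. The following is a minimal sketch of these standard relationships in our own notation (B_{10}, S, d_i, M_k are illustrative symbols, not taken from the record above); consult the paper for the precise statements and regularity conditions.

\[
B_{10} \;=\; \frac{p(D \mid H_1)}{p(D \mid H_0)},
\qquad
\frac{p(H_1 \mid D)}{p(H_0 \mid D)} \;=\; B_{10}\,\frac{p(H_1)}{p(H_0)} .
\]

\[
S \;=\; \log p(D \mid \hat\theta_1, H_1) \;-\; \log p(D \mid \hat\theta_0, H_0) \;-\; \tfrac{1}{2}\,(d_1 - d_0)\,\log n
\;\approx\; \log B_{10},
\]
where \(\hat\theta_i\) is the maximum likelihood estimate under \(H_i\), \(d_i\) the number of free parameters, and \(n\) the sample size; \(2S\) is the difference in BIC between the two models.

\[
p(M_k \mid D) \;=\; \frac{p(D \mid M_k)\,p(M_k)}{\sum_{l} p(D \mid M_l)\,p(M_l)},
\]
so that a composite estimate or prediction can be formed by weighting each model's estimate by its posterior model probability.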
Pages: 773 / 795
Page count: 23