BAYESIAN INVARIANT MEASUREMENTS OF GENERALIZATION

Cited by: 23
Authors
ZHU, HY [1 ]
ROHWER, R [1 ]
Affiliations
[1] UNIV ASTON, DEPT COMP SCI & APPL MATH, BIRMINGHAM B4 7ET, W MIDLANDS, ENGLAND
DOI
10.1007/BF02309013
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The problem of evaluating different learning rules and other statistical estimators is analysed. A new general theory of statistical inference is developed by combining Bayesian decision theory with information geometry; the resulting theory is coherent and invariant. For each sample a unique ideal estimate exists and is given by an average over the posterior. An optimal estimate within a model is given by a projection of the ideal estimate onto that model. The ideal estimate is a sufficient statistic of the posterior, so practical learning rules are functions of the ideal estimator. If the sole purpose of learning is to extract information from the data, the learning rule must also approximate the ideal estimator. This framework applies to both Bayesian and non-Bayesian methods, to arbitrary statistical models, and to supervised, unsupervised and reinforcement learning schemes.
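The abstract's central construction, the "ideal estimate" as an average over the posterior, and the optimal within-model estimate as a projection of it, can be illustrated numerically. The following is a minimal sketch under assumptions not taken from the paper: a Bernoulli observation model, a uniform prior on a discrete parameter grid, and KL divergence as the projection criterion; all variable names are illustrative.

```python
import numpy as np

# Discrete grid over the Bernoulli success probability theta,
# with a uniform prior (an illustrative choice).
thetas = np.linspace(0.01, 0.99, 99)
prior = np.full_like(thetas, 1.0 / len(thetas))

data = [1, 1, 0, 1]  # observed coin flips
k, n = sum(data), len(data)

# Likelihood of the data under each theta, then the posterior
# over theta by Bayes' rule (normalized on the grid).
lik = thetas**k * (1 - thetas)**(n - k)
post = prior * lik
post /= post.sum()

# "Ideal estimate": the posterior average of the predictive
# distributions; for Bernoulli, p(x=1 | theta) = theta.
ideal_p1 = float(np.sum(post * thetas))

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

# "Projection onto the model": the member of the Bernoulli family
# closest in KL to the ideal estimate.  Since the family contains
# Bernoulli(ideal_p1) itself, the minimizer recovers ideal_p1.
candidates = np.linspace(0.01, 0.99, 99)
best = candidates[np.argmin([kl_bernoulli(ideal_p1, q) for q in candidates])]
print(ideal_p1, best)
```

With a uniform prior the posterior average lands near the Beta-posterior mean (k+1)/(n+2) = 2/3, and the KL projection onto the same family simply returns it; the projection step only becomes non-trivial when the model family is smaller than the family generating the posterior.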
Pages: 28-31
Number of pages: 4
References
16 references in total
[1]  
AKAIKE H, 1980, J ROY STAT SOC B MET, V42, P46
[2]  
AMARI S, 1985, SPRINGER LECTURE NOTES IN STATISTICS, V28
[3]  
BARNDORFF-NIELSEN, OE, 1986, INT STAT REV, V54, P83
[4]  
DAWID AP, 1973, J R STAT SOC B, V35, P189
[5]  
DeGroot M.H., 2005, OPTIMAL STATISTICAL DECISIONS
[6]  
Edwards A.W.F., 1972, LIKELIHOOD: AN ACCOUNT OF THE STATISTICAL CONCEPT OF LIKELIHOOD
[7]  
Kass R. E., 1989, STAT SCI, V4, P188
[8]   ON INFORMATION AND SUFFICIENCY [J].
KULLBACK, S ;
LEIBLER, RA .
ANNALS OF MATHEMATICAL STATISTICS, 1951, 22 (01) :79-86
[9]   A PRACTICAL BAYESIAN FRAMEWORK FOR BACKPROPAGATION NETWORKS [J].
MACKAY, DJC .
NEURAL COMPUTATION, 1992, 4 (03) :448-472