Learning Coefficient of Generalization Error in Bayesian Estimation and Vandermonde Matrix-Type Singularity

Cited by: 11
Authors
Aoyagi, Miki [1 ]
Nagata, Kenji [2 ]
Affiliations
[1] Nihon Univ, Coll Sci & Technol, Dept Math, Chiyoda Ku, Kanda 1018308, Japan
[2] Univ Tokyo, Grad Sch Frontier Sci, Kashiwa, Chiba 2778561, Japan
Keywords
MONTE-CARLO METHOD; STOCHASTIC COMPLEXITY; NEURAL NETWORKS; MODEL; INFORMATION; MACHINES; CRITERION;
DOI
10.1162/NECO_a_00271
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The term algebraic statistics arises from the study of probabilistic models and techniques for statistical inference using methods from algebra and geometry (Sturmfels, 2009). The purpose of our study is to consider the generalization error and stochastic complexity in learning theory by using the log-canonical threshold in algebraic geometry. Such thresholds correspond to the main term of the generalization error in Bayesian estimation and are called learning coefficients (Watanabe, 2001a, 2001b). The learning coefficient measures the learning efficiency of hierarchical learning models. In this letter, we consider learning coefficients for Vandermonde matrix-type singularities by using a new approach: focusing on the generators of the ideal that defines the singularities. We give new tight bounds on the learning coefficients for Vandermonde matrix-type singularities, as well as their explicit values under certain conditions. By applying our results, we obtain the learning coefficients of three-layered neural networks and normal mixture models.
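As background for the abstract (a standard-notation sketch, not part of the original record): the learning coefficient referred to above is the log-canonical threshold λ, which by Watanabe's singular learning theory governs the leading asymptotics of the Bayesian stochastic complexity F_n and the expected generalization error G_n:

```latex
% Asymptotics from Watanabe's singular learning theory (sketch):
% F_n = Bayesian stochastic complexity, G_n = generalization error,
% \lambda = learning coefficient (log-canonical threshold), m = its multiplicity.
F_n = \lambda \log n - (m - 1)\log\log n + O_p(1),
\qquad
\mathbb{E}[G_n] = \frac{\lambda}{n} - \frac{m - 1}{n \log n}
  + o\!\left(\frac{1}{n \log n}\right).
```

For regular statistical models λ equals d/2 (half the parameter dimension) and m = 1; for singular models such as neural networks and normal mixtures, λ is generally smaller, which is why computing it for specific singularities, as this paper does, quantifies the learning efficiency.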
Pages: 1569-1610
Page count: 42