Stochastic complexities of reduced rank regression in Bayesian estimation

Cited by: 51
Authors
Aoyagi, M
Watanabe, S
Affiliations
[1] Sophia Univ, Dept Math, Chiyoda-ku, Tokyo 102-8554, Japan
[2] Tokyo Inst Technol, Precis & Intelligence Lab, Midori-ku, Yokohama, Kanagawa 226-8503, Japan
Keywords
Stochastic complexity; generalization error; reduced rank regression models; non-regular learning machines; Bayesian estimation; resolution of singularities; Kullback information; zeta function
DOI
10.1016/j.neunet.2005.03.014
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Reduced rank regression extracts essential information from examples of input-output pairs. It can be understood as a three-layer neural network with linear hidden units. However, reduced rank approximation is a non-regular statistical model with a degenerate Fisher information matrix, and its generalization error had remained unknown even in statistics. In this paper, we give the exact asymptotic form of its generalization error in Bayesian estimation, based on the resolution of the learning machine's singularities. For this purpose, we calculate the maximum pole of the zeta function of the learning theory. We propose a new method of recursive blow-ups which yields a complete desingularization of the reduced rank approximation. (c) 2005 Elsevier Ltd. All rights reserved.
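For context, the sketch below illustrates the reduced rank regression estimator itself, i.e. the rank-constrained least-squares fit that the abstract interprets as a three-layer linear network with coefficient matrix B = A1 @ A2. This is not the paper's Bayesian analysis or its blow-up construction; it is a minimal illustration, and all dimensions and variable names here are hypothetical. It uses the standard fact that the rank-H minimizer of the squared error is the least-squares solution projected onto the top right singular vectors of its fitted values.

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Rank-constrained least-squares fit of Y ~ X B.

    Illustrative sketch only (not the paper's Bayesian treatment):
    project the ordinary least-squares solution onto the leading
    right singular vectors of the fitted values X @ B_ols.
    """
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)    # M x N coefficient matrix
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    V = Vt[:rank].T                                  # N x rank projection basis
    return B_ols @ V @ V.T                           # rank-constrained M x N matrix

# Hypothetical usage: data generated by a three-layer linear network,
# B_true = A1 @ A2 with hidden width H, so B_true has rank at most H.
rng = np.random.default_rng(0)
M, N, H, n = 6, 4, 2, 200
A1 = rng.normal(size=(M, H))                         # input -> hidden weights
A2 = rng.normal(size=(H, N))                         # hidden -> output weights
X = rng.normal(size=(n, M))
Y = X @ A1 @ A2 + 0.1 * rng.normal(size=(n, N))
B_hat = reduced_rank_regression(X, Y, H)
print(np.linalg.matrix_rank(B_hat))                  # 2
```

The factored parameterization B = A1 @ A2 is also the source of the non-regularity discussed in the abstract: distinct pairs (A1, A2) can yield the same B, so the map from parameters to distributions is not one-to-one and the Fisher information matrix degenerates on that singular set.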
Pages: 924-933 (10 pages)