FPE-based criteria to dimension feedforward neural topologies

Cited: 6
Authors
Alippi, C [1 ]
Affiliation
[1] Politecn Milan, Dipartimento Elettron & Informaz, I-20133 Milan, Italy
Source
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-FUNDAMENTAL THEORY AND APPLICATIONS | 1999, Vol. 46, No. 8
Keywords
FPE; learning from samples; model selection; neural networks;
DOI
10.1109/81.780377
Chinese Library Classification
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline classification codes
0808 ; 0809 ;
Abstract
This paper deals with the problem of dimensioning a feedforward neural network to learn an unknown function from input/output pairs. The ultimate goal is to tune the complexity of the neural model to the information present in the training set and to estimate its performance without needing new data for cross-validation. For generality, it is not assumed that the unknown function belongs to the family of neural models. A generalization of the final prediction error (FPE) to biased models is provided, which can be applied to learning unknown functions in both noise-free and noise-affected applications. This is based on a new definition of the effective number of parameters used by the neural model to fit the data. New criteria for model selection are introduced and compared with the generalized prediction error and the network information criterion.
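To make the selection procedure concrete, the sketch below applies Akaike's classical FPE criterion, FPE = MSE_train * (N + p)/(N - p), to choose among candidate model complexities. This is the standard (unbiased-model) FPE, not the paper's generalization with an effective number of parameters; polynomial regressors stand in for feedforward networks of increasing size, and all names and data here are illustrative.

```python
# Hedged sketch: model selection by the classical FPE criterion.
# Polynomial fits are a stand-in for neural models; the paper's
# biased-model FPE and effective-parameter count are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
N = 200
x = rng.uniform(-1.0, 1.0, N)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(N)  # unknown function + noise

def fpe(train_mse, n_samples, n_params):
    # Classical FPE: training error inflated by (N + p) / (N - p),
    # penalizing models that use many parameters relative to the data.
    return train_mse * (n_samples + n_params) / (n_samples - n_params)

scores = {}
for degree in range(1, 10):
    coeffs = np.polyfit(x, y, degree)          # model with p = degree + 1 parameters
    train_mse = np.mean((y - np.polyval(coeffs, x)) ** 2)
    scores[degree] = fpe(train_mse, N, degree + 1)

best = min(scores, key=scores.get)             # complexity minimizing estimated FPE
print("selected degree:", best)
```

The key point mirrored from the abstract: the criterion estimates generalization performance from the training set alone, so no held-out data is needed to pick the model dimension.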
Pages: 962-973
Page count: 12