LEARNING IN LINEAR NEURAL NETWORKS - A SURVEY

Cited by: 138
Authors
BALDI, PF
HORNIK, K
Affiliations
[1] CALTECH,DIV BIOL,PASADENA,CA 91109
[2] VIENNA TECH UNIV,INST STAT & WAHRSCHEINLICHKEITSTHEORIE,A-1040 VIENNA,AUSTRIA
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 1995 / Vol. 6 / No. 4
Funding
U.S. National Science Foundation;
Keywords
DOI
10.1109/72.392248
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Networks of linear units are the simplest kind of networks, where the basic questions related to learning, generalization, and self-organization can sometimes be answered analytically. We survey most of the known results on linear networks, including: 1) backpropagation learning and the structure of the error function landscape; 2) the temporal evolution of generalization; and 3) unsupervised learning algorithms and their properties. The connections to classical statistical ideas, such as principal component analysis (PCA), are emphasized, along with several simple but challenging open questions. A few new results are also spread across the paper, including an analysis of the effect of noise on backpropagation networks and a unified view of all unsupervised algorithms.
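The PCA connection the abstract highlights can be illustrated numerically: for a two-layer linear autoencoder x → A B x trained to minimize squared reconstruction error, the global minima project the data onto its top principal subspace. The sketch below checks this with plain gradient descent in NumPy; all sizes, the learning rate, and the step count are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative check of the linear-autoencoder/PCA connection:
# the optimal rank-k linear autoencoder spans the top-k principal
# subspace of the data. Dimensions and hyperparameters are arbitrary.

rng = np.random.default_rng(0)
n, d, k = 500, 10, 3                      # samples, input dim, hidden dim

# Data with a dominant 3-dimensional structure plus small noise.
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d)) * 3.0
X += 0.1 * rng.normal(size=(n, d))
X -= X.mean(axis=0)                       # center, as PCA assumes

# PCA subspace: top-k right singular vectors of the data matrix.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P_pca = Vt[:k].T @ Vt[:k]                 # projector onto PCA subspace

# Linear autoencoder x -> A B x trained by gradient descent on the
# mean squared reconstruction error (linear backpropagation).
B = 0.01 * rng.normal(size=(k, d))        # encoder weights
A = 0.01 * rng.normal(size=(d, k))        # decoder weights
lr = 1e-3
for _ in range(2000):
    H = X @ B.T                           # hidden activations, shape (n, k)
    R = H @ A.T - X                       # reconstruction residual, (n, d)
    gA = R.T @ H / n                      # dL/dA for L = ||R||^2 / (2n)
    gB = A.T @ R.T @ X / n                # dL/dB via the chain rule
    A -= lr * gA
    B -= lr * gB

# Projector onto the learned subspace (column space of the decoder A).
Q, _ = np.linalg.qr(A)
P_net = Q @ Q.T

# The learned projector should approximately match the PCA projector.
gap = np.linalg.norm(P_net - P_pca)
print(f"projector distance: {gap:.4f}")
```

Because every global minimum of the linear autoencoder projects onto the same principal subspace, the distance between the two projectors shrinks toward zero even though the individual weight matrices A and B are not unique.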
Pages: 837-858
Page count: 22