PRINCIPAL COMPONENT EXTRACTION USING RECURSIVE LEAST-SQUARES LEARNING

Cited: 78
Authors
BANNOUR, S [1]
AZIMI-SADJADI, MR [1]
Institution
[1] COLORADO STATE UNIV, DEPT ELECT ENGN, FT COLLINS, CO 80523
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 1995, Vol. 6, No. 2
DOI
10.1109/72.363480
CLC (Chinese Library Classification)
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
A new neural network-based approach is introduced for recursive computation of the principal components of a stationary vector stochastic process. The neurons of a single-layer network are sequentially trained using a recursive least squares (RLS) type algorithm to extract the principal components of the input process. The optimality criterion is based on retaining the maximum information contained in the input sequence, so that the network inputs can be reconstructed from the corresponding outputs with minimum mean squared error. Convergence of the weight vectors to the principal eigenvectors is also proved. A simulation example is given to show the accuracy and speed advantages of this algorithm in comparison with existing methods. Finally, the application of this learning algorithm to image data reduction and to filtering of images degraded by additive and/or multiplicative noise is considered.
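The abstract describes training each neuron with an RLS-type rule so that the input can be reconstructed from the neuron's output with minimum mean squared error. The following is a minimal illustrative sketch of such a single-neuron RLS update for the first principal component, reconstructed from that description rather than taken from the paper itself; the scalar gain, the inverse-power variable `P`, and the test data are assumptions, and the deflation step needed for further components is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D data whose sample covariance has a clear dominant eigenvector.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
X = rng.standard_normal((5000, 2)) @ A.T

w = rng.standard_normal(2)
w /= np.linalg.norm(w)   # random unit initial weight vector
P = 1.0                  # scalar inverse of the accumulated output power

for x in X:
    y = w @ x                       # neuron output
    g = P * y / (1.0 + y * y * P)   # RLS-style gain
    P = P - g * y * P               # update inverse output power
    w = w + g * (x - y * w)         # reduce reconstruction error ||x - w*y||^2

# Compare the learned direction with the top eigenvector of the
# sample covariance matrix.
C = X.T @ X / len(X)
_, eigvecs = np.linalg.eigh(C)      # eigenvectors in ascending eigenvalue order
v1 = eigvecs[:, -1]
alignment = abs(w @ v1) / np.linalg.norm(w)
```

At the fixed point of this update, `C w = w (wᵀ C w)`, which is satisfied by a unit-norm eigenvector of the covariance; with a strong eigenvalue gap the weight vector aligns closely with the principal eigenvector after a few thousand samples.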
Pages: 457-469 (13 pages)