RKHS-based functional analysis for exact incremental learning

Cited by: 19
Authors
Vijayakumar, S [1 ]
Ogawa, H [2]
Affiliations
[1] RIKEN, Inst Phys & Chem Res, Brain Sci Inst, Wako, Saitama 3510198, Japan
[2] Tokyo Inst Technol, Dept Comp Sci, Meguro Ku, Tokyo 152, Japan
Keywords
reproducing kernel Hilbert space (RKHS); functional analysis; incremental learning; generalization; model selection;
DOI
10.1016/S0925-2312(99)00112-5
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We investigate the problem of incremental learning in artificial neural networks by viewing it as a sequential function approximation problem. A framework for discussing the generalization ability of a trained network in the original function space, using tools of functional analysis based on reproducing kernel Hilbert spaces (RKHS), is introduced. Using this framework, we devise a method of carrying out optimal incremental learning with respect to the entire set of training data by employing the results derived at the previous stage of learning and effectively incorporating the newly available training data. Most importantly, the incrementally learned function has the same (optimal) generalization ability as would have been achieved by batch learning on the entire set of training data; hence, the scheme is referred to as exact learning. This ensures that both the learning operator and the learned function can be computed in an online incremental fashion. Finally, we also provide a simplified closed-form relationship between the learned functions before and after the incorporation of new data for various optimization criteria, opening avenues for work on the selection of optimal training sets. We also show that learning under this framework is inherently well suited to applying novel model selection strategies and to introducing bias and a priori knowledge in a more systematic way. Moreover, it provides a useful hint for performing kernel-based approximations, of which regularization and SVM networks are special cases, in an online setting. (C) 1999 Elsevier Science B.V. All rights reserved.
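The exactness claim can be made concrete with a small numerical sketch. The Python example below is not the authors' operator-theoretic formulation; it assumes, for illustration, a Gaussian RBF kernel and a kernel ridge (regularized least-squares) criterion, and grows the inverse regularized Gram matrix one sample at a time via the block-matrix inversion identity. What it demonstrates is the property the abstract describes: the incrementally updated solution coincides with the batch solution on the full training set, with no system ever re-solved from scratch.

import numpy as np

def rbf(X, Z, gamma=1.0):
    # Gaussian RBF kernel matrix between 1-D sample arrays X and Z
    return np.exp(-gamma * (X[:, None] - Z[None, :]) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, 20)
y = np.sin(X) + 0.1 * rng.standard_normal(20)
lam = 1e-2  # ridge regularizer (illustrative choice)

# Batch solution on all n samples: alpha = (K + lam I)^(-1) y
K = rbf(X, X)
alpha_batch = np.linalg.solve(K + lam * np.eye(len(X)), y)

# Incremental solution: maintain A_inv = (K_m + lam I_m)^(-1) and
# extend it one sample at a time with the block-inversion identity.
A_inv = np.array([[1.0 / (rbf(X[:1], X[:1])[0, 0] + lam)]])
for m in range(1, len(X)):
    k = rbf(X[:m], X[m:m + 1])[:, 0]            # cross-kernel with new point
    kappa = rbf(X[m:m + 1], X[m:m + 1])[0, 0] + lam
    b = A_inv @ k
    s = kappa - k @ b                           # Schur complement (scalar)
    top = np.hstack([A_inv + np.outer(b, b) / s, -b[:, None] / s])
    bot = np.hstack([-b[None, :] / s, np.array([[1.0 / s]])])
    A_inv = np.vstack([top, bot])

alpha_incr = A_inv @ y
print(np.allclose(alpha_batch, alpha_incr))    # True: incremental == batch

Under this particular criterion the update reduces to a classic recursive least-squares identity; the paper's contribution, as the abstract states, is to carry out the analogous exact update at the level of the learning operator in the RKHS, for a family of optimization criteria rather than this single one.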
Pages
85-113 (29 pages)