Volterra models and three-layer perceptrons

Cited by: 58
Authors
Marmarelis, VZ [1]
Zhao, X [1]
Affiliations
[1] UNIV SO CALIF, DEPT ELECT ENGN, LOS ANGELES, CA 90089 USA
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 1997 / Vol. 8 / No. 6
Funding
US National Institutes of Health (NIH)
Keywords
Laguerre kernel expansion; nonlinear system modeling; polynomial activation functions; separable Volterra network; three-layer perceptrons; Volterra kernels; Volterra models;
DOI
10.1109/72.641465
CLC classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
This paper proposes the use of a class of feedforward artificial neural networks with polynomial activation functions (distinct for each hidden unit) for practical modeling of high-order Volterra systems. Discrete-time Volterra models (DVMs) are often used in the study of nonlinear physical and physiological systems from stimulus-response data. However, their practical use has been hindered by computational limitations that confine them to low-order nonlinearities (i.e., only estimation of low-order kernels is practically feasible). Since three-layer perceptrons (TLPs) can represent input-output nonlinear mappings of arbitrary order, this paper explores the basic relations between DVMs and TLPs with tapped-delay inputs in the context of nonlinear system modeling. A variant of the TLP with polynomial activation functions, termed "separable Volterra networks" (SVNs), is found particularly useful for deriving explicit relations with the DVM and for obtaining practicable models of highly nonlinear systems from stimulus-response data. The conditions under which the two approaches yield equivalent representations of the input-output relation are explored, and the feasibility of DVM estimation via equivalent SVN training with backpropagation is demonstrated on computer-simulated examples and compared with results from the Laguerre expansion technique (LET). The use of SVN models allows practicable modeling of high-order nonlinear systems, thus removing the main practical limitation of the DVM approach.
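For concreteness, here is a minimal sketch of the relation the abstract describes, written in standard notation rather than the paper's own symbols: a discrete-time Volterra model expresses the output as a polynomial functional of the delayed input, an SVN passes a tapped-delay input through hidden units that each have their own polynomial activation, and expanding those polynomials yields equivalent Volterra kernels directly from the network weights. The symbols M (memory length), H (number of hidden units), Q (polynomial order), w_{j,m}, and c_{j,q} are illustrative choices, not taken from the paper.

% DVM of memory M (higher-order terms continue in the same pattern)
\begin{align*}
y(n) &= k_0 + \sum_{m=0}^{M-1} k_1(m)\,x(n-m)
       + \sum_{m_1=0}^{M-1}\sum_{m_2=0}^{M-1} k_2(m_1,m_2)\,x(n-m_1)\,x(n-m_2) + \cdots \\
% SVN: tapped-delay input feeding H hidden units, the j-th with polynomial activation of order Q
u_j(n) &= \sum_{m=0}^{M-1} w_{j,m}\,x(n-m), \qquad
y(n) = y_0 + \sum_{j=1}^{H}\sum_{q=1}^{Q} c_{j,q}\,u_j(n)^{q}, \\
% collecting terms of order q gives the equivalent Volterra kernels of the trained SVN
k_q(m_1,\ldots,m_q) &= \sum_{j=1}^{H} c_{j,q}\, w_{j,m_1}\cdots w_{j,m_q}.
\end{align*}

Under this reading, training the SVN by backpropagation estimates all kernel orders up to Q at once from the weights, which is what makes the high-order case practicable.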
Pages: 1421-1433
Page count: 13