A constructive design method for two-layer perceptrons and its application to the design of modular neural networks

Cited by: 2
Authors
Moon, YJ
Oh, SY
Affiliations
[1] Department of Electrical Engineering, Pohang University of Science and Technology, Pohang
[2] Pusan University, Pusan
[3] Case Western Reserve University, Cleveland, OH
[4] Korea Atomic Energy Research Institute, Seoul
[5] Department of Electrical Engineering and Computer Science, University of Illinois, Chicago, IL
[6] Department of Electrical Engineering, University of Florida, Gainesville, FL
Keywords
constructive design method (CDM); modular neural network (MNN); feature vector
DOI
10.1111/j.1468-0394.1996.tb00118.x
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
A multilayer perceptron is known to be capable of approximating any smooth function to any desired accuracy, provided it has a sufficient number of hidden neurons. Its training, however, is based on the gradient method, which is usually time-consuming and may converge to a local minimum; moreover, its performance is strongly influenced by the number of hidden neurons and their initial weights. These crucial parameters are usually determined by trial and error, which demands considerable experience on the designer's part. In this paper, a constructive design method (CDM) is proposed for a two-layer perceptron that can approximate a class of smooth functions whose feature vector classes are linearly separable. Based on an analysis of a data set sampled from the target function, feature vectors that characterize the function well are extracted and used to determine the number of hidden neurons and the initial weights of the network. When the feature vector classes are not linearly separable, however, the network may be difficult to train, mainly because of interference among the hyperplanes generated by the hidden neurons. To compensate for this interference, a refined version of the modular neural network (MNN) is then proposed in which each network module is created by the CDM. After the input space has been partitioned into many local regions, a two-layer perceptron constructed by the CDM is assigned to each local region. The feature vector classes are then more likely to be linearly separable within each local region, so the MNN can approximate the function with greatly improved accuracy. An example simulation illustrates the improvement in learning speed achieved with a smaller number of neurons.
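As a rough, hedged illustration of the modular idea summarized in the abstract (not the authors' CDM, whose feature-vector analysis is described in the paper itself), the Python sketch below partitions a one-dimensional input domain into equal-width local regions and trains one two-layer perceptron per region. The class names, the equal-width partition, the hidden-layer size, and the random weight initialization are assumptions made only for this example.

import numpy as np

# Minimal sketch of a modular network in the spirit of the abstract (NOT the CDM):
# the 1-D input domain is split into equal-width local regions and one two-layer
# perceptron (single sigmoid hidden layer, linear output) is trained per region.
# Hidden size, random initialization and the equal-width partition are assumptions.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TwoLayerPerceptron:
    """One local module: x -> sigmoid hidden layer -> linear output."""
    def __init__(self, n_hidden, rng):
        self.w1 = rng.normal(scale=1.0, size=(1, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(scale=1.0, size=(n_hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, x):
        self.h = sigmoid(x @ self.w1 + self.b1)   # hidden activations, kept for backprop
        return self.h @ self.w2 + self.b2

    def train(self, x, y, lr=0.5, epochs=5000):
        for _ in range(epochs):                   # plain gradient descent on squared error
            err = self.forward(x) - y             # (N, 1) residuals
            dw2 = self.h.T @ err / len(x)
            db2 = err.mean(axis=0)
            dh = (err @ self.w2.T) * self.h * (1.0 - self.h)
            dw1 = x.T @ dh / len(x)
            db1 = dh.mean(axis=0)
            self.w2 -= lr * dw2; self.b2 -= lr * db2
            self.w1 -= lr * dw1; self.b1 -= lr * db1

class ModularNetwork:
    """Equal-width partition of [lo, hi]; one TwoLayerPerceptron per local region."""
    def __init__(self, lo, hi, n_regions, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.edges = np.linspace(lo, hi, n_regions + 1)
        self.modules = [TwoLayerPerceptron(n_hidden, rng) for _ in range(n_regions)]

    def _region(self, x):
        idx = np.searchsorted(self.edges, x.ravel(), side="right") - 1
        return np.clip(idx, 0, len(self.modules) - 1)

    def fit(self, x, y):
        idx = self._region(x)
        for r, module in enumerate(self.modules):
            mask = idx == r
            if mask.any():
                module.train(x[mask], y[mask])

    def predict(self, x):
        idx = self._region(x)
        out = np.zeros((len(x), 1))
        for r, module in enumerate(self.modules):
            mask = idx == r
            if mask.any():
                out[mask] = module.forward(x[mask])
        return out

# Usage: approximate a smooth 1-D target with four local modules of five hidden units each.
x = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
y = np.sin(2.0 * np.pi * x) + 0.5 * x
net = ModularNetwork(0.0, 1.0, n_regions=4, n_hidden=5)
net.fit(x, y)
print("max abs error:", float(np.max(np.abs(net.predict(x) - y))))

In the paper, the partition of the input space and each module's hidden-layer size and initial weights would instead be derived from the extracted feature vectors; only the overall partition-then-assign structure of the MNN is kept in this sketch.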
Pages: 183-194
Number of pages: 12