A UNIVERSAL NEURAL NET WITH GUARANTEED CONVERGENCE TO ZERO SYSTEM ERROR

Cited by: 14
Authors
CHANG, TS [1]
ABDELGHAFFAR, KAS [1]
Affiliation
[1] Univ Calif Davis, Decision & Control Lab, Davis, CA 95616
Keywords
DOI
10.1109/78.175745
Chinese Library Classification (CLC)
TM [Electrical technology]; TN [Electronic technology, communication technology]
Discipline codes
0808; 0809
Abstract
Despite the general success of learning algorithms for neural nets, such as the back-propagation algorithm, two major issues remain unsolved. First, learning can become trapped at a local minimum. Second, even when learning succeeds, the convergence rate is typically slow. In this paper, we primarily address the first issue by developing a new learning algorithm with guaranteed convergence to zero system error. The algorithm also has high potential for fast convergence. The basic idea is to let the net grow whenever learning stalls at a local minimum, in such a way that the original local minimum is no longer a local minimum with respect to the new net, and the new net always starts from a point with lower error than the original local minimum. By this method, the error is guaranteed to decrease until it converges to zero. The technique can also reduce the error below any desired level when the back-propagation algorithm reaches a global minimum that does not achieve zero error because the net lacks a sufficient number of nodes. When expanding the neural net, the initial weights of the new node can be selected to maximize the error gradient. When the error gradient is large, an abrupt drop in error can be expected in the new net, so fast learning may be achieved. A mathematical proof of the guaranteed learning of the universal neural net is given, and numerical examples are used to illustrate its high potential for fast learning.
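The grow-on-plateau idea in the abstract can be sketched in a few lines. This is an illustrative assumption, not the paper's construction: the paper selects the new node's initial weights to maximize the error gradient, whereas this sketch simply gives the new node random input weights and a zero output weight, so the error is unchanged at the moment of growth and can only decrease from the plateau. The names (`gd_phase`, `grow`), the one-hidden-layer sigmoid architecture, and the XOR toy data are all hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: XOR, a classic case where a small net can stall.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def error(W, v):
    """Half mean squared error of the one-hidden-layer net sigmoid(X W) v."""
    return 0.5 * np.mean((sigmoid(X @ W) @ v - y) ** 2)

def gd_phase(W, v, lr=0.5, epochs=2000):
    """One phase of plain gradient descent on (W, v)."""
    n = X.shape[0]
    W, v = W.copy(), v.copy()
    for _ in range(epochs):
        H = sigmoid(X @ W)                       # hidden activations
        r = (H @ v - y) / n                      # scaled residual
        grad_v = H.T @ r
        grad_W = X.T @ (np.outer(r, v) * H * (1.0 - H))
        W -= lr * grad_W
        v -= lr * grad_v
    return W, v

def grow(W, v):
    """Add one hidden node with random input weights and a ZERO output
    weight: the net's outputs, and hence its error, are unchanged at the
    moment of growth, yet the old stationary point is generally no longer
    stationary for the enlarged net."""
    W2 = np.hstack([W, rng.normal(size=(W.shape[0], 1))])
    v2 = np.append(v, 0.0)
    return W2, v2

# Alternate descent phases with growth whenever the error stalls.
W = rng.normal(size=(2, 2))
v = rng.normal(size=2)
e0 = error(W, v)
while error(W, v) > 1e-3 and W.shape[1] <= 10:
    before = error(W, v)
    W, v = gd_phase(W, v)
    if before - error(W, v) < 1e-6:              # stalled: grow the net
        W, v = grow(W, v)
final = error(W, v)
```

The zero output weight preserves the error exactly at the growth step, which mirrors the monotone-decrease guarantee in the abstract; the paper's gradient-maximizing initialization of the new node would additionally accelerate the error drop immediately after growth.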
Pages: 3022-3031
Page count: 10
References (8)
[1] ALPAYDIN E, 1990, GROW LEARN INCREMENT
[2] HUANG SC, HUANG YF. Bounds on the number of hidden neurons in multilayer perceptrons [J]. IEEE Transactions on Neural Networks, 1991, 2(1): 47-55
[3] JOHANSSON EM, 1990, UCRL-JC-104850, Lawrence Livermore Natl Lab
[4] KARNIN ED. IEEE Transactions on Neural Networks, 1990, 1: 239, DOI 10.1109/72.80236
[5] KUNG SY, 1992, DIGITAL NEUROCOMPUTI
[6] KUNG SY, 1991, Proc. IJCNN, Seattle, Jul
[7] POLAK E, 1971, COMPUTATIONAL METHOD
[8] RUMELHART DE, 1987, LEARNING INTERNAL RE, P318