An efficient gradient forecasting search method utilizing the discrete difference equation prediction model

Cited by: 24
Authors
Chen C.-M. [1 ]
Lee H.-M. [1 ]
Affiliation
[1] Department of Electronic Engineering, National Taiwan University of Science and Technology, 43, Sec. 4, Keelung Rd., Taipei 106, Taiwan
Keywords
Discrete Difference Equation Prediction Model; Gradient Descent Method; Gradient Forecasting Search Method; Optimization Method
DOI
10.1023/A:1012817410590
Abstract
Optimization theory and methods profoundly influence numerous engineering designs and applications. The gradient descent method is simpler and more widely used than other search methods, but it is easily trapped in local minima and converges slowly. This work presents a Gradient Forecasting Search Method (GFSM) that enhances the performance of the gradient descent method for solving optimization problems. GFSM is based on the gradient descent method and on the universal Discrete Difference Equation Prediction Model (DDEPM) proposed herein. The concept of the universal DDEPM is derived from the grey prediction model. The original grey prediction model relies on a mathematical hypothesis and approximation to transform a continuous differential equation into a discrete difference equation; this is not a logical approach, because forecasting sequence data is invariably discrete. To construct a more precise prediction model, this work therefore adopts a discrete difference equation directly. The proposed GFSM accurately predicts the search direction and trend of the gradient descent method via the universal DDEPM and dynamically adjusts the prediction step using the golden section search algorithm. Experimental results indicate that the proposed method accelerates the search speed of the gradient descent method and helps it escape from local minima. Our results further demonstrate that applying the golden section search method to obtain dynamic prediction steps for the DDEPM is an efficient approach for this search algorithm.
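The abstract describes GFSM only at a high level; the exact DDEPM recurrence is defined in the paper itself and is not reproduced here. The sketch below is an illustrative assumption of the overall loop, not the authors' implementation: a gradient step whose length is chosen by golden-section search, followed by a forecast of the next iterate (here a simple linear extrapolation of recent iterates, standing in for the DDEPM) that is accepted only when it improves the objective.

```python
import numpy as np

def golden_section(f, a, b, tol=1e-6):
    """Golden-section search for a minimizer of f on the interval [a, b]."""
    phi = (np.sqrt(5) - 1) / 2  # inverse golden ratio, ~0.618
    c, d = b - phi * (b - a), a + phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

def gfsm_sketch(f, grad, x0, iters=50):
    """Illustrative gradient search with a forecasting step.

    The forecast here is a linear extrapolation of the last two
    iterates -- a simplified stand-in for the paper's DDEPM, which
    builds a discrete difference equation from the iterate history.
    """
    x = np.asarray(x0, dtype=float)
    prev = x.copy()
    for _ in range(iters):
        g = grad(x)
        # Golden-section search picks the step length along -g.
        step = golden_section(lambda t: f(x - t * g), 0.0, 1.0)
        x_new = x - step * g
        # Forecast the next point by extrapolating the trajectory,
        # and accept it only if it actually lowers the objective.
        forecast = x_new + (x_new - prev)
        if f(forecast) < f(x_new):
            x_new = forecast
        prev, x = x, x_new
    return x
```

On a convex quadratic such as f(x) = ||x - 3||^2, this loop reaches the minimizer quickly because the golden-section line search finds the exact step; the forecasting step matters mainly on multimodal surfaces, where the extrapolated point can jump past a shallow local basin.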
Pages: 43-58
Page count: 15