Efficient and adaptive Lagrange-multiplier methods for nonlinear continuous global optimization

Cited by: 13
Authors
Wah, BW
Wang, T
Affiliations
[1] Univ Illinois, Dept Elect & Comp Engn, Urbana, IL 61801 USA
[2] Univ Illinois, Coordinated Sci Lab, Urbana, IL 61801 USA
Funding
U.S. National Science Foundation;
Keywords
adaptive weights; convergence speed; global search; inequality constraints; Lagrange-multiplier method; local search; nonlinear continuous constrained optimization; oscillations; trace-based search method;
DOI
10.1023/A:1008203422124
Chinese Library Classification (CLC)
C93 [Management]; O22 [Operations Research];
Discipline Codes
070105; 12; 1201; 1202; 120202;
Abstract
Lagrangian methods are popular for solving continuous constrained optimization problems. In this paper, we address three important issues in applying Lagrangian methods to problems with inequality constraints. First, we study methods to transform inequality constraints into equality constraints. An existing method, the slack-variable method, adds a slack variable to each inequality constraint in order to transform it into an equality constraint. Its disadvantage is that, when the search trajectory is inside a feasible region, satisfied constraints may still affect the Lagrangian function, leading to possible oscillations and divergence when a local minimum lies on the boundary of the feasible region. To overcome this problem, we propose the MaxQ method, in which satisfied constraints have no effect on the Lagrangian function. Hence, minimizing the Lagrangian function in a feasible region always leads to a local minimum of the objective function. We also study strategies to speed up its convergence. Second, we study methods to improve the convergence speed of Lagrangian methods without degrading solution quality. This is done by an adaptive-control strategy that dynamically adjusts the relative weights of the objective and the Lagrangian part, leading to a better balance between the two and faster convergence. Third, we study a trace-based method that pulls the search trajectory from one saddle point to another in a continuous fashion without restarts. This overcomes a weakness of existing Lagrangian methods, which converge to a single saddle point and require random restarts to look for new saddle points, often missing good saddle points in the vicinity of those already found. Finally, we describe a prototype, Novel (Nonlinear Optimization via External Lead), that implements the proposed strategies, and we present improved solutions on a collection of benchmarks.
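A minimal sketch of the two constraint transformations contrasted in the abstract, using the standard slack-variable form; the MaxQ form below is inferred from the method's name and the description above, and the exponent q > 1 is an assumption:

    % Slack-variable method: the extra variable z_i keeps even satisfied
    % constraints coupled to the Lagrangian, which can cause oscillations
    % when a local minimum lies on the boundary of the feasible region.
    \[ g_i(x) + z_i^2 = 0 \]
    % MaxQ method: the transformed term is identically zero whenever
    % g_i(x) <= 0, so satisfied constraints exert no force on the search.
    \[ \max\bigl(g_i(x),\,0\bigr)^{q} = 0, \qquad q > 1 \]
    % Resulting Lagrangian with multipliers \lambda_i:
    \[ L(x,\lambda) = f(x) + \textstyle\sum_i \lambda_i \max\bigl(g_i(x),\,0\bigr)^{q} \]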
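For the adaptive-control idea, the sketch below runs plain first-order descent in x and ascent in the multipliers on a weighted Lagrangian. It is an illustration under stated assumptions, not the paper's Novel prototype: the weight-update heuristic, function names, and step sizes are all invented for the example.

    import numpy as np

    def lagrangian_descent_ascent(grad_f, g, jac_g, x0, q=2,
                                  w=1.0, lr=1e-3, iters=20000):
        """Saddle-point search on L(x, lam) = w*f(x)
        + sum_i lam_i * max(g_i(x), 0)**q (a MaxQ-style term that
        vanishes on satisfied constraints): gradient descent in x,
        gradient ascent in lam."""
        x = np.asarray(x0, dtype=float)
        lam = np.zeros(len(g(x)))
        for t in range(iters):
            viol = np.maximum(g(x), 0.0)   # only violated constraints act
            # dL/dx = w*grad_f + J_g^T (lam * q * viol**(q-1))
            dLdx = w * grad_f(x) + jac_g(x).T @ (lam * q * viol ** (q - 1))
            x -= lr * dLdx                 # descent in the variables
            lam += lr * viol ** q          # ascent in the multipliers
            # Hypothetical adaptive weighting: if the trajectory is still
            # infeasible after a stretch of iterations, shrink the weight
            # on the objective so the constraint part dominates.
            if t % 1000 == 999 and viol.max() > 1e-6:
                w *= 0.9
        return x, lam

The weight w captures the balance the abstract describes: a large w lets the objective dominate and can stall progress toward feasibility, while a small w over-emphasizes the constraints; adjusting it on the fly trades convergence speed against solution quality.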
Pages: 1-25
Page count: 25