Smoothing methods for convex inequalities and linear complementarity problems

Cited by: 245
Authors
Chen, CH [1]
Mangasarian, OL [1]
Affiliations
[1] UNIV WISCONSIN, DEPT COMP SCI, MADISON, WI 53706
Funding
U.S. National Science Foundation
Keywords
smoothing; convex inequalities; linear complementarity
DOI
10.1007/BF01592244
CLC classification
TP31 [Computer Software]
Discipline codes
081202; 0835
Abstract
A smooth approximation p(x, alpha) to the plus function max{x, 0} is obtained by integrating the sigmoid function 1/(1 + e^(-alpha x)), commonly used in neural networks. By means of this approximation, linear and convex inequalities are converted into smooth, convex, unconstrained minimization problems, whose solutions approximate the solution of the original problem to a high degree of accuracy for alpha sufficiently large. In the special case when a Slater constraint qualification is satisfied, an exact solution can be obtained for finite alpha. Speedup over MINOS 5.4 was as high as 1142 times for linear inequalities of size 2000 x 1000, and 580 times for convex inequalities with 400 variables. Linear complementarity problems are converted into a system of smooth nonlinear equations and solved by a quadratically convergent Newton method. For monotone LCPs with as many as 10,000 variables, the proposed approach was as much as 63 times faster than Lemke's method.
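The abstract states only that p(x, alpha) arises from integrating the sigmoid 1/(1 + e^(-alpha x)); carrying out that integration gives a softplus-type closed form, p(x, alpha) = x + (1/alpha) ln(1 + e^(-alpha x)). The sketch below, under that assumption, evaluates the smoothing in a numerically stable way and illustrates that it tightens toward max{x, 0} as alpha grows:

```python
import math

def sigmoid(t, alpha):
    # The integrand from the abstract: 1 / (1 + e^(-alpha * t)).
    return 1.0 / (1.0 + math.exp(-alpha * t))

def p(x, alpha):
    # Softplus-type form assumed from integrating the sigmoid:
    #   p(x, alpha) = x + (1/alpha) * ln(1 + e^(-alpha * x)).
    # Rewritten via max/abs so exp() never overflows for large |alpha * x|.
    return max(x, 0.0) + math.log1p(math.exp(-alpha * abs(x))) / alpha

# p(x, alpha) upper-bounds max{x, 0} and converges to it as alpha -> infinity:
# the gap at any x is at most ln(2)/alpha (attained at x = 0).
for alpha in (1.0, 10.0, 100.0):
    print(alpha, p(1.0, alpha), p(-1.0, alpha), p(0.0, alpha))
```

A useful property of this form is that its derivative in x is exactly the sigmoid, which is what makes the resulting unconstrained minimization problems smooth.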
Pages: 51-69
Page count: 19