Incremental subgradient methods for nondifferentiable optimization

Cited by: 422
Authors
Nedic, A [1]
Bertsekas, DP [1]
Affiliation
[1] MIT, Dept Elect Engn & Comp Sci, Cambridge, MA 02139 USA
Keywords
nondifferentiable optimization; convex programming; incremental subgradient methods; stochastic subgradient methods
DOI
10.1137/S1052623499362111
Chinese Library Classification
O29 [Applied Mathematics]
Subject Classification Code
070104
Abstract
We consider a class of subgradient methods for minimizing a convex function that consists of the sum of a large number of component functions. This type of minimization arises in a dual context from Lagrangian relaxation of the coupling constraints of large-scale separable problems. The idea is to perform the subgradient iteration incrementally, by sequentially taking steps along the subgradients of the component functions, with intermediate adjustment of the variables after processing each component function. This incremental approach has been very successful in solving large differentiable least squares problems, such as those arising in the training of neural networks, and it has resulted in a much better practical rate of convergence than the steepest descent method. In this paper, we establish the convergence properties of a number of variants of incremental subgradient methods, including some that are stochastic. Based on the analysis and computational experiments, the methods appear very promising and effective for important classes of large problems. A particularly interesting discovery is that by randomizing the order of selection of component functions for iteration, the convergence rate is substantially improved.
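To make the iteration described in the abstract concrete, the following is a minimal Python sketch, not the authors' code: it minimizes f(x) = sum_i f_i(x) by stepping along one component subgradient at a time, with the randomized processing order the abstract highlights. The function name incremental_subgradient, the constant stepsize, the optional projection hook, and the toy example are illustrative assumptions; the paper analyzes several stepsize rules.

```python
import numpy as np

def incremental_subgradient(subgrads, x0, step=1e-3, epochs=100,
                            randomize=True, project=None, seed=0):
    """Hypothetical sketch: minimize f(x) = sum_i f_i(x) incrementally.

    subgrads  -- list of callables; subgrads[i](x) returns a subgradient of f_i at x
    project   -- optional projection onto the constraint set (None = unconstrained)
    randomize -- process components in a fresh random order on each pass
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    m = len(subgrads)
    for _ in range(epochs):
        # The paper reports that randomizing the order in which components
        # are selected substantially improves the convergence rate.
        order = rng.permutation(m) if randomize else np.arange(m)
        for i in order:
            # Intermediate adjustment: x is updated after each component
            # function is processed, not after a full pass over all m.
            x = x - step * np.asarray(subgrads[i](x))
            if project is not None:
                x = project(x)
    return x

# Toy usage (ours, not from the paper): f(x) = |x-1| + |x-2| + |x-10|,
# whose minimizer is the median of the anchor points, x = 2.
anchors = [1.0, 2.0, 10.0]
subgrads = [lambda x, a=a: np.sign(x - a) for a in anchors]
x_star = incremental_subgradient(subgrads, x0=np.array([0.0]),
                                 step=0.01, epochs=300)
print(x_star)  # oscillates near [2.0], within roughly the stepsize
```

With a constant stepsize the iterates settle into a neighborhood of the optimum whose size scales with the stepsize, which is why the final value hovers near 2.0 rather than converging to it exactly.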
Pages: 109-138
Page count: 30