Adaptive cubic regularisation methods for unconstrained optimization. Part II: worst-case function- and derivative-evaluation complexity

Cited by: 223
Authors
Cartis, Coralia [1 ,2 ]
Gould, Nicholas I. M. [2 ,3 ]
Toint, Philippe L. [4 ]
Affiliations
[1] Univ Edinburgh, Sch Math, Edinburgh EH9 3JZ, Midlothian, Scotland
[2] Rutherford Appleton Lab, Computat Sci & Engn Dept, Chilton OX11 0QX, Oxon, England
[3] Univ Oxford, Comp Lab, Numer Anal Grp, Oxford OX1 3QD, England
[4] FUNDP Univ Namur, Dept Math, B-5000 Namur, Belgium
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Nonlinear optimization; Unconstrained optimization; Cubic regularization; Newton's method; Trust-region methods; Global complexity bounds; Global rate of convergence; NONLINEAR OPTIMIZATION; NEWTON METHODS;
DOI
10.1007/s10107-009-0337-y
Chinese Library Classification
TP31 [Computer Software];
Discipline classification code
081205 [Computer Software];
Abstract
An Adaptive Regularisation framework using Cubics (ARC) was proposed for unconstrained optimization and analysed in Cartis, Gould and Toint (Part I, Math Program, doi:10.1007/s10107-009-0286-5, 2009), generalizing at the same time an unpublished method due to Griewank (Technical Report NA/12, 1981, DAMTP, University of Cambridge), an algorithm by Nesterov and Polyak (Math Program 108(1):177-205, 2006) and a proposal by Weiser, Deuflhard and Erdmann (Optim Methods Softw 22(3):413-431, 2007). In this companion paper, we further the analysis by providing worst-case global iteration complexity bounds for ARC and a second-order variant to achieve approximate first-order and, for the latter, second-order criticality of the iterates. In particular, the second-order ARC algorithm requires at most O(epsilon^{-3/2}) iterations, or equivalently function and gradient evaluations, to drive the norm of the gradient of the objective below the desired accuracy epsilon, and O(epsilon^{-3}) iterations to reach approximate nonnegative curvature in a subspace. The orders of these bounds match those proved for Algorithm 3.3 of Nesterov and Polyak, which minimizes the cubic model globally on each iteration. Our approach is more general in that it allows the cubic model to be solved only approximately and may employ approximate Hessians.
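To make the abstract concrete, the following is a minimal, illustrative sketch of a basic adaptive cubic-regularization iteration in the spirit of the methods discussed (Griewank; Nesterov–Polyak). It is not the authors' ARC algorithm: ARC permits approximate subproblem solves and approximate Hessians, whereas this sketch solves the cubic subproblem (nearly) exactly via its secular equation. All function names, the acceptance threshold 0.1/0.9, and the sigma-update factors are illustrative choices, not taken from the paper.

```python
import numpy as np

def cubic_subproblem(g, H, sigma):
    """Globally minimize m(s) = g.s + 0.5 s.H.s + (sigma/3)||s||^3.

    The global minimizer satisfies (H + lam*I) s = -g with lam = sigma*||s||
    and H + lam*I positive semidefinite; we locate lam by bisection on the
    secular equation phi(lam) = sigma*||s(lam)|| - lam.
    """
    n = len(g)
    lam_min = np.linalg.eigvalsh(H)[0]
    lo = max(0.0, -lam_min) + 1e-12          # shift just past the PSD boundary

    def phi(lam):
        s = np.linalg.solve(H + lam * np.eye(n), -g)
        return sigma * np.linalg.norm(s) - lam

    if phi(lo) <= 0:                          # g is tiny: near-Newton step suffices
        return np.linalg.solve(H + lo * np.eye(n), -g)
    hi = lo + 1.0
    while phi(hi) > 0:                        # bracket the root of phi
        hi *= 2.0
    for _ in range(100):                      # bisection on [lo, hi]
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > 0 else (lo, mid)
    lam = 0.5 * (lo + hi)
    return np.linalg.solve(H + lam * np.eye(n), -g)

def cubic_regularization(f, grad, hess, x0, sigma=1.0, eps=1e-8, max_iter=200):
    """Basic adaptive cubic-regularization loop (illustrative only)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:          # approximate first-order criticality
            break
        H = hess(x)
        s = cubic_subproblem(g, H, sigma)
        # Predicted decrease m(0) - m(s); positive whenever s != 0.
        pred = -(g @ s + 0.5 * s @ H @ s + (sigma / 3) * np.linalg.norm(s) ** 3)
        rho = (f(x) - f(x + s)) / pred if pred > 0 else -np.inf
        if rho >= 0.1:                        # successful step: accept
            x = x + s
            if rho >= 0.9:                    # very successful: relax regularization
                sigma = max(0.5 * sigma, 1e-8)
        else:                                 # unsuccessful: regularize more heavily
            sigma *= 2.0
    return x

# Demo on the Rosenbrock function, whose minimizer is (1, 1).
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400*x[1] + 1200*x[0]**2, -400*x[0]],
                           [-400*x[0], 200.0]])
x_star = cubic_regularization(f, grad, hess, np.array([-1.2, 1.0]))
```

The adaptive parameter sigma plays the role the trust-region radius plays in trust-region methods: it grows when the cubic model predicts the objective poorly and shrinks when the prediction is accurate.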
Pages: 295-319
Page count: 25
References
13 in total
[1] Anonymous, 2013, Introductory Lectures on Convex Optimization: A Basic Course.
[2] Anonymous, 1999, Springer Sci.
[3] Anonymous, Trust-Region Methods, DOI 10.1137/1.9780898719857.
[4] Cartis C., Gould N.I.M., Toint P.L. Adaptive cubic regularisation methods for unconstrained optimization. Part I: motivation, convergence and numerical results. Mathematical Programming, 2011, 127(2): 245-295.
[5] Dennis J.E., 1996, Numerical Methods for Unconstrained Optimization and Nonlinear Equations.
[6] Dennis J.E., 1974, Math. Comput., 28: 549, DOI 10.1090/S0025-5718-1974-0343581-1.
[7] Golub G.H., 1996, Matrix Computations.
[8] Gratton S., Mouffe M., Toint P.L., Weber-Mendonca M. A recursive l∞-trust-region method for bound-constrained nonlinear optimization. IMA Journal of Numerical Analysis, 2008, 28(4): 827-861.
[9] Gratton S., Sartenaer A., Toint P.L. Recursive trust-region methods for multiscale nonlinear optimization. SIAM Journal on Optimization, 2008, 19(1): 414-444.
[10] Griewank A., 1981, Technical Report NA/12, DAMTP, University of Cambridge.