
Trudy Inst. Mat. i Mekh. UrO RAN, 2019 Volume 25, Number 4, Pages 210–225 (Mi timm1687)

Adaptation to inexactness for some gradient-type optimization methods

F. S. Stonyakin

Crimea Federal University, Simferopol

Abstract: We introduce a notion of an inexact model of a convex objective function that allows for errors both in the function value and in its gradient. For this setting, we propose a gradient method with adaptive adjustment of some parameters of the model and derive a convergence rate estimate; this estimate is optimal on the class of sufficiently smooth problems in the presence of errors. We also consider a special class of convex nonsmooth optimization problems, to which the proposed technique can be applied by introducing an artificial error. We show that the method can be modified for such problems so as to guarantee convergence in the function value at a nearly optimal rate on the class of convex nonsmooth optimization problems. Further, an adaptive gradient method is proposed for objective functions that satisfy a relaxed Lipschitz condition on the gradient together with the Polyak–Łojasiewicz gradient dominance condition; here both the objective function and its gradient may be given inexactly. The parameters are chosen adaptively during the operation of the method, both the Lipschitz constant of the gradient and a quantity corresponding to the error in the gradient and the objective function. Linear convergence of the method is established up to a value associated with the errors.
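The adaptive adjustment described in the abstract can be illustrated by a minimal sketch (not the paper's exact algorithm): at each step a candidate Lipschitz-type parameter L is halved optimistically and then doubled until a quadratic upper model with an additive error δ is satisfied. All names and the stopping setup below are illustrative assumptions.

```python
import numpy as np

def adaptive_inexact_gradient(f, grad, x0, delta=1e-9, L0=1.0, n_iters=100):
    """Hypothetical simplified variant of a gradient method with adaptive
    adjustment of the Lipschitz-type parameter L, tolerating an additive
    error delta in the (delta, L)-model of the objective."""
    x = x0.copy()
    L = L0
    for _ in range(n_iters):
        g = grad(x)              # possibly inexact gradient
        L = max(L / 2, 1e-12)    # optimistic halving before backtracking
        while True:
            x_new = x - g / L    # gradient step with step size 1/L
            # inexact-model descent test: quadratic upper model plus error delta
            model = f(x) + g @ (x_new - x) \
                    + 0.5 * L * np.linalg.norm(x_new - x) ** 2 + delta
            if f(x_new) <= model:
                break
            L *= 2               # model violated: increase L and retry
        x = x_new
    return x

# usage on the smooth quadratic f(x) = ||x||^2 / 2, exact oracle
x_star = adaptive_inexact_gradient(lambda x: 0.5 * x @ x,
                                   lambda x: x,
                                   np.array([3.0, -4.0]))
```

The halving/doubling scheme keeps the average number of model checks per iteration bounded, which is the usual motivation for this style of adaptive step selection.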

Keywords: gradient method, adaptive method, Lipschitz gradient, nonsmooth optimization, gradient dominance condition.

UDC: 519.85

Received: 08.09.2019
Revised: 21.10.2019
Accepted: 28.10.2019

DOI: 10.21538/0134-4889-2019-25-4-210-225



© Steklov Math. Inst. of RAS, 2024