
Computer Research and Modeling, 2022, Volume 14, Issue 2, Pages 445–472 (Mi crm977)

MATHEMATICAL MODELING AND NUMERICAL SIMULATION

Adaptive first-order methods for relatively strongly convex optimization problems

O. S. Savchuk (a), A. A. Titov (b,c), F. S. Stonyakin (a,b), M. S. Alkousa (b,c)

(a) V. I. Vernadsky Crimean Federal University, 4 Academician Vernadsky Avenue, Simferopol, Republic of Crimea, 295007, Russia
(b) Moscow Institute of Physics and Technology, 9 Institutskiy per., Dolgoprudny, Moscow region, 141701, Russia
(c) HSE University, 20 Myasnitskaya st., Moscow, 101000, Russia

Abstract: The article is devoted to first-order adaptive methods for optimization problems with relatively strongly convex functionals. The concept of relative strong convexity significantly extends the classical concept of convexity by replacing the Euclidean norm in the definition with a distance in a more general sense (more precisely, with the Bregman divergence). An important feature of the considered classes of problems is the reduced requirements concerning the level of smoothness of the objective functionals. More precisely, we consider relatively smooth and relatively Lipschitz-continuous objective functionals, which allows the proposed techniques to be applied to many applied problems, such as the Intersection of Ellipsoids Problem (IEP), the Support Vector Machine (SVM) for binary classification, etc. If the objective functional is convex, the condition of relative strong convexity can be ensured by regularizing the problem. In this work, we propose, for the first time, adaptive gradient-type methods for optimization problems with relatively strongly convex and relatively Lipschitz-continuous functionals. Further, we propose universal methods for relatively strongly convex optimization problems. This technique is based on introducing an artificial inaccuracy into the optimization model, so the proposed methods can be applied both to relatively smooth and to relatively Lipschitz-continuous functionals. Additionally, we demonstrate that the proposed universal gradient-type methods are optimal, up to a constant factor, for both classes of relatively strongly convex problems. We also show how to apply the technique of restarting the mirror descent algorithm to solve relatively Lipschitz-continuous optimization problems, and we prove an optimal estimate of the convergence rate of this technique. Finally, we present the results of numerical experiments comparing the performance of the proposed methods.
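For reference, the central notions the abstract relies on can be written out explicitly. The formalization below follows the standard definitions of the relative-smoothness literature (Bauschke, Bolte, Teboulle; Lu, Freund, Nesterov) rather than quoting the paper itself. Given a convex prox function d, the Bregman divergence it generates is

    V(y, x) = d(y) - d(x) - \langle \nabla d(x), y - x \rangle,

and a functional f is called \mu-relatively strongly convex, L-relatively smooth, or M-relatively Lipschitz continuous if, respectively,

    f(y) \ge f(x) + \langle \nabla f(x), y - x \rangle + \mu V(y, x),
    f(y) \le f(x) + \langle \nabla f(x), y - x \rangle + L V(y, x),
    \langle \nabla f(x), x - y \rangle \le M \sqrt{2 V(y, x)}    for all x, y.

The restart technique mentioned in the abstract also admits a compact sketch. The Python snippet below is a minimal illustration under simplifying assumptions, not the authors' algorithm: it takes the Euclidean prox function d(x) = ||x||^2 / 2 (so V(y, x) = ||y - x||^2 / 2), a subgradient oracle grad, an initial bound R2 on V(x_*, x_0), and rough constants M and mu; the stage length is chosen so that each stage provably halves R2.

    import numpy as np

    def md_stage(grad, x0, M, R2, n_steps):
        # One stage of fixed-step mirror descent with the Euclidean prox;
        # for an M-Lipschitz objective this step size guarantees
        # f(x_avg) - f_* <= M * sqrt(2 * R2 / n_steps).
        h = np.sqrt(2.0 * R2 / n_steps) / M
        x = np.array(x0, dtype=float)
        x_sum = np.zeros_like(x)
        for _ in range(n_steps):
            x = x - h * grad(x)        # (sub)gradient step
            x_sum += x
        return x_sum / n_steps         # averaged iterate of the stage

    def restarted_md(grad, x0, M, mu, R2, eps):
        # n_steps = ceil(8 M^2 / (mu^2 R2)) makes M * sqrt(2 * R2 / n_steps)
        # <= mu * R2 / 2, so mu-strong convexity gives V(x_*, x_avg) <= R2 / 2:
        # the squared-distance bound halves at every restart.
        x = np.array(x0, dtype=float)
        while mu * R2 > eps:           # on exit, f(x) - f_* <= eps
            n_steps = int(np.ceil(8.0 * M**2 / (mu**2 * R2)))
            x = md_stage(grad, x, M, R2, n_steps)
            R2 /= 2.0
        return x

    # Smoke test with hypothetical constants: f(x) = (mu/2) * ||x - b||^2.
    b = np.array([1.0, -2.0, 0.5])
    mu = 0.5
    x_opt = restarted_md(lambda x: mu * (x - b), np.zeros(3),
                         M=mu * np.linalg.norm(b),  # rough Lipschitz bound near the trajectory
                         mu=mu, R2=0.5 * b @ b, eps=1e-3)

Because the stage lengths grow geometrically while R2 halves, the total number of oracle calls is dominated by the last stage, giving an O(M^2 / (\mu \varepsilon)) count; this matches, up to a constant factor, the optimal estimate the abstract refers to.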

Keywords: adaptive method, relatively strongly convex functional, relatively smooth functional, relatively Lipschitz-continuous functional, optimal method, mirror descent.

UDC: 519.8

Received: 12.02.2022
Accepted: 13.02.2022

Language: English

DOI: 10.20537/2076-7633-2022-14-2-445-472


