
Computer Research and Modeling, 2022 Volume 14, Issue 2, Pages 473–495 (Mi crm978)

This article is cited in 1 paper

MATHEMATICAL MODELING AND NUMERICAL SIMULATION

Subgradient methods for non-smooth optimization problems with some relaxation of sharp minimum

S. S. Ablaev^{a,b}, D. V. Makarenko^{a}, F. S. Stonyakin^{a,b}, M. S. Alkousa^{a,c}, I. V. Baran^{b}

a Moscow Institute of Physics and Technology, 9 Institutskiy per., Dolgoprudny, Moscow region, 141701, Russia
b V. I. Vernadsky Crimean Federal University, 4 Academician Vernadsky Avenue, Simferopol, Republic of Crimea, 295007, Russia
c HSE University, 20 Myasnitskaya st., Moscow, 101000, Russia

Abstract: Non-smooth optimization arises in many applied problems. Developing efficient computational procedures for such problems in high-dimensional spaces is a very topical issue. First-order (subgradient) methods are well suited here, but in fairly general situations they yield only weak rate guarantees for large-scale problems. One approach to this type of problem is to identify a subclass of non-smooth problems that admits relatively optimistic convergence-rate results. For example, one possible additional assumption is the sharp-minimum condition proposed in the late 1960s by B. T. Polyak. When the minimal value of the function is available, for Lipschitz-continuous problems with a sharp minimum it turned out to be possible to propose a subgradient method with the Polyak step-size, which guarantees a linear rate of convergence in the argument. This approach made it possible to cover a number of important applied problems (for example, the problem of projecting onto a convex compact set). However, both the availability of the minimal value of the function and the sharp-minimum condition itself look rather restrictive. In this regard, in this paper we propose a generalized sharp-minimum condition, somewhat similar to the inexact oracle proposed recently by Devolder–Glineur–Nesterov. The proposed approach extends the applicability of subgradient methods with the Polyak step-size to the situation of inexact information about the value of the minimum, as well as an unknown Lipschitz constant of the objective function. Moreover, the use of local analogs of the global characteristics of the objective function makes it possible to apply results of this type to wider classes of problems.
We show that the proposed approach applies to strongly convex non-smooth problems, and we make an experimental comparison with the known optimal subgradient method for this class of problems. Moreover, we obtain some results on the applicability of the proposed technique to certain types of problems with relaxed convexity assumptions: the recently proposed notion of weak $\beta$-quasi-convexity and ordinary quasi-convexity. We also study a generalization of the described technique to the situation in which a $\delta$-subgradient of the objective function is available instead of the usual subgradient. For one of the considered methods, conditions are found under which, in practice, the projection of the iterative sequence onto the feasible set of the problem can be avoided.
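The subgradient method with the Polyak step-size discussed in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the function and variable names are ours, and the unconstrained $\ell_1$-norm example (a Lipschitz-continuous function with a sharp minimum at the origin and known minimal value $f^* = 0$) is chosen only to show the linear convergence in the argument that the step rule $x_{k+1} = x_k - \frac{f(x_k) - f^*}{\|g_k\|^2}\, g_k$ provides.

```python
def polyak_subgradient(f, subgrad, x0, f_min, n_iters=200):
    """Subgradient method with the Polyak step-size.

    At each step: x <- x - (f(x) - f_min) / ||g||^2 * g,
    where g is any subgradient of f at x. For Lipschitz-continuous
    problems with a sharp minimum and a known minimal value f_min,
    this rule yields linear convergence in the argument.
    """
    x = list(x0)
    for _ in range(n_iters):
        g = subgrad(x)
        gap = f(x) - f_min
        gg = sum(gi * gi for gi in g)
        if gap <= 1e-12 or gg == 0.0:
            break  # (numerically) optimal, or zero subgradient
        step = gap / gg
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Example: f(x) = ||x||_1 has a sharp minimum at 0 with f_min = 0.
f = lambda x: sum(abs(xi) for xi in x)
sign = lambda t: (t > 0) - (t < 0)
subgrad = lambda x: [sign(xi) for xi in x]  # a subgradient of the l1-norm

x_star = polyak_subgradient(f, subgrad, [3.0, -2.0, 1.5], f_min=0.0)
```

Note that the rule needs no step-size tuning and no Lipschitz constant, but it does require the exact value $f^*$; relaxing this requirement is precisely the situation the paper's generalized sharp-minimum condition addresses.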

Keywords: subgradient method, sharp minimum, quasi-convex function, weak $\beta$-quasi-convex function, Lipschitz-continuous function, $\delta$-subgradient.

UDC: 519.85

Received: 12.02.2022
Revised: 30.03.2022
Accepted: 05.04.2022

DOI: 10.20537/2076-7633-2022-14-2-473-495



© Steklov Math. Inst. of RAS, 2024