
Computer Research and Modeling, 2019 Volume 11, Issue 2, Pages 205–217 (Mi crm706)

This article is cited in 3 papers

MATHEMATICAL MODELING AND NUMERICAL SIMULATION

On some stochastic mirror descent methods for constrained online optimization problems

M. S. Alkousa

Moscow Institute of Physics and Technology, 9 Institutskiy per., Dolgoprudny, Moscow Region, 141701, Russia

Abstract: The problem of online convex optimization arises naturally when statistical information is updated over time. The mirror descent method is well known for non-smooth optimization problems: it extends the subgradient method for solving non-smooth convex optimization problems to the case of a non-Euclidean distance. This paper is devoted to a stochastic variant of recently proposed mirror descent methods for convex online optimization problems with convex Lipschitz (generally, non-smooth) functional constraints. This means that we can still use the value of the functional constraint, but instead of the (sub)gradients of the objective functional and the functional constraint, we use their stochastic (sub)gradients. More precisely, assume that $N$ convex Lipschitz non-smooth functionals are given on a closed subset of an $n$-dimensional vector space. The problem is to minimize the arithmetic mean of these functionals subject to a convex Lipschitz constraint. Two methods using stochastic (sub)gradients are proposed for solving this problem: an adaptive method (which requires knowledge of the Lipschitz constant of neither the objective functional nor the constraint functional) and a non-adaptive method (which requires knowledge of the Lipschitz constants of both). Note that the stochastic (sub)gradient of each functional may be computed only once. In the case of non-negative regret, we find that the number of non-productive steps is $O(N)$, which indicates the optimality of the proposed methods. We consider an arbitrary proximal structure, which is essential for decision-making problems. Results of numerical experiments are presented, comparing the adaptive and non-adaptive methods on several examples; they show that the adaptive method can significantly improve the number of solutions found.
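The scheme described in the abstract can be sketched in code. The following is a minimal illustration of the adaptive variant in the Euclidean proximal setup (so the mirror step reduces to a plain (sub)gradient step); the problem instance, the constraint, the tolerance `eps`, and the stepsize rule `h = eps / ||d||^2` are illustrative assumptions, not the paper's exact algorithm or parameters.

```python
# Sketch of adaptive stochastic mirror descent with a functional constraint:
# productive steps move along a stochastic (sub)gradient of the objective,
# non-productive steps move along a (sub)gradient of the constraint.
# Euclidean prox structure is assumed for simplicity.
import random

random.seed(0)

n, N = 5, 50
# Illustrative data: f(x) = (1/N) * sum_i |<a_i, x> - b_i| (convex, non-smooth).
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(N)]
b = [random.uniform(-1, 1) for _ in range(N)]

def g(x):
    # Convex Lipschitz constraint functional: max_j |x_j| - 1 <= 0.
    return max(abs(v) for v in x) - 1.0

def g_subgrad(x):
    # A subgradient of g at x: sign of the largest-magnitude coordinate.
    j = max(range(n), key=lambda k: abs(x[k]))
    s = [0.0] * n
    s[j] = 1.0 if x[j] >= 0 else -1.0
    return s

def stoch_f_subgrad(x):
    # Stochastic (sub)gradient of f: subgradient of one randomly chosen term.
    i = random.randrange(N)
    r = sum(A[i][k] * x[k] for k in range(n)) - b[i]
    sgn = 1.0 if r >= 0 else -1.0
    return [sgn * A[i][k] for k in range(n)]

def adaptive_smd(x0, eps=0.05, T=4000):
    x = list(x0)
    productive = []  # iterates where the constraint is eps-feasible
    for _ in range(T):
        if g(x) <= eps:
            # Productive step: stochastic (sub)gradient of the objective.
            d = stoch_f_subgrad(x)
            productive.append(list(x))
        else:
            # Non-productive step: (sub)gradient of the constraint.
            d = g_subgrad(x)
        nrm2 = sum(v * v for v in d) or 1.0
        h = eps / nrm2  # adaptive stepsize: no Lipschitz constants needed
        x = [x[k] - h * d[k] for k in range(n)]
    # Output the average of the productive iterates.
    m = len(productive)
    return [sum(p[k] for p in productive) / m for k in range(n)], m

x_hat, n_prod = adaptive_smd([3.0] * n)
```

Since each productive iterate satisfies $g(x) \le \varepsilon$ and $g$ is convex, the averaged output `x_hat` is also $\varepsilon$-feasible; the non-adaptive variant would instead fix the stepsize from known Lipschitz constants of $f$ and $g$.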

Keywords: online convex optimization problem, non-smooth constrained optimization problem, adaptive mirror descent, Lipschitz functional, stochastic (sub)gradient.

UDC: 519.85

Received: 18.11.2018
Revised: 05.03.2019
Accepted: 06.03.2019

Language: English

DOI: 10.20537/2076-7633-2019-11-2-205-217
