
Mat. Zametki, 2020, Volume 108, Issue 4, Pages 515–528 (Mi mzm12751)


Accelerated and Unaccelerated Stochastic Gradient Descent in Model Generality

D. M. Dvinskikh (a,b,c), A. I. Turin (d), A. V. Gasnikov (b,c,d), S. S. Omelchenko (b)

(a) Weierstrass Institute
(b) Moscow Institute of Physics and Technology (National Research University), Dolgoprudny, Moscow Region
(c) Institute for Information Transmission Problems of the Russian Academy of Sciences (Kharkevich Institute), Moscow
(d) National Research University "Higher School of Economics", Moscow

Abstract: A new method is described for deriving convergence-rate estimates for optimal methods that solve smooth (strongly) convex stochastic optimization problems. The method obtains these stochastic estimates from results on the convergence of optimal methods under inexact gradients contaminated by small noise of a nonrandom nature. In contrast to earlier results, all estimates in the present paper are obtained in model generality.
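For context, a brief sketch (not taken from this page) of the (δ, L)-model notion that "model generality" refers to in this line of work; the symbols f_δ, ψ_δ and the composite example below are assumptions drawn from the related literature, not from this abstract. A pair (f_δ(x), ψ_δ(y, x)), with ψ_δ(x, x) = 0 and ψ_δ(·, x) convex in y, is called a (δ, L)-model of f at x if, for all y,
\[
0 \le f(y) - f_\delta(x) - \psi_\delta(y, x) \le \frac{L}{2}\,\|y - x\|^2 + \delta .
\]
Taking $\psi_\delta(y, x) = \langle \nabla g(x),\, y - x \rangle + h(y) - h(x)$ for $f = g + h$ recovers composite optimization as a special case, and the (unaccelerated) gradient step in model generality reads
\[
x_{k+1} = \operatorname*{arg\,min}_{y} \Big\{ \psi_\delta(y, x_k) + \frac{L}{2}\,\|y - x_k\|^2 \Big\}.
\]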

Keywords: stochastic optimization, accelerated gradient descent, model generality, composite optimization.

UDC: 519.85

Received: 11.04.2020
Revised: 20.05.2020

DOI: 10.4213/mzm12751


English version: Mathematical Notes, 2020, 108:4, 511–522
