Zh. Vychisl. Mat. Mat. Fiz., 2023 Volume 63, Number 9, Pages 1458–1512 (Mi zvmmf11614)

Optimal control

Gradient-free federated learning methods with $l_1$ and $l_2$-randomization for non-smooth convex stochastic optimization problems

B. A. Alashkar^a, A. V. Gasnikov^{a,b,c}, D. M. Dvinskikh^d, A. V. Lobanov^{a,e,f}

a Moscow Institute of Physics and Technology, Dolgoprudny, Russia
b Institute for Information Transmission Problems RAS, Moscow, Russia
c Caucasus Mathematical Center, Adyghe State University, Maikop, Russia
d National Research University Higher School of Economics, Moscow, Russia
e ISP RAS Research Center for Trusted Artificial Intelligence, Moscow, Russia
f Moscow Aviation Institute, Moscow, Russia

Abstract: This paper studies non-smooth convex stochastic optimization problems. A smoothing technique is used in which the function value at the considered point is replaced by the function value averaged over a ball (in the $l_1$- or $l_2$-norm) of small radius centered at this point; the original problem is thereby reduced to a smooth problem whose gradient Lipschitz constant is inversely proportional to the radius of the ball. An essential property of this smoothing is that an unbiased estimate of the gradient of the smoothed function can be computed using only realizations of the original function. The resulting smooth stochastic optimization problem is proposed to be solved in a distributed federated learning architecture (the problem is solved in parallel: nodes make local steps, e.g., of stochastic gradient descent, then communicate all-to-all, and this procedure is repeated). The goal of the article is to build, on the basis of modern achievements in gradient-free non-smooth optimization and in federated learning, gradient-free methods for solving non-smooth stochastic optimization problems in the federated learning architecture.
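As a rough illustration of the two ingredients named in the abstract, the sketch below (not the authors' algorithm; function names, step sizes, and the toy losses are assumptions) combines a standard two-point gradient-free estimator of the gradient of the $l_2$-ball-smoothed function with a simplified local-SGD round in which nodes take local steps and then average their iterates, playing the role of the all-to-all communication.

```python
import numpy as np

def l2_sphere_sample(d, rng):
    """Uniform sample from the unit l2-sphere in R^d."""
    e = rng.standard_normal(d)
    return e / np.linalg.norm(e)

def two_point_grad_estimate(f, x, gamma, rng):
    """Two-point gradient-free estimate of the gradient of the smoothed
    function f_gamma(x) = E_u[f(x + gamma*u)], u uniform on the l2-ball
    (l2-randomization); only function values of f are queried."""
    d = x.size
    e = l2_sphere_sample(d, rng)
    return (d / (2.0 * gamma)) * (f(x + gamma * e) - f(x - gamma * e)) * e

def local_sgd_round(f_nodes, x, gamma, lr, local_steps, rng):
    """One communication round of a simplified federated (local SGD) scheme:
    each node runs `local_steps` gradient-free steps from the common point x,
    then the iterates are averaged (standing in for all-to-all communication)."""
    iterates = []
    for f in f_nodes:  # f is a zeroth-order oracle for the node's local loss
        x_local = x.copy()
        for _ in range(local_steps):
            g = two_point_grad_estimate(f, x_local, gamma, rng)
            x_local -= lr * g
        iterates.append(x_local)
    return np.mean(iterates, axis=0)

# Toy usage: two nodes with shifted non-smooth (l1-type) losses.
rng = np.random.default_rng(0)
f_nodes = [lambda z: np.abs(z - 1.0).sum(), lambda z: np.abs(z + 1.0).sum()]
x = np.zeros(5)
for _ in range(200):
    x = local_sgd_round(f_nodes, x, gamma=1e-2, lr=1e-2, local_steps=4, rng=rng)
```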

Key words: gradient-free methods, inexact oracle, federated learning.

UDC: 519.85

Received: 18.11.2022
Revised: 20.05.2023
Accepted: 29.05.2023

DOI: 10.31857/S0044466923090028


English version: Computational Mathematics and Mathematical Physics, 2023, 63:9, 1600–1653

© Steklov Math. Inst. of RAS, 2024