
Vestn. Tomsk. Gos. Univ. Mat. Mekh., 2018 Number 55, Pages 22–37 (Mi vtgu668)

This article is cited in 3 papers

MATHEMATICS

On the convergence rate of the subgradient method with metric variation and its applications in neural network approximation schemes

V. N. Krutikov, N. S. Samoilenko

Kemerovo State University

Abstract: In this paper, the relaxation subgradient method with rank-two correction of metric matrices is studied. It is proven that, on strongly convex functions, if there exists a linear coordinate transformation that reduces the degree of ill-conditioning of the problem, the method has a linear convergence rate determined by that degree of ill-conditioning. The paper also offers a new efficient tool for choosing the initial approximation of an artificial neural network. The use of regularization makes it possible to eliminate the overfitting effect and to efficiently remove low-significance neurons and intraneuronal connections. The ability to solve such problems efficiently is ensured by the subgradient method with rank-two correction of the metric matrix. It is shown experimentally that, on smooth functions, the convergence rate of the method under study is virtually the same as that of the quasi-Newton method, and that the method retains a high convergence rate on nonsmooth functions as well. These computational capabilities are used to build efficient neural network learning algorithms. The paper describes an artificial neural network learning algorithm which, combined with the suppression of redundant neurons, yields reliable approximations in a single computation run.
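The abstract describes a relaxation subgradient method in which the descent direction is shaped by a metric matrix that receives a rank-two correction at each step. The sketch below is only a rough illustration of that idea: it combines subgradient steps with a generic BFGS-style rank-two metric correction. The objective f, the subgradient oracle subgrad, the fixed step length, and the particular update formula are assumptions for illustration and are not the authors' algorithm from the paper.

```python
import numpy as np

def subgradient_metric_method(subgrad, x0, iters=200, step=0.5):
    """Illustrative subgradient method with a rank-two metric correction.

    NOTE: this is a hedged stand-in (a BFGS-like update applied to
    subgradient differences), not the specific method studied in the paper.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    B = np.eye(n)                    # metric matrix (inverse-Hessian analogue)
    g = subgrad(x)
    for _ in range(iters):
        d = -B @ g                   # direction in the current metric
        x_new = x + step * d
        g_new = subgrad(x_new)
        s, y = x_new - x, g_new - g
        sy = float(s @ y)
        if sy > 1e-12:               # rank-two correction of the metric matrix
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            B = V @ B @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Usage on a simple nonsmooth test function: f(x) = |x1| + 2|x2|
subgrad = lambda x: np.array([np.sign(x[0]), 2 * np.sign(x[1])])
x_min = subgradient_metric_method(subgrad, np.array([3.0, -4.0]))
```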

Keywords: method, subgradient, minimization, rate of convergence, neural networks, regularization.

UDC: 519.6

MSC: 65K05, 90C30, 82C32

Received: 31.03.2018

DOI: 10.17223/19988621/55/3


