
Vestnik YuUrGU. Ser. Mat. Model. Progr., 2020 Volume 13, Issue 1, Pages 118–128 (Mi vyuru535)

This article is cited in 4 papers

Programming and Computer Software

Special aspects of matrix operation implementations for low-precision neural network model on the Elbrus platform

E. E. Limonova (a, b), M. I. Neiman-zade (c), V. L. Arlazarov (a)

a Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, Moscow, Russian Federation
b Smart Engines Service LLC, Moscow, Russian Federation
c JSC “MCST”, Moscow, Russian Federation

Abstract: This paper investigates the possibility of an efficient implementation of computations in low-precision neural network models on the Elbrus platform with a VLIW architecture. Such models are widely used in practice to increase the computational efficiency of recognition and are well suited to processors with the x86 and ARM architectures. In this paper, we consider an 8-bit neural network model, in which matrix multiplication is the most resource-intensive part of the implementation. We present an efficient implementation of matrix multiplication that takes into account the features of the Elbrus architecture: several computational channels with different arithmetic logic units, an array prefetch buffer, and a proprietary SIMD extension. Theoretical and experimental comparisons of the computational efficiency of low-precision and classical neural network models show that Elbrus processors have far greater capability for fast floating-point computation, so new approaches are required to increase the computational efficiency of neural network models on this platform.

Keywords: low-precision neural networks, computational efficiency, Elbrus architecture, matrix operations.

UDC: 004.93

MSC: 68T10

Received: 07.10.2019

Language: English

DOI: 10.14529/mmp200109
