
Proceedings of ISP RAS, 2023 Volume 35, Issue 6, Pages 179–188 (Mi tisp840)

Security analysis of the draft national standard «Neural network algorithms in protected execution. Automatic training of neural network models on small samples in classification tasks»

G. B. Marshalko, R. A. Romanenkov, J. A. Trufanova

Technical committee on standardization "Cryptography and Security Mechanisms"

Abstract: We propose a membership inference attack against the neural classification algorithm from the draft national standard developed by Omsk State Technical University under the auspices of the Technical Committee on Standardization «Artificial Intelligence» (TC 164). The attack allows an adversary to determine whether given data were used for neural network training, and is thus aimed at violating the confidentiality of the training set. The results show that the protection mechanism for neural network classifiers described in the draft national standard does not provide the declared properties. The results were previously announced at the Ruscrypto'2023 conference.
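To illustrate the class of attack the abstract describes, here is a minimal sketch of a generic membership inference attack: the attacker queries a target model and guesses "training member" when the model's confidence is suspiciously high. The toy 1-NN "model", synthetic data, and threshold below are illustrative assumptions only, not the algorithm from the draft standard or the attack from the paper.

```python
# Hedged sketch: confidence-thresholding membership inference against a
# toy model that memorises its training data. Illustrative only.
import math
import random

random.seed(1)

def sample(label, n):
    """Draw n 2-D points for a class centred at (0,0) or (3,3)."""
    c = 0.0 if label == 0 else 3.0
    return [[random.gauss(c, 1.0), random.gauss(c, 1.0)] for _ in range(n)]

train = sample(0, 50) + sample(1, 50)      # points the model was trained on
held_out = sample(0, 50) + sample(1, 50)   # points it never saw

def confidence(x):
    """1-NN model confidence: exactly 1.0 on memorised points, lower elsewhere."""
    d2 = min(sum((a - b) ** 2 for a, b in zip(x, t)) for t in train)
    return math.exp(-d2)

THRESHOLD = 0.999  # attacker-chosen cut-off; illustrative

def infer_member(x):
    # Attack: declare "member of the training set" on high confidence.
    return confidence(x) >= THRESHOLD

member_hit_rate = sum(infer_member(x) for x in train) / len(train)
false_positive_rate = sum(infer_member(x) for x in held_out) / len(held_out)
print(member_hit_rate, false_positive_rate)
```

Because the toy model is perfectly confident on memorised points, the attack flags every training point while rejecting most held-out points; real attacks exploit the same confidence gap induced by overfitting, only less starkly.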

Keywords: Statistical classification, neural networks, information security, training set, membership inference attack, confidentiality, standardization

DOI: 10.15514/ISPRAS-2023-35(6)-11



© Steklov Math. Inst. of RAS, 2024