
Uchenyye zapiski UlGU. Seriya "Matematika i informatsionnyye tekhnologii", 2025 Issue 1, Pages 31–40 (Mi ulsu211)

Machine learning of neural networks: methods and experiments

D. O. Ivanov, Yu. G. Savinov

Ulyanovsk State University, Russia

Abstract: The paper discusses modern approaches to transfer learning and fine-tuning of neural networks aimed at improving quality on small datasets. The theoretical foundations of fine-tuning are presented, including regularization methods (dropout, L2 regularization), learning-rate adaptation, and parameter-efficient fine-tuning (LoRA). An experiment on sentiment classification of restaurant reviews (based on Russian-language Yandex data) is conducted using the Zero-Shot, Feature Extraction, Fine-Tuning, and LoRA approaches. Code examples and tabular and graphical comparisons of model accuracy are presented. The analysis shows that LoRA provides the highest accuracy at a significantly lower computational cost, while Zero-Shot lags behind the other methods. Recommendations on choosing a fine-tuning method for small-data problems are given.
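The core idea behind the LoRA method mentioned in the abstract is to freeze a pretrained weight matrix W and train only a low-rank correction BA added to it. The sketch below illustrates this with NumPy; the dimensions, rank, and scaling factor are hypothetical placeholders, not values from the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 64, 64, 4  # hypothetical layer size and LoRA rank
W = rng.standard_normal((d, k))         # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                    # zero init, so the adapted layer starts equal to W

def lora_forward(x, W, A, B, alpha=8):
    # Effective weight is W + (alpha / r) * B @ A; only A and B are trained.
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.standard_normal((2, k))
y0 = lora_forward(x, W, A, B)
# Because B starts at zero, the adapted layer initially matches the frozen one.
assert np.allclose(y0, x @ W.T)

# Trainable parameters: r*(d+k) instead of d*k.
print(d * k, r * (d + k))  # 4096 vs 512
```

The parameter count shows why LoRA reduces computational load: with rank r = 4 the trainable correction has 512 parameters instead of the 4096 in the full matrix, while the frozen weight W is left untouched.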

Keywords: neural network fine-tuning; transfer learning; regularization; dropout; L2 regularization; adaptive learning rate; LoRA; Zero-Shot; Feature Extraction

UDC: 004.032.2

Received: 16.06.2025



© Steklov Math. Inst. of RAS, 2025