
Proceedings of ISP RAS, 2024 Volume 36, Issue 5, Pages 109–126 (Mi tisp926)

The defender's dilemma: are defense methods against different attacks on machine learning models compatible?

G. V. Sazonov (a, b), K. S. Lukyanov (b, c, d), I. N. Meleshin (a)

a Lomonosov Moscow State University
b Ivannikov Institute for System Programming of the RAS
c Moscow Institute of Physics and Technology (National Research University)
d Research Center of the Trusted Artificial Intelligence ISP RAS

Abstract: With the increasing use of artificial intelligence (AI) models, growing attention is being paid to the trustworthiness of AI systems and their security against various types of threats (evasion attacks, poisoning, membership inference, etc.). In this work, we focus on the task of graph node classification, singling it out as one of the most challenging settings. To the best of our knowledge, this is the first study to explore the relationships between defense methods protecting AI models against different types of threats on graph data. Our experiments are conducted on citation and purchase graph datasets. We demonstrate that, in general, it is inadvisable to naively combine defense methods against different types of threats, as this can lead to severe negative consequences, up to a complete loss of model effectiveness. Furthermore, we provide a theoretical proof of the contradiction between defense methods against poisoning attacks on graphs and adversarial training.
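The tension the abstract describes can be made concrete with a sketch of one well-known family of poisoning defenses on graphs: pruning edges whose endpoints have dissimilar features (in the spirit of GCN-Jaccard). Such a defense rewrites the graph structure before training, which is exactly the kind of preprocessing that can interact badly with adversarial training on the same inputs. The function names, the threshold value, and the toy graph below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def jaccard(u, v):
    # Jaccard similarity of two binary feature vectors.
    inter = np.minimum(u, v).sum()
    union = np.maximum(u, v).sum()
    return inter / union if union > 0 else 0.0

def prune_edges(A, X, threshold=0.1):
    """Drop edges between nodes whose binary features are dissimilar.

    A: (n, n) symmetric 0/1 adjacency matrix; X: (n, d) binary features.
    Returns a pruned copy of A; the input matrix is left unchanged.
    Threshold 0.1 is an illustrative choice, not a recommended value.
    """
    A = A.copy()
    n = A.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if A[i, j] and jaccard(X[i], X[j]) < threshold:
                A[i, j] = A[j, i] = 0  # suspected poisoned edge
    return A

# Toy example: nodes 0 and 1 share features, node 2 does not.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])
X = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1]])
A_clean = prune_edges(A, X)  # edge (0, 2) is removed, (0, 1) is kept
```

Note that the pruning decision depends on the node features; adversarial training, which perturbs those same features, can therefore change which edges a defense of this kind keeps, hinting at the incompatibility the paper proves.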

Keywords: AI model attacks; security; graph node classification; trusted AI.

DOI: 10.15514/ISPRAS-2024-36(5)-8



© Steklov Math. Inst. of RAS, 2025