
Dokl. RAN. Math. Inf. Proc. Upr., 2025, Volume 527, Pages 171–181 (Mi danma676)

SPECIAL ISSUE: ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING TECHNOLOGIES

RuWikiBench: evaluating large language models through replication of encyclopedia articles

D. A. Grigor'ev, D. I. Chernyshev

Lomonosov Moscow State University, Research Computing Center

Abstract: In light of the growing interest in using large language models (LLMs) as tools for generating scientific texts, evaluating their ability to produce encyclopedic content is becoming increasingly relevant. For Russian-language materials, however, this issue has not been studied sufficiently, and existing benchmarks do not cover key aspects of analytical work with sources. This paper presents RuWikiBench, an open benchmark based on Ruwiki for evaluating the ability of large language models to reproduce Wikipedia-style articles, built around three tasks: selection of relevant sources, article structuring, and section generation. Testing popular open-source LLMs shows that even under ideal conditions the best models do not always follow the expert logic of composing encyclopedic content: even when provided with a perfect source retrieval system, they fail to reproduce the reference table of contents, and the quality of section generation depends only weakly on the number of parameters.
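To make the benchmark structure described above more concrete, the sketch below shows how one evaluation item covering the three tasks might be represented, together with a simple heading-overlap F1 score for the article-structuring task. The field names (candidate_sources, reference_toc, etc.) and the metric are assumptions for illustration only; they are not taken from the paper.

```python
# Minimal sketch of a RuWikiBench-style item and a table-of-contents metric.
# Schema and metric are hypothetical illustrations, not the authors' code.
from dataclasses import dataclass, field
from typing import List


@dataclass
class BenchmarkItem:
    """One encyclopedia article split into three sub-tasks (assumed schema)."""
    title: str
    candidate_sources: List[str]           # pool of sources for the ranking task
    relevant_sources: List[str]            # gold subset cited by the real article
    reference_toc: List[str]               # gold section headings, in order
    reference_sections: List[str] = field(default_factory=list)


def toc_f1(predicted: List[str], reference: List[str]) -> float:
    """F1 overlap between predicted and reference headings (case-insensitive)."""
    pred = {h.strip().lower() for h in predicted}
    ref = {h.strip().lower() for h in reference}
    if not pred or not ref:
        return 0.0
    tp = len(pred & ref)
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(ref)
    return 2 * precision * recall / (precision + recall)


if __name__ == "__main__":
    gold = ["История", "Архитектура", "Примечания"]
    pred = ["История", "Описание", "Примечания"]
    print(f"ToC F1: {toc_f1(pred, gold):.2f}")  # 0.67
```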

Keywords: benchmark, Wikipedia, Ruwiki, large language model.

UDC: 004.8

Received: 19.08.2025
Accepted: 22.09.2025

DOI: 10.7868/S2686954325070148



© Steklov Math. Inst. of RAS, 2025