Abstract:
Sharing scientific knowledge within the community is an important endeavor. However, most papers are written in English, which hinders the dissemination of knowledge in countries where English is not spoken by the majority of people. Machine translation and language models may help to solve this problem, but training and evaluating models in languages other than English remains difficult when little or no data is available in the required language. To address this, we propose the first benchmark for evaluating models on scientific texts in Russian. It consists of papers from a Russian electronic library of scientific publications. We also present a set of tasks that can be used to fine-tune various models on our data, and we provide a detailed comparison of state-of-the-art models on our benchmark.
Keywords: dataset collection, benchmarking, large language models (LLM), LLM evaluation, representation learning.