
Zap. Nauchn. Sem. POMI, 2023 Volume 529, Pages 123–139 (Mi znsl7423)

Pre-training LongT5 for Vietnamese mass-media multi-document summarization

N. Rusnachenko (a), The Anh Le (b), Ngoc Diep Nguyen (c)

(a) Bauman Moscow State Technical University
(b) FPT University, Can Tho, Viet Nam
(c) CyberIntellect, Moscow, Russia

Abstract: Multi-document summarization is the task of extracting the most salient information from a set of input documents. One of the main challenges in this task is the long-term dependency problem. For texts written in Vietnamese, it is further complicated by the specific syllable-based text representation and the lack of labeled datasets. Recent advances in machine translation have led to widespread adoption of a related architecture known as the Transformer. Pretrained on large amounts of raw text, Transformers are able to capture deep knowledge of the texts. In this paper, we survey applications of language models to text summarization problems, including important Vietnamese text summarization models. Based on this survey, we select LongT5, pretrain it from scratch, and then fine-tune it for the Vietnamese multi-document text summarization problem. We analyze the resulting model and experiment with multi-document Vietnamese datasets, including ViMs, VMDS, and VLSP2022. We conclude that a Transformer-based model pretrained on a large amount of unlabeled Vietnamese texts allows us to achieve promising results, which are further improved by fine-tuning on a small amount of manually summarized texts. The pretrained model used in the experiment section has been made available online at https://github.com/nicolay-r/ViLongT5.
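
For illustration, the sketch below (not the authors' exact pipeline) shows how a fine-tuned LongT5 summarization checkpoint of this kind could be applied to a set of Vietnamese documents using the Hugging Face transformers library. The checkpoint path, the document-concatenation step, and the generation parameters are assumptions; the released model should be obtained from the repository linked above.

    # A minimal sketch, assuming a LongT5 summarization checkpoint compatible with
    # the Hugging Face `transformers` library. The checkpoint path is a placeholder;
    # see https://github.com/nicolay-r/ViLongT5 for the model released by the authors.
    from transformers import AutoTokenizer, LongT5ForConditionalGeneration

    MODEL_NAME = "path/to/vilongt5-checkpoint"  # hypothetical path, substitute the released checkpoint

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = LongT5ForConditionalGeneration.from_pretrained(MODEL_NAME)

    # Multi-document input: the documents are concatenated into one long sequence,
    # which LongT5's sparse (local / transient-global) attention is designed to handle.
    documents = ["<document 1 text>", "<document 2 text>", "<document 3 text>"]
    source = " ".join(documents)

    inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=4096)
    summary_ids = model.generate(**inputs, max_length=256, num_beams=4)
    print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))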

Key words and phrases: Vietnamese multi-document summarization, text summarization, Transformers, language models.

UDC: 81.322.2

Received: 06.09.2023

Language: English



© Steklov Math. Inst. of RAS, 2024