
Artificial Intelligence and Decision Making, 2024 Issue 3, Pages 32–41 (Mi iipr596)

Machine learning, neural networks

Causes of content distortion: analysis and classification of hallucinations in large GPT language models

M. Sh. Madzhumder, D. D. Begunova

Moscow State Linguistic University, Moscow, Russia

Abstract: The article examines hallucinations produced by two versions of the GPT large language model – GPT-3.5-turbo and GPT-4. The primary aim of the study is to investigate the possible origins of hallucinations, classify them, and develop strategies to address them. The work identifies the existing challenges that can lead to the generation of content that does not correspond to factual data and misleads users. Detection and elimination of hallucinations play an important role in the development of artificial intelligence by improving natural language processing capabilities. The results of the study are of practical relevance to developers and users of language models, as they offer approaches that improve the quality and reliability of the generated content.

Keywords: AI system hallucinations, GPT, large language models, artificial intelligence.

DOI: 10.14357/20718594240303
