
Proceedings of ISP RAS, 2025 Volume 37, Issue 6(4), Pages 175–186 (Mi tisp1111)

A preliminary analysis of prompt engineering in large language models for code generation

Ya. O. Yudinskikh, V. V. Ivanov

Innopolis University

Abstract: Large language models (LLMs) have significantly advanced code generation by enabling natural-language-to-code translation. However, the effectiveness of these models depends heavily on prompt engineering: the practice of crafting input prompts that guide model behavior. While prior surveys have explored prompt engineering across general NLP applications, they provide limited insight into its role in code generation. In this survey, we examine 19 prompt engineering strategies specifically designed for code synthesis. We introduce a functional taxonomy dividing these strategies into simple and complex categories, and propose a penalty-based evaluation framework that quantifies the trade-off between model performance and resource consumption. Our analysis consolidates fragmented findings, identifies emerging patterns, and offers actionable guidance for practitioners aiming to optimize LLM-driven code generation. This work establishes a foundation for future research on adaptive and cost-efficient prompting methods in program synthesis.
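The penalty-based evaluation framework mentioned in the abstract can be illustrated with a minimal sketch. The paper's actual formula and metrics are not given here; the function name `penalized_score`, the `token_budget` parameter, and the example numbers are all hypothetical, chosen only to show how a performance measure (e.g. pass rate) might be discounted by resource consumption (e.g. tokens used by a complex prompting strategy):

```python
from dataclasses import dataclass

@dataclass
class PromptResult:
    """Outcome of one prompting strategy on a benchmark (hypothetical fields)."""
    pass_rate: float      # fraction of problems solved, in [0, 1]
    tokens_used: float    # average tokens consumed per problem

def penalized_score(result: PromptResult,
                    token_budget: float,
                    penalty_weight: float = 0.5) -> float:
    """Illustrative penalty-based score: raw performance minus a penalty
    proportional to token usage beyond a budget. This is NOT the paper's
    formula, only a sketch of the performance/cost trade-off idea."""
    overage = max(0.0, result.tokens_used - token_budget) / token_budget
    return result.pass_rate - penalty_weight * overage

# A cheap simple prompt vs. an expensive complex one (invented numbers).
simple = PromptResult(pass_rate=0.60, tokens_used=800)
complex_ = PromptResult(pass_rate=0.72, tokens_used=2400)

print(penalized_score(simple, token_budget=1000))    # → 0.6 (within budget)
print(penalized_score(complex_, token_budget=1000))  # ≈ 0.02 (heavily penalized)
```

Under such a scheme a complex strategy that raises raw accuracy can still rank below a simple one once its extra token cost is charged, which is the kind of trade-off the survey's framework is designed to expose.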

Keywords: LLM, code generation, prompt engineering.

Language: English

DOI: 10.15514/ISPRAS-2025-37(6)-57



© Steklov Math. Inst. of RAS, 2026