Abstract:
Large language models (LLMs) have significantly advanced code generation by enabling natural-language-to-code translation. However, the effectiveness of these models depends heavily on prompt engineering, the practice of crafting input prompts that guide model behavior. While prior surveys have explored prompt engineering across general NLP applications, they provide limited insight into its role in code generation. In this survey, we examine 19 prompt engineering strategies specifically designed for code synthesis. We introduce a functional taxonomy dividing these strategies into simple and complex categories, and propose a penalty-based evaluation framework that quantifies the trade-off between model performance and resource consumption. Our analysis consolidates fragmented findings, identifies emerging patterns, and offers actionable guidance for practitioners aiming to optimize LLM-driven code generation. This work establishes a foundation for future research on adaptive and cost-efficient prompting methods for program synthesis.