AIDB Daily Papers
When Prompt Ambiguity Improves the Correctness of LLM Code Generation: Exploring How Prompt Wording and Structure Affect Code Generation
※ The title and key points were automatically generated by AI. Please refer to the original paper for the exact content.
Key Points
- This study investigates how prompt ambiguity affects the correctness of LLM code generation, using a structurally rich dataset.
- While prior benchmarks showed prompt ambiguity tending to degrade correctness, this study finds that structural redundancy mitigates that effect and, in some cases, even improves correctness.
- Structurally rich prompt design removes misleading cues and offers practical guidance for making LLM code generation more robust.
Abstract
Large language models are increasingly used for code generation, yet the correctness of their outputs depends not only on model capability but also on how tasks are specified. Prior studies demonstrate that small changes in natural language prompts, particularly under-specification, can substantially reduce code correctness; however, these findings are largely based on minimal-specification benchmarks such as HumanEval and MBPP, where limited structural redundancy may exaggerate sensitivity. In this exploratory study, we investigate how prompt structure, task complexity, and specification richness interact with LLM robustness to prompt mutations. We evaluate 10 different models across HumanEval and the structurally richer LiveCodeBench. Our results reveal that robustness is not a fixed property of LLMs but is highly dependent on prompt structure: the same under-specification mutations that degrade performance on HumanEval have near-zero net effect on LiveCodeBench due to redundancy across descriptions, constraints, examples, and I/O conventions. Surprisingly, we also find that prompt mutations can improve correctness. In LiveCodeBench, under-specification often breaks misleading lexical or structural cues that trigger incorrect retrieval-based solution strategies, leading to correctness improvements that counterbalance degradations. Manual analysis identifies consistent mechanisms behind these improvements, including the disruption of over-fitted terminology, removal of misleading constraints, and elimination of spurious identifier triggers. Overall, our study shows that structurally rich task descriptions can substantially mitigate the negative effects of under-specification and, in some cases, even enhance correctness. We outline categories of prompt modifications that positively influence the behavior of LLM code generation, offering practical insights for writing robust prompts.
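To make the abstract's core contrast concrete, here is a minimal sketch (not the paper's actual tooling; section names and the mutation are illustrative assumptions) of how an under-specification mutation interacts with prompt structure: dropping one section from a structurally rich, LiveCodeBench-style prompt still leaves redundant cues (examples, I/O conventions), whereas a minimal, HumanEval-style prompt has no redundancy to fall back on.

```python
# Illustrative sketch of the "structural redundancy" idea from the abstract.
# Section names ("Description", "Constraints", ...) are hypothetical, not
# taken from the paper's implementation.

def build_prompt(sections: dict) -> str:
    """Join non-empty prompt sections into a single task description."""
    return "\n\n".join(
        f"## {name}\n{text}" for name, text in sections.items() if text
    )

def under_specify(sections: dict, drop: str) -> dict:
    """Under-specification mutation: remove one section from the prompt."""
    return {name: text for name, text in sections.items() if name != drop}

# Structurally rich prompt: redundant cues across several sections.
rich = {
    "Description": "Return the k largest elements of a list, sorted descending.",
    "Constraints": "1 <= k <= len(nums) <= 10**5",
    "Examples": "top_k([3, 1, 4], 2) -> [4, 3]",
    "I/O": "Input: a list of ints and an int k. Output: a list of ints.",
}
# Minimal prompt: a single section carries all the specification.
minimal = {"Description": "Return the k largest elements of a list."}

# Dropping "Constraints" from the rich prompt still leaves examples and
# I/O conventions; the same mutation on the minimal prompt removes
# information the model cannot recover elsewhere.
mutated = under_specify(rich, "Constraints")
print(build_prompt(mutated))
```

The sketch only shows why a per-mutation correctness drop can be near zero on a redundant prompt: the dropped information is usually restated in a surviving section.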