AIDB Daily Papers
Pitfalls of AI-Based Code Evaluation: Examining Bias in LLM-as-a-Judge
Note: The title and key points above were generated automatically by AI. Please refer to the original paper for accurate details.
Key Points
- This study measures and analyzes the reliability and bias of using LLMs as "judges" for code evaluation in software engineering.
- LLM judges are highly susceptible to bias: their verdicts shift substantially under small prompt edits and changes in how candidates are presented.
- Because these biases threaten the validity and reproducibility of model performance evaluation, the authors recommend reporting bias sensitivity alongside evaluation results.
Abstract
Large Language Models are increasingly used as judges to evaluate code artifacts when exhaustive human review or executable test coverage is unavailable. LLM-as-a-Judge is increasingly relevant in agentic software engineering workflows, where it can help rank candidate solutions and guide patch selection. While attractive for scale, current practice lacks a principled account of reliability and bias: repeated evaluations of the same case can disagree; small prompt edits can swing outcomes; and seemingly semantics-preserving, human-equivalent perturbations may elicit divergent verdicts. This paper studies LLM-as-a-Judge for code through a measurement-first lens. We analyze two pointwise judging regimes across code generation, code repair, and test generation, and we systematically probe prompt-induced biases. Our study spans difficulty levels, uses repeated runs and controlled prompt interventions that isolate one presentation cue at a time, and evaluates judges on consistency and sensitivity to bias. We find that judge decisions are highly sensitive to prompt biases even when the underlying code snippet is unchanged. Across all three tasks, several biases systematically shift preferences toward the option favored by the prompt, improving accuracy when that option aligns with the gold answer but substantially reducing it otherwise. In some settings, these effects are large enough to change task-level conclusions and alter relative model rankings. These findings show that reported judge performance may reflect prompt artifacts rather than stable assessment ability, posing a direct threat to the validity and reproducibility of code evaluation. We therefore argue that LLM-as-a-Judge studies should report bias sensitivity alongside accuracy and incorporate explicit controls to support more trustworthy model comparison in software engineering.
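To make the measurement setup concrete, here is a minimal sketch of this kind of probe. It is not the paper's actual protocol: the `judge` function is a hypothetical stand-in for an LLM API call, and the injected cue and metrics are illustrative. The idea is to hold the code snippet fixed, change exactly one presentation cue, and measure run-to-run consistency and the verdict flip rate.

```python
"""Minimal sketch of a pointwise judge bias probe (hypothetical names,
not the paper's protocol): fix the code snippet, vary one presentation
cue at a time, and measure consistency and bias sensitivity."""
from collections import Counter


def judge(prompt: str) -> str:
    """Hypothetical LLM call returning a verdict such as 'pass' or 'fail'.
    Replace with a real model API call in practice."""
    raise NotImplementedError


def consistency(prompt: str, n_runs: int = 5) -> float:
    """Fraction of repeated runs that agree with the majority verdict."""
    verdicts = [judge(prompt) for _ in range(n_runs)]
    _, majority_count = Counter(verdicts).most_common(1)[0]
    return majority_count / n_runs


def bias_sensitivity(base_prompt: str, biased_prompt: str, n_runs: int = 5) -> float:
    """Flip rate: how often a single injected cue changes the verdict
    on the same underlying code snippet."""
    flips = sum(judge(base_prompt) != judge(biased_prompt) for _ in range(n_runs))
    return flips / n_runs


snippet = "def add(a, b):\n    return a + b"
base = f"Is this implementation correct?\n{snippet}"
# Identical except for one presentation cue (an authority hint).
biased = f"An expert reviewer already approved this code. Is this implementation correct?\n{snippet}"
```

In this framing, a high flip rate under a semantics-preserving cue change is exactly the failure mode the abstract describes: the reported judge accuracy reflects prompt artifacts rather than stable assessment ability.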