AIDB Daily Papers
SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
Note: The Japanese title and key points were generated automatically by AI. Please consult the original paper for accurate details.
Key Points
- Introduces SWE-bench, an evaluation framework for software engineering built from real GitHub issues.
- The ability to solve complex real-world problems is an important indicator of the current limits of language models.
- Even the best-performing model resolves only about 2% of the issues, so major challenges remain for practical problem solving.
Abstract
Language models have outpaced our ability to evaluate them effectively, but for their future development it is essential to study the frontier of their capabilities. We find real-world software engineering to be a rich, sustainable, and challenging testbed for evaluating the next generation of language models. To this end, we introduce SWE-bench, an evaluation framework consisting of 2,294 software engineering problems drawn from real GitHub issues and corresponding pull requests across 12 popular Python repositories. Given a codebase along with a description of an issue to be resolved, a language model is tasked with editing the codebase to address the issue. Resolving issues in SWE-bench frequently requires understanding and coordinating changes across multiple functions, classes, and even files simultaneously, calling for models to interact with execution environments, process extremely long contexts and perform complex reasoning that goes far beyond traditional code generation tasks. Our evaluations show that both state-of-the-art proprietary models and our fine-tuned model SWE-Llama can resolve only the simplest issues. The best-performing model, Claude 2, is able to solve a mere 1.96% of the issues. Advances on SWE-bench represent steps towards LMs that are more practical, intelligent, and autonomous.
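The abstract describes the task setup: given a codebase and an issue description, the model proposes an edit, and an issue counts as resolved only if the repository's tests pass afterward. A minimal sketch of the resulting headline metric, the resolve rate (the field names here are illustrative assumptions, not the benchmark's exact schema):

```python
# Sketch of the SWE-bench resolve-rate metric. Field names are
# illustrative assumptions, not the benchmark's actual schema.
from dataclasses import dataclass

@dataclass
class Instance:
    repo: str                # e.g. "django/django"
    problem_statement: str   # the GitHub issue text shown to the model
    tests_passed: bool       # did the model's patch make the repo's tests pass?

def resolve_rate(instances):
    """Fraction of issues whose generated patch passes the tests."""
    if not instances:
        return 0.0
    return sum(i.tests_passed for i in instances) / len(instances)

# Toy data mirroring the paper's headline number: resolving 45 of the
# 2,294 issues corresponds to roughly 1.96%.
instances = [Instance("demo/repo", "issue text", i < 45) for i in range(2294)]
print(round(100 * resolve_rate(instances), 2))  # → 1.96
```

In the benchmark itself, "tests pass" is determined by running the repository's own test suite against the patched codebase, which is what makes the evaluation sustainable as new issues accumulate.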