AIDB Daily Papers
Reproducible, Explainable, and Effective Evaluation of Agentic AI in Software Engineering
Note: The title and key points below were automatically generated by AI. Please consult the original paper for accurate details.
Key Points
- Analyzes current research trends and evaluation practices in applying Agentic AI to software engineering (SE) challenges.
- Points out that the black-box nature of LLMs and flaws in evaluation design hinder demonstrating the superiority of Agentic AI approaches, and argues that improvement is needed.
- Recommends publicly releasing artifacts such as Thought-Action-Result (TAR) trajectories to enable reproducible, explainable, and effective evaluation.
Abstract
With the advancement of Agentic AI, researchers are increasingly leveraging autonomous agents to address challenges in software engineering (SE). However, the large language models (LLMs) that underpin these agents often function as black boxes, making it difficult to justify the superiority of Agentic AI approaches over baselines. Furthermore, missing information in the evaluation design description frequently renders the reproduction of results infeasible. To synthesize current evaluation practices for Agentic AI in SE, this study analyzes 18 papers on the topic, published or accepted by ICSE 2026, ICSE 2025, FSE 2025, ASE 2025, and ISSTA 2025. The analysis identifies prevailing approaches and their limitations in evaluating Agentic AI for SE, both in current research and potential future studies. To address these shortcomings, this position paper proposes a set of guidelines and recommendations designed to empower reproducible, explainable, and effective evaluations of Agentic AI in software engineering. In particular, we recommend that Agentic AI researchers make their Thought-Action-Result (TAR) trajectories and LLM interaction data, or summarized versions of these artifacts, publicly accessible. Doing so will enable subsequent studies to more effectively analyze the strengths and weaknesses of different Agentic AI approaches. To demonstrate the feasibility of such comparisons, we present a proof-of-concept case study that illustrates how TAR trajectories can support systematic analysis across approaches.
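The abstract recommends publishing Thought-Action-Result (TAR) trajectories so that later studies can compare approaches systematically. The paper does not specify a concrete schema; the sketch below is a hypothetical illustration of what a shareable TAR record and a minimal cross-approach summary could look like (all names, fields, and the toy trajectory are assumptions, not the authors' format):

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class TARStep:
    """One Thought-Action-Result step in an agent trajectory (hypothetical schema)."""
    thought: str   # the agent's reasoning before acting
    action: str    # the tool call or command the agent issued
    result: str    # the observation returned by the environment

def summarize_trajectory(steps):
    """Derive simple statistics that could be compared across Agentic AI approaches."""
    return {
        "num_steps": len(steps),
        "actions_used": sorted({s.action.split()[0] for s in steps}),
    }

# A toy trajectory for an agent diagnosing a failing test.
trajectory = [
    TARStep("Locate the failing test.", "grep -r test_parse", "tests/test_parser.py:12"),
    TARStep("Inspect the parser.", "cat src/parser.py", "def parse(...): ..."),
    TARStep("Apply a fix and re-run tests.", "pytest tests/", "1 passed"),
]

# Releasing trajectories in a plain format such as JSON is one way to make
# the kind of systematic analysis the authors describe feasible.
serialized = json.dumps([asdict(s) for s in trajectory], indent=2)
summary = summarize_trajectory(trajectory)
```

Even a summarized version of such artifacts, as the authors suggest, would let subsequent studies analyze where different agents spend steps and which tools they rely on.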