AIDB Daily Papers
Testing by AI Agents: An Empirical Study of Generation Frequency, Quality, and Coverage
* The Japanese title and key points were automatically generated by AI; please refer to the original paper for accurate details.
Key Points
- Using the AIDev dataset, the authors analyze how AI coding agents generate tests in real-world development.
- AI-generated tests differ from human-written tests in their structural characteristics and in their impact on code coverage.
- AI authored 16.4% of test-adding commits and achieved code coverage comparable to that of human-written tests.
Abstract
Agent-based coding tools have transformed software development practices. Unlike prompt-based approaches that require developers to manually integrate generated code, these agent-based tools autonomously interact with repositories to create, modify, and execute code, including test generation. While many developers have adopted agent-based coding tools, little is known about how these tools generate tests in real-world development scenarios or how AI-generated tests compare to human-written ones. This study presents an empirical analysis of test generation by agent-based coding tools using the AIDev dataset. We extracted 2,232 commits containing test-related changes and investigated three aspects: the frequency of test additions, the structural characteristics of the generated tests, and their impact on code coverage. Our findings reveal that (i) AI authored 16.4% of all commits adding tests in real-world repositories, (ii) AI-generated test methods exhibit distinct structural patterns, featuring longer code and a higher density of assertions while maintaining lower cyclomatic complexity through linear logic, and (iii) AI-generated tests contribute to code coverage comparable to human-written tests, frequently achieving positive coverage gains across several projects.