AIDB Daily Papers
Anatomy of Agent Framework Bugs: An Empirical Study of Triggers and Failure Modes
Note: The title and key points were auto-generated by AI. Please consult the original paper for accurate details.
Key Points
- Comprehensively surveys bugs in modern agent frameworks such as CrewAI and AutoGen.
- Reveals complexities unique to autonomous orchestration that prior studies of earlier LLM libraries fail to capture.
- Automatically identifies bug-triggering patterns, contributing to cross-framework testing and improved reliability.
Abstract
Modern agentic frameworks (e.g., CrewAI and AutoGen) have evolved into complex, autonomous multi-agent systems, introducing unique reliability challenges beyond earlier pipeline-based LLM libraries. However, existing empirical studies focus on earlier LLM libraries or task-level bugs, leaving the unique complexities of these agentic frameworks unexplored. We bridge the gap by conducting a comprehensive study of 409 fixed bugs from five representative agentic frameworks. We propose a five-layer abstraction to capture structural complexities in agentic frameworks, spanning from orchestration to infrastructure. Our study uncovers specialized symptoms, such as unexpected execution sequences and ignored user configurations, which are unique to autonomous orchestration. We further identify agent-specific root causes, including model-related faults, cognitive context mismanagement, and orchestration faults. Statistical analysis reveals cross-framework consistency and significant associations among these bug dimensions. Finally, our automated pattern mining identifies frequent bug-triggering patterns (e.g., model backend-ID combinations), and we show their transferability across different framework designs. Our findings facilitate cross-platform testing and improve the reliability of agentic systems.
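To give a feel for what "automated pattern mining" over bug reports could look like, here is a minimal sketch that counts co-occurring bug attributes and keeps pairs above a support threshold. The bug records, attribute names, and threshold are illustrative assumptions, not the paper's actual dataset or mining algorithm.

```python
from collections import Counter
from itertools import combinations

# Hypothetical fixed-bug records: each is the set of framework attributes
# involved in one bug (model backend, identifier handling, affected layer).
bug_records = [
    {"backend:openai", "id:agent-name", "layer:orchestration"},
    {"backend:openai", "id:agent-name", "layer:infrastructure"},
    {"backend:ollama", "id:tool-call", "layer:orchestration"},
    {"backend:openai", "id:agent-name", "layer:orchestration"},
]

def frequent_pairs(records, min_support=2):
    """Count co-occurring attribute pairs; keep those meeting min_support."""
    counts = Counter()
    for attrs in records:
        for pair in combinations(sorted(attrs), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

patterns = frequent_pairs(bug_records)
# A recurring ("backend:openai", "id:agent-name") pair is analogous to the
# "model backend-ID combination" triggers the abstract mentions.
```

A real study would mine such patterns from hundreds of labeled bug-fix commits and validate their transferability across frameworks; this sketch only shows the co-occurrence-counting idea.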