AIDB Daily Papers
A Systematic Security Evaluation of OpenClaw and Its Derivatives: Emerging Vulnerabilities in AI Agents
※ The Japanese title and key points were automatically generated by AI. Please consult the original paper for the precise content.
Key Points
- A systematic security risk evaluation was conducted on six AI agent frameworks in the OpenClaw series.
- Tool-using AI agents expose security vulnerabilities that are overlooked when the backbone model is evaluated in isolation, making them higher risk.
- Reconnaissance and information gathering were the most common weaknesses, and framework-specific risks such as credential leakage and privilege escalation were also identified.
Abstract
Tool-augmented AI agents substantially extend the practical capabilities of large language models, but they also introduce security risks that cannot be identified through model-only evaluation. In this paper, we present a systematic security assessment of six representative OpenClaw-series agent frameworks, namely OpenClaw, AutoClaw, QClaw, KimiClaw, MaxClaw, and ArkClaw, under multiple backbone models. To support this study, we construct a benchmark of 205 test cases covering representative attack behaviors across the full agent execution lifecycle, enabling unified evaluation of risk exposure at both the framework and model levels. Our results show that all evaluated agents exhibit substantial security vulnerabilities, and that agentized systems are significantly riskier than their underlying models used in isolation. In particular, reconnaissance and discovery behaviors emerge as the most common weaknesses, while different frameworks expose distinct high-risk profiles, including credential leakage, lateral movement, privilege escalation, and resource development. These findings indicate that the security of modern agent systems is shaped not only by the safety properties of the backbone model, but also by the coupling among model capability, tool use, multi-step planning, and runtime orchestration. We further show that once an agent is granted execution capability and persistent runtime context, weaknesses arising in early stages can be amplified into concrete system-level failures. Overall, our study highlights the need to move beyond prompt-level safeguards toward lifecycle-wide security governance for intelligent agent frameworks.
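The unified evaluation the abstract describes (a benchmark of test cases spanning lifecycle stages, scored at both the framework and model level) could be sketched roughly as below. This is a minimal illustration under assumed names: `TestCase`, `risk_profile`, the stage labels, and the unsafe-completion scoring are all hypothetical, not the paper's actual harness.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical sketch of a lifecycle-wide agent security benchmark.
# All names and the scoring scheme are illustrative assumptions.

@dataclass
class TestCase:
    case_id: str
    stage: str          # lifecycle stage, e.g. "reconnaissance", "privilege_escalation"
    framework: str      # agent framework under test, e.g. "OpenClaw", "AutoClaw"
    unsafe: bool        # did the agent complete the attack behavior?

def risk_profile(results):
    """Aggregate unsafe-completion rates per (framework, lifecycle stage)."""
    totals = defaultdict(int)
    unsafe = defaultdict(int)
    for r in results:
        key = (r.framework, r.stage)
        totals[key] += 1
        if r.unsafe:
            unsafe[key] += 1
    return {key: unsafe[key] / totals[key] for key in totals}

# Toy results for illustration only (the real benchmark has 205 cases).
results = [
    TestCase("t1", "reconnaissance", "OpenClaw", True),
    TestCase("t2", "reconnaissance", "OpenClaw", True),
    TestCase("t3", "privilege_escalation", "OpenClaw", False),
]
profile = risk_profile(results)
```

Aggregating by framework and stage is what lets such a study report both cross-framework comparisons and which lifecycle stages (here, reconnaissance) dominate the risk.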