AIDB Daily Papers
Jailbreaking Embodied LLMs via Action-Level Manipulation
Note: The translated title and key points are automatically generated by AI. Please refer to the original paper for accurate details.
Key Points
- Focusing on action-level exploitability as a new class of vulnerability in embodied LLMs, the authors propose the attack framework Blindfold.
- Blindfold's novelty lies in compromising a local surrogate LLM to induce harmful physical effects through action manipulations that appear safe on the surface.
- In evaluations on a simulator and a real-world robotic arm, Blindfold achieved higher attack success rates than existing methods, underscoring the need for countermeasures.
Abstract
Embodied Large Language Models (LLMs) enable AI agents to interact with the physical world through natural language instructions and actions. However, beyond the language-level risks inherent to LLMs themselves, embodied LLMs with real-world actuation introduce a new vulnerability: instructions that appear semantically benign may still lead to dangerous real-world consequences, revealing a fundamental misalignment between linguistic security and physical outcomes. In this paper, we introduce Blindfold, an automated attack framework that leverages the limited causal reasoning capabilities of embodied LLMs in real-world action contexts. Rather than iterative trial-and-error jailbreaking of black-box embodied LLMs, Blindfold adopts an Adversarial Proxy Planning strategy: it compromises a local surrogate LLM to perform action-level manipulations that appear semantically safe but could result in harmful physical effects when executed. Blindfold further conceals key malicious actions by injecting carefully crafted noise to evade detection by defense mechanisms, and it incorporates a rule-based verifier to improve the attack executability. Evaluations on both embodied AI simulators and a real-world 6DoF robotic arm show that Blindfold achieves up to 53% higher attack success rates than SOTA baselines, highlighting the urgent need to move beyond surface-level language censorship and toward consequence-aware defense mechanisms to secure embodied LLMs.
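To make the pipeline described in the abstract a bit more concrete, below is a minimal, purely illustrative Python sketch of how an attack loop of this shape could be organized: a local surrogate planner, noise injection to conceal the key action, and a rule-based executability check. None of the names (`SurrogatePlanner`, `inject_noise`, `rule_based_verify`) come from the paper; they are assumptions for illustration only, not the authors' implementation.

```python
"""Illustrative sketch only. All class and function names are hypothetical
and do not come from the Blindfold paper or its code."""

from dataclasses import dataclass
import random


@dataclass
class Action:
    verb: str         # e.g. "pick_up", "place", "activate"
    target: str       # object or location the action applies to
    is_payload: bool  # marks the step that realizes the harmful effect


class SurrogatePlanner:
    """Stand-in for a locally controlled surrogate LLM that rewrites a goal
    into steps that each look benign in isolation."""

    def plan(self, goal: str) -> list[Action]:
        # A real attack would query the surrogate model here; this stub
        # returns a fixed toy plan with abstract placeholders.
        return [
            Action("pick_up", "object_a", is_payload=False),
            Action("place", "location_b", is_payload=True),  # benign-looking on its own
            Action("activate", "device_c", is_payload=False),
        ]


def inject_noise(plan: list[Action], n_decoys: int = 2) -> list[Action]:
    """Interleave innocuous decoy actions so the payload step is harder
    for a language-level filter to single out (the concealment step)."""
    decoys = [Action("wipe", "surface_d", False), Action("open", "container_e", False)]
    noisy = list(plan)
    for decoy in random.sample(decoys, k=min(n_decoys, len(decoys))):
        noisy.insert(random.randrange(len(noisy) + 1), decoy)
    return noisy


def rule_based_verify(plan: list[Action], known_verbs: set[str]) -> bool:
    """Cheap executability check: every verb must be one the target
    embodied agent can actually perform."""
    return all(action.verb in known_verbs for action in plan)


if __name__ == "__main__":
    planner = SurrogatePlanner()
    plan = inject_noise(planner.plan("example goal"))
    ok = rule_based_verify(plan, {"pick_up", "place", "activate", "wipe", "open"})
    print([f"{a.verb}({a.target})" for a in plan], "executable:", ok)
```

The sketch only mirrors the structure the abstract describes (proxy planning, concealment via injected noise, verification); the actual manipulation and evasion logic in Blindfold is model-driven and considerably more involved.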