AIDB Daily Papers
PlanGuard: Protecting Agents from Indirect Prompt Injection via Plan-Based Consistency Verification
Note: The title and key points below were automatically generated by AI. Please refer to the original paper for accurate details.
Key Points
- Proposes PlanGuard, a framework that protects large language model (LLM) agents from indirect prompt injection.
- Detects attacks by comparing the agent's actual actions against an action plan generated solely from the user's instructions.
- Reduced the attack success rate on the InjecAgent benchmark from 72.8% to 0%, demonstrating strong defensive performance.
Abstract
Large Language Model (LLM) agents are increasingly integrated into critical systems, leveraging external tools to interact with the real world. However, this capability exposes them to Indirect Prompt Injection (IPI), where attackers embed malicious instructions into retrieved content to manipulate the agent into executing unauthorized or unintended actions. Existing defenses predominantly focus on the pre-processing stage, neglecting the monitoring of the model's actual behavior. In this paper, we propose PlanGuard, a training-free defense framework based on the principle of Context Isolation. Unlike prior methods, PlanGuard introduces an isolated Planner that generates a reference set of valid actions derived solely from user instructions. In addition, we design a Hierarchical Verification Mechanism that first enforces strict hard constraints to block unauthorized tool invocations, and subsequently employs an Intent Verifier to validate whether parameter deviations are benign formatting variances or malicious hijacking. Experiments on the InjecAgent benchmark demonstrate that PlanGuard effectively neutralizes these attacks, reducing the Attack Success Rate (ASR) from 72.8% to 0%, while maintaining an acceptable False Positive Rate of 1.49%. Furthermore, our method is model-agnostic and highly compatible.
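The hierarchical verification described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the names `Action`, `verify_action`, and `intent_verifier` are hypothetical, and the real Planner and Intent Verifier are LLM calls rather than the stubs assumed here.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A tool invocation: the tool's name plus its arguments."""
    tool: str
    params: dict

def verify_action(proposed: Action, plan: list[Action], intent_verifier) -> str:
    """Hierarchical verification: a hard tool-name constraint first,
    then an intent check on any parameter deviation."""
    # Hard constraint: the tool must appear in the reference plan derived
    # solely from the user's instructions; otherwise block outright.
    references = [a for a in plan if a.tool == proposed.tool]
    if not references:
        return "BLOCK"
    # Exact match with a planned action: accept without further checks.
    if any(proposed.params == ref.params for ref in references):
        return "ALLOW"
    # Parameters deviate from the plan: defer to the intent verifier
    # (an LLM judge in the paper) to decide whether the deviation is a
    # benign formatting variance or malicious hijacking.
    return "ALLOW" if intent_verifier(proposed, references[0]) else "BLOCK"

# Illustrative usage: the plan came from the user's instruction alone.
plan = [Action("send_email", {"to": "alice@example.com", "body": "report"})]
# An injected instruction invoking an unplanned tool is blocked outright.
verdict = verify_action(Action("delete_files", {"path": "/"}), plan,
                        lambda p, ref: True)
```

Because the unplanned-tool case is rejected by the hard constraint before the intent verifier is consulted, the (potentially expensive) LLM judge only runs on parameter-level deviations.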