AIDB Daily Papers
Red-MIRROR: Agentic LLM-Based Autonomous Penetration Testing with Reflective Verification and Knowledge-Augmented Interaction
※ The Japanese title and key points are automatically generated by AI. Please refer to the original paper for accurate details.
Key Points
- Red-MIRROR proposes a new automated penetration testing system that leverages LLMs and integrates Retrieval-Augmented Generation (RAG) with a Shared Recurrent Memory Mechanism (SRMM).
- The key contribution is overcoming the limitations of prior approaches — over-reliance on parametric knowledge, fragmented memory, and insufficient validation — to provide a robust solution for complex web exploitation.
- It achieves an 86.0% success rate on the XBOW benchmark, substantially outperforming existing systems such as PentestAgent and AutoPT, with improved long-horizon reasoning and payload refinement capability.
Abstract
Web applications remain the dominant attack surface in cybersecurity, where vulnerabilities such as SQL injection, XSS, and business logic flaws continue to cause significant data breaches. While penetration testing is effective for identifying these weaknesses, traditional manual approaches are time-consuming and heavily dependent on scarce expert knowledge. Recent Large Language Model (LLM)-based multi-agent systems have shown promise in automating penetration testing, yet they still suffer from critical limitations: over-reliance on parametric knowledge, fragmented session memory, and insufficient validation of attack payloads and responses. This paper proposes Red-MIRROR, a novel multi-agent automated penetration testing system that introduces a tightly coupled memory-reflection backbone to explicitly govern inter-agent reasoning. By synthesizing Retrieval-Augmented Generation (RAG) for external knowledge augmentation, a Shared Recurrent Memory Mechanism (SRMM) for persistent state management, and a Dual-Phase Reflection mechanism for adaptive validation, Red-MIRROR provides a robust solution for complex web exploitation. Empirical evaluation on the XBOW benchmark and Vulhub CVEs shows that Red-MIRROR achieves performance comparable to state-of-the-art agents on Vulhub scenarios, while demonstrating a clear advantage on the XBOW benchmark. On the XBOW benchmark, Red-MIRROR attains an overall success rate of 86.0 percent, outperforming PentestAgent (50.0 percent), AutoPT (46.0 percent), and the VulnBot baseline (6.0 percent). Furthermore, the system achieves a 93.99 percent subtask completion rate, indicating strong long-horizon reasoning and payload refinement capability. Finally, we discuss ethical implications and propose safeguards to mitigate misuse risks.
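To make the abstract's architecture concrete, the loop it describes — retrieve external knowledge (RAG), draw on persistent shared memory (SRMM), and validate in two phases (the payload before sending, the response after) — can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's implementation: every function name, the toy keyword-based retriever, and the success check are assumptions made for demonstration.

```python
# Hypothetical sketch of one agent step combining RAG retrieval, shared
# memory, and dual-phase reflection. All names and logic are illustrative,
# not taken from the Red-MIRROR paper.

from dataclasses import dataclass, field


@dataclass
class SharedMemory:
    """Persistent state shared across agent turns (stand-in for SRMM)."""
    entries: list = field(default_factory=list)

    def write(self, note: str) -> None:
        self.entries.append(note)

    def context(self) -> str:
        return "\n".join(self.entries)


def retrieve_knowledge(task: str, corpus: dict) -> str:
    """Toy RAG step: return the corpus entry whose key appears in the task."""
    for key, doc in corpus.items():
        if key in task:
            return doc
    return ""


def pre_reflect(payload: str) -> bool:
    """Reflection phase 1: sanity-check the payload before sending it."""
    return bool(payload.strip())


def post_reflect(response: str) -> bool:
    """Reflection phase 2: check the response for a success indicator."""
    return "flag{" in response


def run_step(task: str, corpus: dict, memory: SharedMemory, send) -> bool:
    """One reasoning turn: augment, validate, act, validate, remember."""
    knowledge = retrieve_knowledge(task, corpus)
    payload = f"{task} using {knowledge}" if knowledge else task
    if not pre_reflect(payload):
        return False
    response = send(payload)
    success = post_reflect(response)
    memory.write(f"task={task} success={success}")
    return success
```

In a real agentic system the `send` callable would issue an HTTP request and the reflection phases would themselves be LLM calls; here they are stubbed so the control flow of the retrieve → validate → act → validate → remember cycle stands out.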