AIDB Daily Papers
Defending Against LLM Abuse and AI Malware with Steganographic Canaries: Blocking Data Exfiltration at the Ingestion Boundary
※ The title and key points were generated automatically by AI. Please consult the original paper for accurate details.
Key Points
- Against the threat of AI malware abusing LLMs for reconnaissance and code generation, the paper proposes a defense framework based on steganographic canary files.
- Whereas existing defenses are endpoint-centric, this work is the first attempt to detect threats before input reaches the LLM service, and it introduces a threat taxonomy tailored to AI malware.
- In experiments, the proposed method detected canary-bearing uploads before file encryption began, effectively blocking data exfiltration by AI malware.
Abstract
AI-powered malware increasingly exploits cloud-hosted generative-AI services and large language models (LLMs) as analysis engines for reconnaissance and code generation. Simultaneously, enterprise uploads expose sensitive documents to third-party AI vendors. Both threats converge at the AI service ingestion boundary, yet existing defenses focus on endpoints and network perimeters, leaving organizations with limited visibility once plaintext reaches an LLM service. To address this, we present a framework based on steganographic canary files: realistic documents carrying cryptographically derived identifiers embedded via complementary encoding channels. A pre-ingestion filter extracts and verifies these identifiers before LLM processing, enabling passive, format-agnostic detection without semantic classification. We support two modes of operation: Mode A marks existing sensitive documents with layered symbolic encodings (whitespace substitution, zero-width character insertion, homoglyph substitution), while Mode B generates synthetic canary documents using linguistic steganography (arithmetic coding over GPT-2), augmented with compatible symbolic layers. We model increasing document pre-processing and adversarial capability for both modes via a four-tier transport-transform taxonomy. All methods achieve 100% identifier recovery under benign and sanitization workflows (Tiers 1-2), and the hybrid Mode B maintains 97% recovery under targeted adversarial transforms (Tier 3). An end-to-end case study against an LLM-orchestrated ransomware pipeline confirms that both modes detect and block canary-bearing uploads before file encryption begins. To our knowledge, this is the first framework to systematically combine symbolic and linguistic text steganography into layered canary documents for detecting unauthorized LLM processing, evaluated against a transport-threat taxonomy tailored to AI malware.
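To make the symbolic-channel idea concrete, the following is a minimal sketch of one of the encodings the abstract names, zero-width character insertion, carrying an HMAC-derived identifier that a pre-ingestion filter could verify. All names, the bit encoding, and the identifier length are illustrative assumptions, not the authors' implementation.

```python
# Sketch: embed a cryptographically derived identifier into visible text
# using zero-width characters, then recover and verify it before ingestion.
# The scheme below (one zero-width bit after each character) is a toy
# illustration of the "zero-width character insertion" channel.
import hmac
import hashlib

ZW0 = "\u200b"  # zero-width space      -> bit 0
ZW1 = "\u200c"  # zero-width non-joiner -> bit 1

def derive_identifier(secret_key: bytes, doc_id: str, nbytes: int = 4) -> bytes:
    """Derive a short per-document identifier via HMAC-SHA256 (assumed scheme)."""
    return hmac.new(secret_key, doc_id.encode(), hashlib.sha256).digest()[:nbytes]

def embed(text: str, ident: bytes) -> str:
    """Append one zero-width bit after each character until the identifier is spent."""
    bits = "".join(f"{b:08b}" for b in ident)
    marked = [ch + (ZW1 if bit == "1" else ZW0) for ch, bit in zip(text, bits)]
    return "".join(marked) + text[len(bits):]

def extract(text: str, nbytes: int = 4) -> bytes:
    """Collect zero-width bits; a pre-ingestion filter would verify the result."""
    bits = "".join("1" if c == ZW1 else "0" for c in text if c in (ZW0, ZW1))
    bits = bits[: nbytes * 8]
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

key = b"org-secret-key"
ident = derive_identifier(key, "contracts/q3-report.docx")
marked = embed("Quarterly revenue figures are strictly confidential.", ident)
assert extract(marked) == ident  # identifier recovered -> block the upload
```

Because the marker characters render as zero-width, the marked document is visually identical to the original, which is what lets such a canary survive casual copy-paste into an LLM prompt while remaining machine-detectable at the service boundary.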