AIDB Daily Papers
Memory for Autonomous LLM Agents: Mechanisms, Evaluation, and New Frontiers
Note: The title and key points below were auto-generated by AI. Please consult the original paper for accurate details.
Key Points
- Systematically reviews how memory is designed, implemented, and evaluated in LLM agents, emphasizing memory's role in turning agents into adaptive, evolving systems.
- Introduces a novel three-dimensional taxonomy spanning temporal scope, representational substrate, and control policy, and analyzes five major memory mechanism families in depth, including in-context compression and retrieval-augmented stores.
- Uses multi-session agentic tests to expose persistent gaps in current systems, and surveys applications where memory is critical, such as personal assistants and games.
Abstract
Large language model (LLM) agents increasingly operate in settings where a single context window is far too small to capture what has happened, what was learned, and what should not be repeated. Memory -- the ability to persist, organize, and selectively recall information across interactions -- is what turns a stateless text generator into a genuinely adaptive agent. This survey offers a structured account of how memory is designed, implemented, and evaluated in modern LLM-based agents, covering work from 2022 through early 2026. We formalize agent memory as a write--manage--read loop tightly coupled with perception and action, then introduce a three-dimensional taxonomy spanning temporal scope, representational substrate, and control policy. Five mechanism families are examined in depth: context-resident compression, retrieval-augmented stores, reflective self-improvement, hierarchical virtual context, and policy-learned management. On the evaluation side, we trace the shift from static recall benchmarks to multi-session agentic tests that interleave memory with decision-making, analyzing four recent benchmarks that expose stubborn gaps in current systems. We also survey applications where memory is the differentiating factor -- personal assistants, coding agents, open-world games, scientific reasoning, and multi-agent teamwork -- and address the engineering realities of write-path filtering, contradiction handling, latency budgets, and privacy governance. The paper closes with open challenges: continual consolidation, causally grounded retrieval, trustworthy reflection, learned forgetting, and multimodal embodied memory.
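The write--manage--read loop that the abstract formalizes can be sketched as a minimal Python toy. All names here are illustrative assumptions, not the paper's implementation: FIFO eviction stands in for the consolidation and forgetting policies the survey discusses, and keyword-overlap scoring stands in for a retrieval-augmented vector store.

```python
# Minimal sketch of an agent-memory write--manage--read loop (hypothetical).
from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    entries: list[str] = field(default_factory=list)
    capacity: int = 100  # budget enforced by the manage step

    def write(self, observation: str) -> None:
        # Write path: persist a new observation across interactions.
        self.entries.append(observation)

    def manage(self) -> None:
        # Manage step: naive FIFO eviction as a placeholder for
        # learned consolidation/forgetting policies.
        while len(self.entries) > self.capacity:
            self.entries.pop(0)

    def read(self, query: str, k: int = 3) -> list[str]:
        # Read path: toy keyword-overlap retrieval in place of a
        # retrieval-augmented embedding store.
        q = set(query.lower().split())
        scored = sorted(
            self.entries,
            key=lambda e: len(q & set(e.lower().split())),
            reverse=True,
        )
        return scored[:k]


mem = MemoryStore(capacity=4)
for obs in [
    "user prefers dark mode",
    "user asked about flights to Tokyo",
    "flight booking failed with error 502",
    "user rescheduled the Tokyo trip",
]:
    mem.write(obs)
    mem.manage()

print(mem.read("Tokyo flight"))
```

Selective recall falls out of the read step: querying for "Tokyo flight" surfaces the trip-related entries ahead of the unrelated preference note.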