AIDB Daily Papers
A "Sleep and Forgetting" Mechanism That Dramatically Improves LLM Memory
Note: The title and key points below are generated automatically by AI. Please refer to the original paper for accurate details.
Key Points
- Proposes SCM, a new memory architecture for LLMs inspired by human memory mechanisms.
- Significant for overcoming the limitations of existing approaches and realizing a persistent, structured, and biologically plausible memory system.
- Achieves perfect recall accuracy over ten-turn conversations, reduces memory noise by 90.9%, and keeps retrieval latency under one millisecond.
Abstract
We present SCM (Sleep-Consolidated Memory), a research preview of a memory architecture for large language models that draws on neuroscientific principles to address a fundamental limitation in current systems: the absence of persistent, structured, and biologically plausible memory. Existing approaches rely on truncating context windows, growing vector databases without bound, or tiered storage systems that lack consolidation and forgetting mechanisms. SCM implements five core components inspired by human memory: a limited-capacity working memory, multi-dimensional importance tagging, offline sleep-stage consolidation with distinct NREM and REM phases, intentional value-based forgetting, and a computational self-model enabling introspection. Across a standardized benchmark suite of eight tests, the prototype achieves perfect recall accuracy over ten-turn conversations while reducing memory noise by 90.9% through adaptive forgetting. Memory search latency remains below one millisecond even with hundreds of stored concepts. This work establishes the architectural foundations for memory systems that consolidate, prioritize, and forget, offering a testable platform for advancing LLM memory research.
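The abstract names SCM's components but not their interfaces. As a rough mental model only, here is a minimal Python sketch of how a capacity-limited working memory, multi-dimensional importance tagging, sleep-stage consolidation, and value-based forgetting might fit together. Every name here (`SleepConsolidatedMemory`, `MemoryItem`, the importance weights, `forget_threshold`, the recency-decay factor) is a hypothetical illustration, not the paper's implementation, and the self-model/introspection component is omitted.

```python
# Hypothetical sketch of an SCM-style memory loop, based only on the
# components named in the abstract. Class names, scoring weights, and
# thresholds are illustrative assumptions, not the paper's design.
from dataclasses import dataclass


@dataclass
class MemoryItem:
    content: str
    recency: float    # 0..1, how recently the item was used
    relevance: float  # 0..1, task relevance assigned at write time
    emotion: float    # 0..1, salience signal (assumed third dimension)

    def importance(self) -> float:
        # Multi-dimensional importance tagging: a simple weighted sum.
        return 0.4 * self.relevance + 0.35 * self.recency + 0.25 * self.emotion


class SleepConsolidatedMemory:
    def __init__(self, working_capacity: int = 7, forget_threshold: float = 0.4):
        self.working: list[MemoryItem] = []    # limited-capacity working memory
        self.long_term: list[MemoryItem] = []  # consolidated store
        self.capacity = working_capacity
        self.forget_threshold = forget_threshold

    def observe(self, item: MemoryItem) -> None:
        """Admit an item; evict the least important one if over capacity."""
        self.working.append(item)
        if len(self.working) > self.capacity:
            self.working.sort(key=MemoryItem.importance, reverse=True)
            self.working.pop()  # drop the lowest-importance item

    def sleep(self) -> None:
        """Offline consolidation pass, loosely mirroring the two sleep
        phases the abstract describes."""
        # "NREM"-like phase: replay working memory, consolidating items
        # whose importance clears the threshold; the rest are forgotten.
        for item in self.working:
            if item.importance() >= self.forget_threshold:
                self.long_term.append(item)
        self.working.clear()
        # "REM"-like stand-in: decay recency, then prune long-term items
        # that have fallen below the threshold (value-based forgetting).
        for item in self.long_term:
            item.recency *= 0.9
        self.long_term = [i for i in self.long_term
                          if i.importance() >= self.forget_threshold]

    def recall(self, query: str) -> list[str]:
        """Naive substring search over both stores (a stand-in for the
        sub-millisecond retrieval the paper reports)."""
        pool = self.working + self.long_term
        return [i.content for i in pool if query.lower() in i.content.lower()]


if __name__ == "__main__":
    scm = SleepConsolidatedMemory(working_capacity=3)
    scm.observe(MemoryItem("user prefers concise answers", 0.9, 0.9, 0.6))
    scm.observe(MemoryItem("weather small talk", 0.8, 0.1, 0.1))
    scm.observe(MemoryItem("project deadline is Friday", 0.9, 0.8, 0.7))
    scm.sleep()  # consolidate important items, forget low-value ones
    print(scm.recall("deadline"))  # -> ['project deadline is Friday']
```

The point of the sketch is the division of labor: admission and eviction happen online in the bounded working store, while consolidation and forgetting run as a separate offline pass, which is what distinguishes this design from unbounded vector-database approaches.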