AIDB Daily Papers
HiGMem, a Hierarchical Memory System That Intelligently Guides LLMs: Rethinking Memory Management for Long-Term Conversational Agents
Note: The title and key points are automatically generated by AI. Please refer to the original paper for accurate details.
Key Points
- Proposes HiGMem, a memory system for long-term conversational agents that retrieves relevant information efficiently while excluding unnecessary context.
- Addresses the shortcomings of retrieval methods that rely solely on vector similarity by using event summaries as anchors to predict which parts of the dialogue history matter.
- Achieves high accuracy on the LoCoMo10 benchmark, enabling more precise answer generation while drastically reducing the number of retrieved turns.
Abstract
Long-term conversational large language model (LLM) agents require memory systems that can recover relevant evidence from historical interactions without overwhelming the answer stage with irrelevant context. However, existing memory systems, including hierarchical ones, still often rely solely on vector similarity for retrieval. This tends to produce bloated evidence sets: adding many superficially similar dialogue turns yields little additional recall, but lowers retrieval precision, increases answer-stage context cost, and makes retrieved memories harder to inspect and manage. To address this, we propose HiGMem (Hierarchical and LLM-Guided Memory System), a two-level event-turn memory system that lets LLMs use event summaries as semantic anchors to predict which related turns are worth reading. The model inspects high-level event summaries first and then focuses on a smaller set of potentially useful turns, assembling a concise and reliable evidence set through reasoning without incurring retrieval overhead far beyond that of plain vector retrieval. On the LoCoMo10 benchmark, HiGMem achieves the best F1 on four of five question categories and improves adversarial F1 from 0.54 to 0.78 over A-Mem, while retrieving an order of magnitude fewer turns. Code is publicly available at https://github.com/ZeroLoss-Lab/HiGMem.
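To make the two-level retrieval flow concrete, below is a minimal Python sketch in the spirit of the abstract. Every name in it (`Event`, `embed`, `llm_filter_turns`, `retrieve`) is a hypothetical illustration rather than the authors' actual API, and the embedding and LLM steps are stubs; the real implementation is in the linked repository.

```python
# Minimal sketch of a two-level event-turn lookup as described in the
# abstract. All names here are hypothetical illustrations, not the
# authors' actual API; see the linked repository for the real system.
from dataclasses import dataclass

import numpy as np


@dataclass
class Event:
    summary: str            # high-level event summary used as a semantic anchor
    turns: list[str]        # raw dialogue turns grouped under this event
    vec: np.ndarray = None  # embedding of the summary, filled in at index time


def embed(text: str) -> np.ndarray:
    """Toy bag-of-words embedding so the demo is self-contained;
    swap in a real sentence encoder in practice."""
    v = np.zeros(256)
    for token in text.lower().split():
        v[hash(token) % 256] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v


def llm_filter_turns(question: str, event: Event) -> list[str]:
    """Placeholder for the LLM-guided step: given an event summary, an LLM
    would decide which of the event's turns are worth reading for this
    question. This stub simply keeps them all."""
    return event.turns


def retrieve(question: str, events: list[Event], top_k: int = 3) -> list[str]:
    """Stage 1: rank event summaries by vector similarity to the question.
    Stage 2: let the LLM pick a small set of turns under the top events,
    instead of flooding the answer stage with every similar turn."""
    q = embed(question)
    ranked = sorted(events, key=lambda e: float(q @ e.vec), reverse=True)
    evidence: list[str] = []
    for event in ranked[:top_k]:
        evidence.extend(llm_filter_turns(question, event))
    return evidence


# Index time: one Event per summarized episode, with an embedded summary.
events = [
    Event("Alice adopted a dog in March.",
          ["A: I finally adopted a dog!", "B: What breed?", "A: A corgi."],
          vec=embed("Alice adopted a dog in March.")),
    Event("Bob planned a trip to Kyoto.",
          ["B: Booking Kyoto for April.", "A: Try the temples at dawn."],
          vec=embed("Bob planned a trip to Kyoto.")),
]
print(retrieve("What kind of dog does Alice have?", events, top_k=1))
```

The point of this structure is that stage two reads only the turns under a handful of anchor events, so the evidence handed to the answer stage stays small even when the history contains many superficially similar turns.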