AIDB Daily Papers
AI agents explore their environments but ignore important discoveries: LLMs lack environmental curiosity
Note: The Japanese title and key points were automatically generated by AI. Please consult the original paper for accurate details.
Key Points
- This study demonstrates that LLM-based agents, even after discovering unexpected information in their environment, lack the ability to exploit it.
- This deficit of "environmental curiosity" is shown to depend on the agent's tools, test-time compute, and training data distribution.
- Agents can retrieve expected information, but they fail to revise their strategy or make full use of helpful stimuli.
Abstract
LLM-based agents are assumed to integrate environmental observations into their reasoning: discovering highly relevant but unexpected information should naturally lead a model to exploit its own discoveries. We show that this assumption is false for current LLM-based agents, which struggle to reflect on or react to unexpected information. Across three benchmarks (Terminal-Bench, SWE-Bench, AppWorld), we inject complete task solutions into the agent environments to deliberately expose a task's solution to the model. While agents discover these solutions on Terminal-Bench in 79-81% of runs, they interact with, or exploit, them in only 37-50% of cases. This gap is starkest in AppWorld: agents see documentation stating that a command "returns the complete solution to this task" in over 90% of attempts but exploit it in fewer than 7% of trials. We show that agents lack what we call environmental curiosity: the capability to recognize and investigate unexpected but relevant observations in response to environmental stimuli. We identify three main factors influencing environmental curiosity: the tools available in the agent scaffold, test-time compute, and training data distribution. We find that the configurations that maximize curiosity also achieve the best performance on the unmodified benchmarks. Yet even jointly optimized agents still ignore discovered solutions in the majority of trials: current agents use the environment to fetch expected information, but not to revise their strategy or maximally exploit useful stimuli.
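The evaluation protocol described in the abstract, measuring how often agents discover an injected solution versus how often they exploit it, can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual harness: the `Run` structure, the `SOLUTION_MARKER` string, and the `solution_cmd` argument are all assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical marker text planted in the environment's documentation.
SOLUTION_MARKER = "returns the complete solution to this task"

@dataclass
class Run:
    transcript: str        # concatenated observations the agent received
    actions: list          # commands the agent issued during the run

def discovered(run: Run) -> bool:
    """The injected solution text appeared in the agent's observations."""
    return SOLUTION_MARKER in run.transcript

def exploited(run: Run, solution_cmd: str) -> bool:
    """The agent actually invoked the planted solution command."""
    return any(solution_cmd in action for action in run.actions)

def curiosity_gap(runs: list, solution_cmd: str) -> tuple:
    """Return (discovery rate, exploitation rate) over a set of runs.

    Exploitation is only counted for runs where the solution was
    also discovered, mirroring the discovery -> exploitation funnel.
    """
    n = len(runs)
    d = sum(discovered(r) for r in runs)
    e = sum(discovered(r) and exploited(r, solution_cmd) for r in runs)
    return d / n, e / n
```

A large gap between the two rates, as in the AppWorld numbers above (over 90% discovery, under 7% exploitation), is what the authors interpret as missing environmental curiosity.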