AIDB Daily Papers
YC-Bench: An AI Agent Benchmark for Long-Horizon Planning and Coherent Execution
Note: The title and key points below are AI-generated. Please consult the original paper for accurate details.
Key Points
- Proposes YC-Bench, a benchmark for evaluating the long-horizon strategic coherence of LLM agents.
- Its novelty lies in having the agent run a startup in a partially observable environment, testing long-term decision-making ability.
- Claude Opus and GLM-5 performed best, but the evaluation also revealed remaining challenges such as over-parallelization.
Abstract
As LLM agents tackle increasingly complex tasks, a critical question is whether they can maintain strategic coherence over long horizons: planning under uncertainty, learning from delayed feedback, and adapting when early mistakes compound. We introduce YC-Bench, a benchmark that evaluates these capabilities by tasking an agent with running a simulated startup over a one-year horizon spanning hundreds of turns. The agent must manage employees, select task contracts, and maintain profitability in a partially observable environment where adversarial clients and growing payroll create compounding consequences for poor decisions. We evaluate 12 models, both proprietary and open source, across 3 seeds each. Only three models consistently surpass the starting capital of $200K, with Claude Opus 4.6 achieving the highest average final funds at $1.27M, followed by GLM-5 at $1.21M at 11× lower inference cost. Scratchpad usage, the sole mechanism for persisting information across context truncation, is the strongest predictor of success, and adversarial client detection is the primary failure mode, accounting for 47% of bankruptcies. Our analysis reveals that frontier models still fail through distinct failure modes such as over-parallelization, demonstrating the capability gaps for long-horizon performance. YC-Bench is open-source, reproducible, and configurable.
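The abstract highlights the scratchpad as the sole mechanism for persisting information across context truncation. The sketch below illustrates that general idea with a hypothetical interface (the class and method names are illustrative, not the benchmark's actual API): recent turns are kept in a fixed-size window, while scratchpad notes survive truncation and are re-injected into every prompt.

```python
# Illustrative sketch (hypothetical API, not YC-Bench's actual code) of a
# scratchpad that persists across context truncation.
from collections import deque


class AgentContext:
    def __init__(self, max_turns: int = 20):
        # Only the most recent `max_turns` observations survive truncation.
        self.history = deque(maxlen=max_turns)
        self.scratchpad = []  # persists for the whole episode

    def observe(self, turn_summary: str) -> None:
        # Appending past maxlen silently drops the oldest turn.
        self.history.append(turn_summary)

    def write_note(self, note: str) -> None:
        # Notes are never truncated; this is the agent's long-term memory.
        self.scratchpad.append(note)

    def build_prompt(self) -> str:
        # Scratchpad notes precede the (possibly truncated) turn history.
        return "\n".join(
            ["[scratchpad]"]
            + self.scratchpad
            + ["[recent turns]"]
            + list(self.history)
        )


ctx = AgentContext(max_turns=2)
ctx.write_note("Client X delayed payment twice; likely adversarial.")
for t in range(5):
    ctx.observe(f"turn {t}: ...")
prompt = ctx.build_prompt()
# Turns 0-2 have been truncated away, but the scratchpad note survives.
```

Under this reading, an agent that never writes notes loses early signals (such as a client's suspicious payment history) once those turns fall out of the window, which is consistent with scratchpad usage predicting success.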