AIDB Daily Papers
SkillReducer: Optimizing LLM Agent Skills for Token Efficiency
Note: The title and key points below were auto-generated by AI. Please consult the original paper for accurate details.
Key Points
- A large-scale study identifies systemic inefficiencies in skills (instruction sets), which are essential for extending the capabilities of LLM agents.
- SkillReducer is a two-stage optimization framework that removes unnecessary information and delivers the necessary information efficiently.
- SkillReducer is shown to substantially compress skill descriptions and bodies, improving token efficiency while also improving skill quality.
Abstract
LLM-based coding agents rely on *skills*, pre-packaged instruction sets that extend agent capabilities, yet every token of skill content injected into the context window incurs both monetary cost and attention dilution. To understand the severity of this problem, we conduct a large-scale empirical study of 55,315 publicly available skills and find systemic inefficiencies: 26.4% lack routing descriptions entirely, over 60% of body content is non-actionable, and reference files can inject tens of thousands of tokens per invocation. Motivated by these findings, we present SkillReducer, a two-stage optimization framework. Stage 1 optimizes the routing layer by compressing verbose descriptions and generating missing ones via adversarial delta debugging. Stage 2 restructures skill bodies through taxonomy-driven classification and progressive disclosure, separating actionable core rules from supplementary content loaded on demand, validated by faithfulness checks and a self-correcting feedback loop. Evaluated on 600 skills and the SkillsBench benchmark, SkillReducer achieves 48% description compression and 39% body compression while improving functional quality by 2.8%, revealing a *less-is-more* effect where removing non-essential content reduces distraction in the context window. These benefits transfer across five models from four families with a mean retention of 0.965, and generalize to an independent agent framework.
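The Stage 1 idea of compressing a skill description by delta debugging can be sketched in a few lines. This is a generic greedy delta-debugging loop, not the paper's actual implementation; `still_routes` is a hypothetical stand-in for the adversarial check that a shortened description still routes the skill correctly (in practice an LLM-based test, mocked here with a keyword condition):

```python
def still_routes(description: str) -> bool:
    # Hypothetical routing oracle: the paper uses an adversarial
    # LLM-based check; here we simply require a keyword to survive.
    return "git rebase" in description

def compress(sentences: list[str], check) -> list[str]:
    """Greedily drop sentences whose removal keeps the check passing."""
    kept = list(sentences)
    i = 0
    while i < len(kept):
        candidate = kept[:i] + kept[i + 1:]
        if check(" ".join(candidate)):
            kept = candidate  # sentence was non-essential; drop it
        else:
            i += 1            # sentence is needed for routing; keep it
    return kept

desc = [
    "This skill helps with git rebase workflows.",
    "It was written in 2024.",
    "It supports interactive mode.",
]
print(compress(desc, still_routes))
# → ['This skill helps with git rebase workflows.']
```

Each sentence is removed tentatively and kept out only if the routing check still passes, so the result is a locally minimal description with respect to the oracle.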