AIDB Daily Papers
SkVM: Compile Skills Once, Run Them Efficiently Anywhere
※ The title and key points below are automatically generated by AI. Please consult the original paper for the exact content.
Key Points
- To improve skill reusability in LLM agents, the authors treat skills as code, drawing inspiration from compiler design.
- Portability is made actionable by decomposing a skill's requirements into primitive capabilities and measuring how well each model-environment combination supports them.
- SkVM improves task completion rates across different models and environments, reduces token consumption, and achieves speedups and lower latency through parallel execution.
Abstract
LLM agents increasingly adopt skills as a reusable unit of composition. While skills are shared across diverse agent platforms, current systems treat them as raw context, causing the same skill to behave inconsistently for different agents. This fragility undermines skill portability and execution efficiency. To address this challenge, we analyze 118,000 skills and draw inspiration from traditional compiler design. We treat skills as code and LLMs as heterogeneous processors. To make portability actionable, we decompose a skill's requirements into a set of primitive capabilities, and measure how well each model-harness pair supports them. Based on these capability profiles, we propose SkVM, a compilation and runtime system designed for portable and efficient skill execution. At compile time, SkVM performs capability-based compilation, environment binding, and concurrency extraction. At runtime, SkVM applies JIT code solidification and adaptive recompilation for performance optimization. We evaluate SkVM across eight LLMs of varying scales and three agent harnesses, covering SkillsBench and representative skill tasks. Results demonstrate that SkVM significantly improves task completion rates across different models and environments while reducing token consumption by up to 40%. In terms of performance, SkVM achieves up to 3.2x speedup with enhanced parallelism, and 19-50x latency reduction through code solidification.
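The capability-based matching described in the abstract — decomposing a skill's requirements into primitive capabilities and scoring each model-harness pair against them — can be illustrated with a minimal sketch. All names, capability labels, and the coverage-ratio scoring below are illustrative assumptions, not SkVM's actual implementation:

```python
# Hypothetical sketch of capability-based skill-to-target matching.
# Capability names and the scoring rule are assumptions for illustration.
from dataclasses import dataclass


@dataclass(frozen=True)
class CapabilityProfile:
    """Primitive capabilities one model-harness pair supports."""
    supported: frozenset


@dataclass(frozen=True)
class Skill:
    """A skill and the primitive capabilities it requires."""
    name: str
    required: frozenset


def portability(skill: Skill, profile: CapabilityProfile) -> float:
    """Fraction of the skill's required capabilities the target supports."""
    if not skill.required:
        return 1.0
    return len(skill.required & profile.supported) / len(skill.required)


def pick_target(skill: Skill, targets: dict) -> str:
    """Choose the model-harness pair that best covers the skill."""
    return max(targets, key=lambda name: portability(skill, targets[name]))


skill = Skill("web_scrape", frozenset({"http_fetch", "html_parse", "file_io"}))
targets = {
    "small-model/cli": CapabilityProfile(frozenset({"file_io"})),
    "large-model/ide": CapabilityProfile(
        frozenset({"http_fetch", "html_parse", "file_io"})
    ),
}
best = pick_target(skill, targets)  # the pair with full capability coverage
```

Under this toy scoring, `large-model/ide` covers all three required capabilities (score 1.0) while `small-model/cli` covers one of three, so compilation would bind the skill to the former.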