AIDB Daily Papers
Federation over Text: Insight Sharing for Multi-Agent Reasoning
※ The title and key points below were generated automatically by AI. Please refer to the original paper for accurate details.
Key Points
- Proposes a framework in which multiple AI agents reason independently on their own tasks, share their reasoning processes, and jointly build a shared insight library.
- Its novelty lies in aggregating and distilling reasoning traces at the semantic level, without gradient optimization or supervision signals, enabling knowledge sharing across tasks.
- In applications such as mathematical problem solving and machine learning research, it improved accuracy by 24% on average while reducing reasoning tokens by 28%.
Abstract
LLM-powered agents often reason from scratch when presented with a new problem instance and lack automatic mechanisms to transfer learned skills to other agents. We propose a federated-learning-like framework, Federation over Text (FoT), that enables multiple agents solving different tasks to collectively generate a shared library of metacognitive insights by iteratively federating their local reasoning processes. Instead of federating over gradients (e.g., as in distributed training), FoT operates at the semantic level without any gradient optimization or supervision signal. In each iteration, every agent performs local thinking and self-improvement on its specific tasks independently, and shares reasoning traces with a central server, which aggregates and distills them into a cross-task (and cross-domain) insight library that existing and future agents can leverage to improve performance on related tasks. Experiments show that FoT improves reasoning effectiveness and efficiency across a wide range of challenging applications, including mathematical problem solving, cross-domain collaboration, and machine learning research insight discovery. Specifically, it improves average accuracies of downstream tasks by 24% while reducing reasoning tokens by 28% across the first two applications. In the research insight discovery application, FoT is able to generate insights that cover over 90% of the major contributions in the subsequent papers.
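The iterative loop the abstract describes (local reasoning, trace sharing, server-side distillation into a shared insight library) can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: all names (`InsightLibrary`, `local_round`, `distill`) are hypothetical, and plain strings stand in for the LLM reasoning traces that real agents would produce.

```python
from dataclasses import dataclass, field

@dataclass
class InsightLibrary:
    """Central, cross-task insight store maintained by the server."""
    insights: set = field(default_factory=set)

    def merge(self, distilled):
        # Aggregate new distilled insights; duplicates collapse automatically.
        self.insights.update(distilled)

def local_round(task, library):
    """Agent-side step: reason locally, optionally reusing shared insights."""
    reused = [i for i in library.insights if task["domain"] in i]
    return f"[{task['domain']}] solved {task['name']} using {len(reused)} shared insights"

def distill(traces):
    """Server-side step: reduce raw traces to reusable semantic insights."""
    return {trace.split("]")[0] + "] heuristic learned" for trace in traces}

library = InsightLibrary()
tasks = [{"domain": "math", "name": "t1"}, {"domain": "ml", "name": "t2"}]
for _ in range(2):  # iterative federation: think locally, share, distill
    traces = [local_round(t, library) for t in tasks]
    library.merge(distill(traces))

print(sorted(library.insights))
```

After the first round the library holds one distilled insight per domain, and in later rounds agents consult it before reasoning, which is the mechanism the paper credits for the accuracy gain and token reduction.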