AIDB Daily Papers
The Emergence of Intellectual Elites in LLM Multi-Agent Systems: Power Laws Lurking in Collective Cognition
※ The Japanese title and key points were automatically generated by AI. Please refer to the original paper for accurate details.
Key Points
- The authors conduct a large-scale analysis of coordination dynamics in LLM multi-agent systems, reconstructing reasoning at the atomic event level as cascades of coordination.
- They uncover three laws: coordination follows power-law (heavy-tailed) cascades, concentrates into intellectual elites, and produces increasingly frequent extreme events as system size grows.
- They introduce Deficit-Triggered Integration (DTI), which relieves the integration bottleneck; it improves performance precisely where coordination fails, without suppressing large-scale reasoning.
Abstract
Large Language Model (LLM) multi-agent systems are increasingly deployed as interacting agent societies, yet scaling these systems often yields diminishing or unstable returns, the causes of which remain poorly understood. We present the first large-scale empirical study of coordination dynamics in LLM-based multi-agent systems, introducing an atomic event-level formulation that reconstructs reasoning as cascades of coordination. Analyzing over 1.5 million interactions across tasks, topologies, and scales, we uncover three coupled laws: coordination follows heavy-tailed cascades, concentrates via preferential attachment into intellectual elites, and produces increasingly frequent extreme events as system size grows. We show that these effects are coupled through a single structural mechanism: an integration bottleneck, in which coordination expansion scales with system size while consolidation does not, producing large but weakly integrated reasoning processes. To test this mechanism, we introduce Deficit-Triggered Integration (DTI), which selectively increases integration under imbalance. DTI improves performance precisely where coordination fails, without suppressing large-scale reasoning. Together, our results establish quantitative laws of collective cognition and identify coordination structure as a fundamental, previously unmeasured axis for understanding and improving scalable multi-agent intelligence.
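The abstract describes DTI as a mechanism that "selectively increases integration under imbalance." A rough way to picture this is as a threshold controller that watches the balance between coordination expansion and consolidation. The sketch below is an illustrative assumption only; the class names, the deficit metric, and the 0.8 threshold are invented for this example and are not the paper's implementation.

```python
# Hypothetical sketch of a Deficit-Triggered Integration (DTI)-style controller.
# All names and the threshold value are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class CoordinationState:
    expansion_events: int = 0    # reasoning branches opened by agents
    integration_events: int = 0  # consolidation steps merging branches

    def deficit(self) -> float:
        """Fraction of events that are un-consolidated expansion.

        0.0 means every event was an integration step; values near 1.0
        indicate a large but weakly integrated reasoning process.
        """
        total = self.expansion_events + self.integration_events
        if total == 0:
            return 0.0
        return self.expansion_events / total


def should_integrate(state: CoordinationState, threshold: float = 0.8) -> bool:
    """Trigger an extra integration step only when the expansion/
    consolidation balance is skewed past the threshold (the
    'integration bottleneck' case), leaving balanced reasoning alone."""
    return state.deficit() > threshold


# Example: 9 expansions vs 1 integration -> deficit 0.9 -> trigger.
skewed = CoordinationState(expansion_events=9, integration_events=1)
balanced = CoordinationState(expansion_events=1, integration_events=1)
print(should_integrate(skewed))    # True
print(should_integrate(balanced))  # False
```

The point of the sketch is the selectivity: integration is added only when the deficit is measured, which matches the abstract's claim that DTI acts "precisely where coordination fails" without suppressing large-scale reasoning elsewhere.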