AIDB Daily Papers
Neural Synchrony between LLMs: Can AI Acquire Sociality?
Note: The title and key points below were auto-generated by AI. Please consult the original paper for accurate details.
Key Points
- Introduces a new metric for evaluating the sociality of LLMs by analyzing neural synchrony among multiple interacting LLMs.
- Neural synchrony during social simulation reflects LLMs' social engagement and temporal alignment, serving as a proxy measure of sociality.
- Neural synchrony between LLMs correlates strongly with their social performance, suggesting similarities in the internal dynamics underlying human and LLM social interaction.
Abstract
Neuroscience has uncovered a fundamental mechanism of our social nature: human brain activity becomes synchronized with others in many social contexts involving interaction. Traditionally, social minds have been regarded as an exclusive property of living beings. Although large language models (LLMs) are widely accepted as powerful approximations of human behavior, with multi-LLM systems being extensively explored to enhance their capabilities, it remains controversial whether they can be meaningfully compared to human social minds. In this work, we explore neural synchrony between socially interacting LLMs as empirical evidence for this debate. Specifically, we introduce neural synchrony during social simulations as a novel proxy for analyzing the sociality of LLMs at the representational level. Through carefully designed experiments, we demonstrate that it reliably reflects both social engagement and temporal alignment in their interactions. Our findings indicate that neural synchrony between LLMs is strongly correlated with their social performance, highlighting an important link between neural synchrony and the social behaviors of LLMs. Our work offers a new perspective to examine the "social minds" of LLMs, highlighting surprising parallels in the internal dynamics that underlie human and LLM social interaction.
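The abstract does not specify how synchrony is computed. As a purely illustrative sketch (not the paper's actual metric), one could treat each agent's per-turn hidden states as a multivariate time series and score synchrony as the mean per-dimension Pearson correlation between the two agents' trajectories; the function `neural_synchrony` below and its inputs are hypothetical:

```python
import numpy as np

def neural_synchrony(states_a, states_b):
    """Toy synchrony score between two agents' hidden-state trajectories.

    states_a, states_b: arrays of shape (turns, hidden_dim), one row per
    dialogue turn. Returns the mean per-dimension Pearson correlation
    across turns (an assumption; the paper's metric may differ).
    """
    a = np.asarray(states_a, dtype=float)
    b = np.asarray(states_b, dtype=float)
    # Center each hidden dimension's time series over dialogue turns.
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    # Per-dimension Pearson correlation, averaged over dimensions.
    num = (a * b).sum(axis=0)
    denom = np.sqrt((a ** 2).sum(axis=0) * (b ** 2).sum(axis=0)) + 1e-12
    return float((num / denom).mean())

rng = np.random.default_rng(0)
shared = rng.normal(size=(10, 8))  # a shared "conversation-driven" signal
# Two agents tracking the same conversation should score higher than
# two independent agents.
sync_pair = neural_synchrony(shared + 0.1 * rng.normal(size=(10, 8)),
                             shared + 0.1 * rng.normal(size=(10, 8)))
indep_pair = neural_synchrony(rng.normal(size=(10, 8)),
                              rng.normal(size=(10, 8)))
```

In this toy setup the correlated pair yields a score near 1 while the independent pair stays near 0, mirroring the intuition that interacting agents share internal dynamics.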