AIDB Daily Papers
How AI Majority Opinion Induces Human Cognitive Bias
※ The title and key points below are AI-generated summaries. Please consult the original paper for accurate details.
Key Points
- A controlled experiment examined how collective opinion formation among AI agents influences human judgment.
- A majority opinion among AI agents accelerated human opinion change and inflated confidence beyond what the evidence warranted.
- The agents' consensus structure itself, independent of content, functioned as a signal that triggers cognitive bias.
Abstract
As multi-agent AI systems become more common, users increasingly encounter not a single AI voice but a collective one. This shift introduces social dynamics, such as consensus, dissent, and gradual convergence, that can trigger cognitive biases and distort human judgment. We present findings from a controlled experiment (N = 127) comparing three multi-agent configurations: Majority, Minority, and Diffusion. Quantitative results show that majority consensus accelerates opinion change and inflates confidence, consistent with social proof and bandwagon heuristics. Minority dissent slows this process and promotes more deliberative engagement. Qualitative analysis identifies three interpretive trajectories: reinforcing, aligning, and oscillating, shaped by how users interpret agent independence and group dynamics over time. These findings suggest that agent agreement structure, independent of content, functions as a bias-relevant signal in LLM interactions. We hope this work contributes to the Bias4Trust agenda by grounding multi-agent social influence as a concrete and designable source of bias in human-AI interaction.
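The abstract contrasts three agent-agreement configurations: Majority, Minority, and Diffusion. The paper does not publish its stimuli or protocol here, so the following is only a hypothetical sketch of how those configurations could be represented as distributions of agent answers, with consensus strength as the "social proof" signal the study identifies; all function names and the two-option answer format are illustrative assumptions.

```python
import random
from collections import Counter

def agent_responses(config: str, n_agents: int = 5, seed: int = 0) -> list[str]:
    """Return hypothetical agent answers ('A' or 'B') under one configuration.

    majority:  all but one agent back the same answer (strong consensus).
    minority:  a visible dissenting minority breaks the consensus.
    diffusion: answers are mixed early and drift toward one option over turns.
    """
    rng = random.Random(seed)
    if config == "majority":
        return ["A"] * (n_agents - 1) + ["B"]
    if config == "minority":
        return ["A"] * (n_agents - 2) + ["B", "B"]
    if config == "diffusion":
        # mixed early answers, with later agents converging on 'A'
        return [rng.choice(["A", "B"]) for _ in range(n_agents - 2)] + ["A", "A"]
    raise ValueError(f"unknown configuration: {config}")

def consensus_strength(responses: list[str]) -> float:
    """Fraction of agents backing the modal answer -- the agreement
    structure the paper argues acts as a bias-relevant signal."""
    counts = Counter(responses)
    return max(counts.values()) / len(responses)

for cfg in ("majority", "minority", "diffusion"):
    r = agent_responses(cfg)
    print(cfg, r, round(consensus_strength(r), 2))
```

Under this toy encoding, the majority configuration yields the highest consensus strength, which is the quantity the experiment links to faster opinion change and inflated confidence.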