AIDB Daily Papers
CoMMET: How Far Can LLMs Perform Theory of Mind Tasks?
Note: The title and key points below were automatically generated by AI. Please refer to the original paper for accurate details.
Key Points
- Proposes CoMMET, a new multimodal benchmark for evaluating Theory of Mind, a cornerstone of human social intelligence, in LLMs.
- Whereas existing evaluations are biased toward text-only inputs and belief-related tasks, CoMMET broadens the scope of evaluation by covering diverse mental states and introducing multi-turn conversations.
- Through an evaluation of a variety of LLMs, the paper analyzes the strengths and limitations of current models and suggests directions for future improvement.
Abstract
Theory of Mind (ToM), the ability to reason about the mental states of oneself and others, is a cornerstone of human social intelligence. As Large Language Models (LLMs) become ubiquitous in real-world applications, validating their capacity for this level of social reasoning is essential for effective and natural interactions. However, existing benchmarks for assessing ToM in LLMs are limited; most rely solely on text inputs and focus narrowly on belief-related tasks. In this paper, we propose a new multimodal benchmark dataset, CoMMET, a Comprehensive Mental states and Moral Evaluation Task inspired by the Theory of Mind Booklet Task. CoMMET expands the scope of evaluation by covering a broader range of mental states and introducing multi-turn testing. To the best of our knowledge, this is the first multimodal dataset to evaluate ToM in a multi-turn conversational setting. Through a comprehensive assessment of LLMs across different families and sizes, we analyze the strengths and limitations of current models and identify directions for future improvement. Our work offers a deeper understanding of the social cognitive capabilities of modern LLMs.