AIDB Daily Papers
Evaluating LLMs' Cognitive Abilities Neuropsychologically: A New Benchmark, "NeuroCognition"
Note: The title and key points were generated automatically by AI. Please refer to the original paper for accurate details.
Key Points
- Evaluated the cognitive abilities of large language models (LLMs) with NeuroCognition, a new benchmark grounded in neuropsychological tests.
- Whereas existing benchmarks are biased toward task completion, NeuroCognition measures foundational cognitive abilities, making it valuable for exposing LLMs' weaknesses.
- While LLMs perform well on text, their performance degrades on images and as complexity increases; simple, human-like cognitive strategies were shown to be effective.
Abstract
Large language models (LLMs) exhibit a unified "general factor" of capability across 10 benchmarks, a finding confirmed by our factor analysis of 156 models, yet they still struggle with simple tasks that are trivial for humans. This is because current benchmarks focus on task completion, failing to probe the foundational cognitive abilities that underlie these behaviors. We address this by introducing the NeuroCognition benchmark, grounded in three adapted neuropsychological tests: Raven's Progressive Matrices (abstract relational reasoning), Spatial Working Memory (maintenance and systematic search), and the Wisconsin Card Sorting Test (cognitive flexibility). Our evaluation reveals that while models perform strongly on text, their performance degrades on images and as complexity increases. Furthermore, we observe that complex reasoning is not universally beneficial, whereas simple, human-like strategies yield partial gains. We also find that NeuroCognition correlates positively with standard general-capability benchmarks, while still measuring distinct cognitive abilities beyond them. Overall, NeuroCognition highlights where current LLMs align with human-like intelligence and where they lack core adaptive cognition, showing the potential to serve as a verifiable, scalable source for improving LLMs.
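The abstract's opening claim rests on a factor analysis of a models-by-benchmarks score matrix. The sketch below illustrates what such a check can look like; it uses synthetic stand-in data (not the paper's data or code), a one-factor model from scikit-learn, and illustrative variable names throughout.

```python
# Minimal sketch of a "general factor" check, assuming a
# (num_models x num_benchmarks) score matrix. Synthetic data only;
# all names are illustrative, not the paper's actual pipeline.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in scores: 156 models x 10 benchmarks, generated from one latent
# ability plus benchmark-specific noise (the single-factor hypothesis).
latent_ability = rng.normal(size=(156, 1))
true_loadings = rng.uniform(0.6, 0.9, size=(1, 10))
scores = latent_ability @ true_loadings + 0.4 * rng.normal(size=(156, 10))

X = StandardScaler().fit_transform(scores)

# Fit a one-factor model: if a general factor exists, every benchmark
# should load highly on it.
fa = FactorAnalysis(n_components=1, random_state=0)
g_scores = fa.fit_transform(X)           # per-model general-factor score
loadings = fa.components_.ravel()        # per-benchmark loading on the factor

print("loadings:", np.round(loadings, 2))
# On standardized data, total variance ~ num_benchmarks, so the summed
# squared loadings approximate the variance the factor explains.
print("variance explained:", round(float((loadings**2).sum() / X.shape[1]), 2))
```

Under this reading, uniformly high loadings and a large share of explained variance support a general factor; the positive correlation the abstract reports between NeuroCognition and general-capability benchmarks could be probed the same way, by correlating the per-model factor scores with NeuroCognition subtest scores.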