AIDB Daily Papers
Knowledge-Heavy AI: Measuring the Misalignment Between LLMs and Their Intended Impact
Note: The title and key points below were automatically generated by AI. Please refer to the original paper for accurate details.
ポイント
- Evaluates the performance of large language models (LLMs) in educational applications against that of human experts.
- Raises an important concern: LLMs that score highly on existing AI benchmarks do not necessarily produce good results in educational settings.
- Finds that biases shared across LLMs correlate negatively with teaching quality, and that ensembling multiple models worsens the misalignment rather than correcting it.
Abstract
LLMs increasingly excel on AI benchmarks, but doing so does not guarantee validity for downstream tasks. This study evaluates the performance of leading foundation models (FMs, i.e., generative pre-trained base LLMs) with out-of-distribution (OOD) tasks of the teaching and learning of schoolchildren. Across all FMs, inter-model behaviors on disparate tasks correlate higher than they do with expert human behaviors on target tasks. These biases shared across LLMs are poorly aligned with downstream measures of teaching quality and often negatively aligned with learning outcomes. Further, we find multi-model ensembles, both unanimous model voting and expert-weighting by benchmark performance, further exacerbate misalignment with learning. We measure that 50% of the variation in misalignment error is shared across foundation models, suggesting that common pretraining accounts for much of the misalignment in these tasks. We demonstrate methods for robustly measuring alignment of complex tasks and provide unique insights into both educational applications of foundation models and to understanding limitations of models.
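The abstract's core comparison, that models agree with each other more than with human experts, can be illustrated with a small sketch. The code below is not the paper's actual analysis; it is a minimal, hypothetical example of computing mean inter-model correlation versus mean model-expert correlation over toy score vectors, using Pearson correlation as an assumed stand-in for the paper's alignment measures.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two score vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.corrcoef(a, b)[0, 1])

def mean_inter_model_corr(model_scores):
    """Average pairwise correlation across models' score vectors."""
    n = len(model_scores)
    pairs = [pearson(model_scores[i], model_scores[j])
             for i in range(n) for j in range(i + 1, n)]
    return sum(pairs) / len(pairs)

def mean_model_expert_corr(model_scores, expert_scores):
    """Average correlation of each model's scores with expert scores."""
    return sum(pearson(m, expert_scores) for m in model_scores) / len(model_scores)

# Toy data: three hypothetical models rate five items similarly to one
# another, while the expert's ratings run in the opposite direction.
models = [[1, 2, 3, 4, 5],
          [1, 2, 3, 4, 6],
          [2, 2, 3, 4, 5]]
expert = [5, 4, 3, 2, 1]

inter = mean_inter_model_corr(models)
vs_expert = mean_model_expert_corr(models, expert)
print(f"inter-model: {inter:.2f}, model-expert: {vs_expert:.2f}")
```

On data like this, inter-model agreement is high while model-expert correlation is negative, mirroring (in miniature) the pattern the paper reports of shared model biases that are negatively aligned with learning outcomes.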