AIDB Daily Papers
CulturALL: a benchmark for measuring LLMs' multilingual and multicultural abilities on real-world tasks
Note: the title and key points were automatically generated by AI. Please consult the original paper for accurate details.
Key points
- Introduces CulturALL, a benchmark for evaluating LLMs' multilingual and multicultural competence on tasks grounded in the real world.
- Important and novel in that it measures reasoning in context-rich, real-world scenarios, an aspect that existing benchmarks have lacked.
- Comprises 2,610 samples spanning 14 languages, 51 regions, and 16 topics; the best model reaches only 44.48% accuracy, indicating substantial room for improvement.
Abstract
Large language models (LLMs) are now deployed worldwide, inspiring a surge of benchmarks that measure their multilingual and multicultural abilities. However, these benchmarks prioritize generic language understanding or superficial cultural trivia, leaving the evaluation of grounded tasks -- where models must reason within real-world, context-rich scenarios -- largely unaddressed. To fill this gap, we present CulturALL, a comprehensive and challenging benchmark to assess LLMs' multilingual and multicultural competence on grounded tasks. CulturALL is built via a human--AI collaborative framework: expert annotators ensure appropriate difficulty and factual accuracy, while LLMs lighten the manual workload. By incorporating diverse sources, CulturALL ensures comprehensive scenario coverage. Each item is carefully designed to present a high level of difficulty, making CulturALL challenging. CulturALL contains 2,610 samples in 14 languages from 51 regions, distributed across 16 topics to capture the full breadth of grounded tasks. Experiments show that the best LLM achieves 44.48% accuracy on CulturALL, underscoring substantial room for improvement.
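The headline result (44.48% accuracy for the best LLM) is a simple exact-match accuracy over the benchmark items. A minimal sketch of that scoring loop is below; the item fields (`language`, `region`, `topic`, `answer`) and the `predict` callable are illustrative assumptions, not the paper's actual dataset schema or evaluation harness.

```python
# Hypothetical sketch of scoring a model on a CulturALL-style benchmark.
# Field names and the predict() interface are assumptions for illustration.

def accuracy(items, predict):
    """Fraction of items where the model's prediction matches the gold answer."""
    correct = sum(1 for item in items if predict(item) == item["answer"])
    return correct / len(items)

# Toy stand-ins for real benchmark samples (14 languages, 51 regions, 16 topics).
sample_items = [
    {"language": "ja", "region": "Japan", "topic": "food", "answer": "A"},
    {"language": "sw", "region": "Kenya", "topic": "festivals", "answer": "B"},
]

# A dummy "model" that always answers "A" scores 50% on this toy set.
print(accuracy(sample_items, lambda item: "A"))  # → 0.5
```

A real harness would also break accuracy down per language, region, and topic to locate where models fail, which is how such benchmarks typically report results.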