AIDB Daily Papers
BenchBench: A New Touchstone for Measuring How Well AI Can Automatically Generate Benchmarks
※ The title and key points were generated automatically by AI. Please refer to the original paper for accurate details.
Key Points
- Proposes BenchBench, a benchmark for evaluating the performance of automated benchmark generation, to address challenges in evaluating large language models (LLMs).
- Aims at more scalable evaluation that overcomes the limitations of existing static benchmarks and reduces LLM-judge bias and prompt sensitivity.
- Generates 16.7K items across diverse domains with rigorous validation and evaluation, and provides detailed analyses of item quality and of the correlation between benchmark-design ability and answering ability.
Abstract
Benchmarks are the de facto standard for tracking progress in large language models (LLMs), yet static test sets can rapidly saturate, become vulnerable to contamination, and are costly to refresh. Scalable evaluation of open-ended items often relies on LLM judges, introducing additional sources of bias and prompt sensitivity. We argue that evaluation must extend beyond how well models answer benchmarks to how well models design them. We introduce BenchBench, a three-stage pipeline and dataset for benchmarking automated benchmark generation: (i) extract structured domain cards from seed benchmarks, (ii) prompt multiple designer LLMs to generate quota-controlled suites, and (iii) validate items with a multi-model answerer panel using exact/numeric/symbolic verifiers when possible and rubric-guided judging otherwise, yielding designer–answerer matrices with item-level quality flags and psychometric diagnostics. Across nine variants spanning computer science, mathematics, medicine, and theory-of-mind reasoning (including multilingual and multimodal settings), we generate 16.7K items, retain ~15K core items post-filtering, and produce ~152K graded model–item responses. BenchBench shows that benchmark-design ability is only moderately correlated with answer-time strength (Spearman rho ~0.37), invalidity is negatively associated with discrimination (Pearson r ~ -0.62), and the resulting designer–answerer matrices enable scalable audits of format/modality/language fidelity and suite-dependent self/family interactions. The project is available at: https://github.com/koanatakiyo/BenchBench.
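To make the three-stage pipeline in the abstract more concrete, here is a minimal sketch in Python. Everything below is an illustrative assumption: the data structures, the `verify` and `grade_matrix` helpers, and the toy grading logic are not the actual BenchBench implementation (see the GitHub repository for that); the sketch only shows how a designer's generated suite could be graded by an answerer panel with programmatic verifiers where possible and a rubric-judge fallback otherwise.

```python
"""Hypothetical sketch of the grading stage described in the abstract.
Names and logic are illustrative assumptions, not the BenchBench code;
see https://github.com/koanatakiyo/BenchBench for the real implementation."""

from dataclasses import dataclass


@dataclass
class Item:
    prompt: str
    reference: str   # gold answer (or a rubric for open-ended items)
    verifier: str    # "exact", "numeric", or "rubric"


def verify(response: str, item: Item) -> float:
    """Grade one response: programmatic check when possible,
    rubric-guided LLM judging otherwise (stubbed out here)."""
    if item.verifier == "exact":
        return float(response.strip() == item.reference.strip())
    if item.verifier == "numeric":
        try:
            return float(abs(float(response) - float(item.reference)) < 1e-6)
        except ValueError:
            return 0.0
    # Rubric-guided judging would call an LLM judge; stubbed as 0.0 here.
    return 0.0


def grade_matrix(suites, answerers):
    """Build a designer-answerer matrix of mean scores: every designer's
    generated suite is answered by every model in the answerer panel."""
    matrix = {}
    for designer, items in suites.items():
        for name, answer_fn in answerers.items():
            scores = [verify(answer_fn(it.prompt), it) for it in items]
            matrix[(designer, name)] = sum(scores) / len(scores)
    return matrix


if __name__ == "__main__":
    # Toy example: one designer suite, one trivial "answerer".
    suites = {"designer_A": [Item("2 + 2 = ?", "4", "numeric")]}
    answerers = {"answerer_X": lambda prompt: "4"}
    print(grade_matrix(suites, answerers))
```

The resulting matrix is the kind of designer–answerer table on which the paper's downstream analyses (item-level quality flags, discrimination, and the design-vs-answering correlation) would be computed.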