AIDB Daily Papers
Evaluating LLMs by Individual Preference: A Proposal for Personalized Benchmarks
※ The Japanese title and key points were generated automatically by AI. Please consult the original paper for accurate details.
Key Points
- Proposes an LLM evaluation method that reflects each individual user's preferences, and points out the limitations of conventional averaged evaluations.
- Analyzes how users' topical interests and writing styles affect LLM evaluation, demonstrating the importance of personalized evaluation.
- Shows that LLM rankings tailored to individual preferences can be predicted, offering implications for future LLM development.
Abstract
With the rise in capabilities of large language models (LLMs) and their deployment in real-world tasks, evaluating LLM alignment with human preferences has become an important challenge. Current benchmarks average preferences across all users to compute aggregate ratings, overlooking individual user preferences when establishing model rankings. Since users have varying preferences in different contexts, we call for personalized LLM benchmarks that rank models according to individual needs. We compute personalized model rankings using ELO ratings and Bradley-Terry coefficients for 115 active Chatbot Arena users and analyze how user query characteristics (topics and writing style) relate to LLM ranking variations. We demonstrate that individual rankings of LLM models diverge dramatically from aggregate LLM rankings, with Bradley-Terry correlations averaging only $\rho = 0.04$ (57% of users show near-zero or negative correlation) and ELO ratings showing moderate correlation ($\rho = 0.43$). Through topic modeling and style analysis, we find users exhibit substantial heterogeneity in topical interests and communication styles, influencing their model preferences. We further show that a compact combination of topic and style features provides a useful feature space for predicting user-specific model rankings. Our results provide strong quantitative evidence that aggregate benchmarks fail to capture individual preferences for most users, and highlight the importance of developing personalized benchmarks that rank LLM models according to individual user preferences.
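The per-user analysis described in the abstract can be sketched in a few lines: fit ELO ratings from one user's pairwise battles, then compare the resulting ranking to an aggregate ranking with Spearman's $\rho$. This is a minimal illustration, not the paper's implementation; the model names, the ELO `k`-factor, and the battle data below are hypothetical, and the paper's exact ELO/Bradley-Terry setup may differ.

```python
from collections import defaultdict

def elo_ratings(battles, k=32, base=1500):
    """Sequential ELO update over a list of (winner, loser) battles."""
    ratings = defaultdict(lambda: float(base))
    for winner, loser in battles:
        # Expected win probability of the winner under the ELO model
        exp_win = 1.0 / (1.0 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
        ratings[winner] += k * (1 - exp_win)
        ratings[loser] -= k * (1 - exp_win)
    return dict(ratings)

def spearman(xs, ys):
    """Spearman rank correlation (no tie correction) between two score lists."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical data: one user's battles vs. aggregate scores for three models
user_battles = [("model_a", "model_b"), ("model_a", "model_c"), ("model_c", "model_b")]
user_elo = elo_ratings(user_battles)
models = sorted(user_elo)
aggregate_scores = {"model_a": 1200.0, "model_b": 1300.0, "model_c": 1250.0}
rho = spearman([user_elo[m] for m in models], [aggregate_scores[m] for m in models])
```

A near-zero or negative `rho` for many users is exactly the divergence the paper reports between individual and aggregate rankings.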