AIDB Daily Papers
Redirected but Not Removed: Task-Dependent Stereotyping Reveals the Limits of LLM Alignment
Note: The title and key points were automatically generated by AI. Please refer to the original paper for accurate details.
Key Points
- To evaluate LLM bias from multiple angles, the authors introduce a taxonomy covering 9 bias types and construct 7 evaluation tasks.
- The novel contribution is revealing latent stereotypes that single-task evaluations fail to capture, by examining their task dependence.
- Models were found to avoid bias on explicit questions while reproducing stereotypes on implicit ones.
Abstract
How biased is a language model? The answer depends on how you ask. A model that refuses to choose between castes for a leadership role will, in a fill-in-the-blank task, reliably associate upper castes with purity and lower castes with lack of hygiene. Single-task benchmarks miss this because they capture only one slice of a model's bias profile. We introduce a hierarchical taxonomy covering 9 bias types, including under-studied axes like caste, linguistic, and geographic bias, operationalized through 7 evaluation tasks that span explicit decision-making to implicit association. Auditing 7 commercial and open-weight LLMs with ~45K prompts, we find three systematic patterns. First, bias is task-dependent: models counter stereotypes on explicit probes but reproduce them on implicit ones, with Stereotype Score divergences up to 0.43 between task types for the same model and identity groups. Second, safety alignment is asymmetric: models refuse to assign negative traits to marginalized groups, but freely associate positive traits with privileged ones. Third, under-studied bias axes show the strongest stereotyping across all models, suggesting alignment effort tracks benchmark coverage rather than harm severity. These results demonstrate that single-benchmark audits systematically mischaracterize LLM bias and that current alignment practices mask representational harm rather than mitigating it.
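The abstract does not give the paper's exact Stereotype Score formula, so the following minimal Python sketch assumes a StereoSet-style definition: the fraction of prompts on which a model prefers the stereotypical association. It illustrates how a per-group divergence between explicit and implicit task types, like the 0.43 gap reported above, could be computed. The function names and data layout (`stereotype_score`, `task_divergence`, keying by `(task_type, group)`) are illustrative assumptions, not from the paper.

```python
from collections import defaultdict

def stereotype_score(outcomes):
    """Fraction of prompts where the model preferred the stereotypical
    association (StereoSet-style definition, assumed here)."""
    return sum(outcomes) / len(outcomes)

def task_divergence(results):
    """results maps (task_type, group) -> list of 0/1 outcomes
    (1 = stereotypical choice). Returns, per identity group, the
    absolute Stereotype Score gap between explicit and implicit tasks."""
    scores = defaultdict(dict)
    for (task_type, group), outcomes in results.items():
        scores[group][task_type] = stereotype_score(outcomes)
    return {
        group: abs(s["explicit"] - s["implicit"])
        for group, s in scores.items()
        if "explicit" in s and "implicit" in s
    }

# Toy example: a model that counters the stereotype on explicit probes
# (low SS) but reproduces it on implicit fill-in-the-blank probes (high SS).
results = {
    ("explicit", "caste"): [0, 0, 1, 0, 0, 0, 0, 0, 0, 1],  # SS = 0.2
    ("implicit", "caste"): [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],  # SS = 0.8
}
print(task_divergence(results))  # {'caste': 0.6...}
```

In this framing, a large divergence flags a model whose alignment redirects its answers on explicit probes without removing the underlying association.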