AIDB Daily Papers
CogBias: Measuring and Mitigating Cognitive Biases in Large Language Models
Note: The Japanese title and key points are automatically generated by AI. Please consult the original paper for accurate details.
Key Points
- Builds LLM CogBias, a benchmark for measuring cognitive biases in LLMs, evaluating four bias families: Judgment, Information Processing, Social, and Response.
- The novel contribution is showing that LLMs' cognitive biases are encoded in their internal representations, suggesting they can be mitigated through targeted intervention (see the probe sketch after this list).
- Activation steering reduces bias by 26–32% while preserving downstream task performance, and the results suggest a shared functional organization across models.
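To make the "internal representations" point concrete, the paper fits linear probes on model activations under a contrastive design. The sketch below is only illustrative: the layer choice, activation extraction, and probe hyperparameters are assumptions, not the paper's exact setup.

```python
# Minimal sketch of a linear probe under a contrastive design (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_bias_probe(acts_biased, acts_unbiased):
    """Fit a linear probe separating activations of biased vs. unbiased responses.

    acts_biased, acts_unbiased: arrays of shape (n_examples, hidden_dim),
    e.g. residual-stream activations at one layer (hypothetical preprocessing).
    Returns the probe and a unit-norm candidate "bias direction".
    """
    X = np.vstack([acts_biased, acts_unbiased])
    y = np.concatenate([np.ones(len(acts_biased)), np.zeros(len(acts_unbiased))])
    probe = LogisticRegression(max_iter=1000).fit(X, y)
    direction = probe.coef_[0]
    direction /= np.linalg.norm(direction)
    return probe, direction
```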
Abstract
Large Language Models (LLMs) are increasingly deployed in high-stakes decision-making contexts. While prior work has shown that LLMs exhibit cognitive biases behaviorally, whether these biases correspond to identifiable internal representations and can be mitigated through targeted intervention remains an open question. We define LLM cognitive bias as systematic, reproducible deviations from correct answers in tasks with computable ground-truth baselines, and introduce LLM CogBias, a benchmark organized around four families of cognitive biases: Judgment, Information Processing, Social, and Response. We evaluate three LLMs and find that cognitive biases emerge systematically across all four families, with magnitudes and debiasing responses that are strongly family-dependent: prompt-level debiasing substantially reduces Response biases but backfires for Judgment biases. Using linear probes under a contrastive design, we show that these biases are encoded as linearly separable directions in model activation space. Finally, we apply activation steering to modulate biased behavior, achieving 26–32% reduction in bias score (fraction of biased responses) while preserving downstream capability on 25 benchmarks (Llama: negligible degradation; Qwen: up to −19.0pp for Judgment biases). Despite near-orthogonal bias representations across models (mean cosine similarity 0.01), steering reduces bias at similar rates across architectures (r(246) = .621, p < .001), suggesting shared functional organization.
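The abstract's final step, activation steering, amounts to shifting a layer's hidden states along (or away from) a learned bias direction at inference time. A minimal sketch follows, assuming a Llama-style Hugging Face model whose decoder layers live at model.model.layers, a precomputed unit-norm bias_direction (e.g. from a probe as above), and a manually chosen layer index and strength alpha; none of these specifics come from the paper.

```python
# Minimal sketch of activation steering via a forward hook (assumptions noted above).
import torch

def add_steering_hook(model, layer_idx, bias_direction, alpha=-4.0):
    """Register a forward hook that shifts one layer's output along bias_direction.

    A negative alpha pushes hidden states away from the bias direction,
    which is the intuition behind steering bias *down*.
    """
    direction = torch.as_tensor(bias_direction, dtype=model.dtype, device=model.device)

    def hook(module, inputs, output):
        # Decoder layers may return a tuple; the hidden states are the first element.
        hidden = output[0] if isinstance(output, tuple) else output  # (batch, seq, hidden)
        hidden = hidden + alpha * direction
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden

    return model.model.layers[layer_idx].register_forward_hook(hook)

# Hypothetical usage: handle = add_steering_hook(model, layer_idx=20, bias_direction=d)
# ...generate as usual, then handle.remove() to restore the unsteered model.
```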