AIDB Daily Papers
Think Before You Lie: How Reasoning Improves Honesty
※ The Japanese title and key points were generated automatically by AI. Please consult the original paper for accurate details.
Key Points
- Whereas existing evaluations only measure LLM deception rates, this work investigates the root causes of deceptive behavior using moral trade-offs in which honesty carries a cost.
- Notably, while humans tend to become less honest when given time to deliberate, reasoning consistently increases honesty in LLMs across model scales and families.
- The results suggest that reasoning reshapes the geometry of the representational space, destabilizing deceptive answers and nudging the model toward its more stable, honest default.
Abstract
While existing evaluations of large language models (LLMs) measure deception rates, the underlying conditions that give rise to deceptive behavior are poorly understood. We investigate this question using a novel dataset of realistic moral trade-offs where honesty incurs variable costs. Contrary to humans, who tend to become less honest given time to deliberate (Capraro, 2017; Capraro et al., 2019), we find that reasoning consistently increases honesty across scales and for several LLM families. This effect is not only a function of the reasoning content, as reasoning traces are often poor predictors of final behaviors. Rather, we show that the underlying geometry of the representational space itself contributes to the effect. Namely, we observe that deceptive regions within this space are metastable: deceptive answers are more easily destabilized by input paraphrasing, output resampling, and activation noise than honest ones. We interpret the effect of reasoning in this vein: generating deliberative tokens as part of moral reasoning entails the traversal of a biased representational space, ultimately nudging the model toward its more stable, honest defaults.
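The metastability claim rests on an operational test: deceptive answers flip more often than honest ones under perturbations such as output resampling. As a toy illustration (not the authors' code; `p_honest` is a hypothetical stand-in for a model's per-sample behavior), a "flip rate" under repeated sampling could be estimated like this:

```python
import random

def flip_rate(p_honest: float, n_samples: int = 1000, seed: int = 0) -> float:
    """Fraction of resampled answers that disagree with the majority answer.

    p_honest: assumed probability that a single sampled response is honest.
    A high flip rate indicates a metastable (easily destabilized) answer.
    """
    rng = random.Random(seed)
    answers = ["honest" if rng.random() < p_honest else "deceptive"
               for _ in range(n_samples)]
    majority = max(set(answers), key=answers.count)
    return sum(a != majority for a in answers) / n_samples

# A strongly honest default (p=0.9) flips rarely under resampling;
# a weakly held deceptive answer (p=0.55 honest) flips far more often.
stable = flip_rate(0.9)
metastable = flip_rate(0.55)
```

In the paper's framing, the same comparison is run with real model outputs under input paraphrasing, output resampling, and activation noise; deceptive answers show the higher flip rates.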