AIDB Daily Papers
The Latent Structure of Emotion Representations in Large Language Models: A New Perspective on Transparency and Safety
※ The title and key points below were automatically generated by AI. Please consult the original paper for the exact content.
Key Points
- Shows that large language models (LLMs) acquire latent representations of emotion consistent with the valence-arousal model from psychology.
- Although the LLMs' emotion representations are nonlinear, they admit good linear approximations, supporting the linear representation hypothesis assumed by model-transparency methods.
- Demonstrates that the learned latent representation space can be leveraged to quantify uncertainty in emotion-processing tasks.
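The second point — nonlinear structure that is nonetheless well-approximated linearly — can be illustrated with a toy sketch. This is not the paper's experimental setup: the hidden states, dimensions, and the mild nonlinearity below are all synthetic assumptions, used only to show how a least-squares linear probe can recover valence-arousal coordinates when affect is approximately linearly encoded.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, n = 64, 500
# Synthetic "ground-truth" valence-arousal labels in [-1, 1]^2 (hypothetical data).
va = rng.uniform(-1, 1, size=(n, 2))
# Simulated hidden states: a linear embedding of the labels plus a mild
# nonlinearity and noise, mimicking "nonlinear but nearly linear" geometry.
A = rng.normal(size=(2, d_model))
h = va @ A + 0.1 * np.tanh(va @ A) + 0.05 * rng.normal(size=(n, d_model))

# Fit a linear probe by least squares: va ≈ h @ W.
W, *_ = np.linalg.lstsq(h, va, rcond=None)
pred = h @ W

# R^2 of the linear approximation; values near 1 indicate the affective
# dimensions are linearly decodable from the hidden states.
ss_res = ((va - pred) ** 2).sum()
ss_tot = ((va - va.mean(axis=0)) ** 2).sum()
r2 = 1 - ss_res / ss_tot
print(f"linear probe R^2: {r2:.3f}")
```

In this toy setting the probe fits almost perfectly because the generating map is nearly linear; the paper's finding is the empirical analogue for real LLM representations.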
Abstract
The geometric structure of latent representations in large language models (LLMs) is an active area of research, driven in part by its implications for model transparency and AI safety. Existing literature has focused mainly on general geometric and topological properties of the learnt representations, but due to a lack of ground-truth latent geometry, validating the findings of such approaches is challenging. Emotion processing provides an intriguing testbed for probing representational geometry, as emotions exhibit both categorical organization and continuous affective dimensions, which are well-established in the psychology literature. Moreover, understanding such representations carries safety relevance. In this work, we investigate the latent structure of affective representations in LLMs using geometric data analysis tools. We present three main findings. First, we show that LLMs learn coherent latent representations of affective emotions that align with widely used valence--arousal models from psychology. Second, we find that these representations exhibit nonlinear geometric structure that can nonetheless be well-approximated linearly, providing empirical support for the linear representation hypothesis commonly assumed in model transparency methods. Third, we demonstrate that the learned latent representation space can be leveraged to quantify uncertainty in emotion processing tasks. Our findings suggest that LLMs acquire affective representations with geometric structure paralleling established models of human emotion, with practical implications for model interpretability and safety.
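The third finding — using the latent space to quantify uncertainty — can be sketched with a simple geometric heuristic. This is an illustrative assumption, not the paper's method: latent points, centroids, and the distance-softmax rule are all made up here, showing only the general idea that position in a structured latent space can signal how ambiguous an emotion judgment is.

```python
import numpy as np

# Hypothetical 2-D latent centroids for three emotion categories.
centroids = np.array([[1.0, 1.0], [-1.0, 1.0], [0.0, -1.0]])

def distance_uncertainty(x, centroids):
    """Softmax over negative distances to centroids; entropy of the
    resulting distribution serves as an uncertainty score."""
    d = np.linalg.norm(centroids - x, axis=1)
    p = np.exp(-d) / np.exp(-d).sum()
    entropy = -(p * np.log(p)).sum()
    return p, entropy

# A point at a centroid should score low uncertainty...
_, h_near = distance_uncertainty(np.array([1.0, 1.0]), centroids)
# ...while a point roughly equidistant from all centroids scores high.
_, h_far = distance_uncertainty(np.array([0.0, 0.2]), centroids)
print(f"near-centroid entropy: {h_near:.3f}, ambiguous-point entropy: {h_far:.3f}")
```

The design choice here is deliberate: entropy over a distance-based distribution is maximal (ln 3 for three categories) exactly when the latent point is ambiguous between all categories, which is the behavior one wants from an uncertainty measure.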