AIDB Daily Papers
Reasoning Promotes Opinion Alignment in LLMs
* The Japanese title and key points were automatically generated by AI. Please consult the original paper for accurate details.
Key points
- A study aimed at improving opinion modeling with large language models (LLMs).
- Because of their statistical nature and limited causal understanding, LLMs tend to produce biased opinions, making improvement through reasoning important.
- The authors train models to generate profile-consistent answers through structured reasoning, suggesting a contribution toward building political digital twins.
Abstract
Opinion modeling aims to capture individual or group political preferences, enabling applications such as digital democracies, where models could help shape fairer and more popular policies. Given their versatility, strong generalization capabilities, and demonstrated success across diverse text-to-text applications, large language models (LLMs) are natural candidates for this task. However, due to their statistical nature and limited causal understanding, they tend to produce biased opinions when prompted naively. In this work, we study whether reasoning can improve opinion alignment. Motivated by the recent advancement in mathematical reasoning enabled by reinforcement learning (RL), we train models to produce profile-consistent answers through structured reasoning. We evaluate our approach on three datasets covering U.S., European, and Swiss politics. Results indicate that reasoning enhances opinion modeling and is competitive with strong baselines, but does not fully remove bias, highlighting the need for additional mechanisms to build faithful political digital twins using LLMs. By releasing both our method and datasets, we establish a solid baseline to support future research on LLM opinion alignment.