AIDB Daily Papers
Are LLM Counterarguments Cognitive Offloading for Older Adults, or a Vulnerability to Moral Persuasion?
Note: The Japanese title and key points are automatically generated by AI. Please refer to the original paper for accurate details.
Key Points
- The study investigated whether counterarguments generated by large language models (LLMs) influence the moral judgments of younger and older adults.
- LLM counterarguments led a substantial share of participants to reverse their moral judgments, with older adults tending to be more susceptible.
- While LLMs could serve as tools that compensate for cognitive decline, the results also suggest a risk of undue persuasion for cognitively vulnerable individuals.
Abstract
This study examined whether counterarguments generated by large language models (LLMs) influence the moral judgments of younger and older adults and whether these effects vary as a function of dilemma type, cognitive functioning, trust in AI, and prior experience using LLMs. Using the switch and footbridge trolley dilemmas, 130 participants (56 younger adults and 74 older adults) were presented with ChatGPT arguments that opposed their initial judgments. Results revealed that more than 30% of participants reversed their moral judgments in both dilemmas (32.31% in the switch dilemma and 36.92% in the footbridge dilemma), suggesting that LLMs possess substantial persuasive power. Older adults tended to be more likely than younger adults to reverse their judgments, and they showed a significantly greater degree of judgment change in the switch dilemma. Notably, in the emotionally aversive footbridge dilemma, older adults with lower cognitive functioning were significantly more likely to align with the LLM-generated counterargument. General trust in AI and prior experience with LLMs did not predict judgment reversal, supporting a disconnect between trust and persuasion. Instead, individual factors such as lower initial confidence and higher perceived task difficulty were associated with greater susceptibility to AI influence. These findings suggest that, although LLMs may serve as tools for cognitive offloading that compensate for age-related cognitive decline, they may also pose a risk of undue persuasion for cognitively vulnerable individuals.