AIDB Daily Papers
The LLM Fallacy: Misattribution in AI-Assisted Cognitive Workflows
Note: The title and key points below are auto-generated by AI. Please consult the original paper for accurate details.
Key Points
- The study proposes the "LLM fallacy," a cognitive bias in which individuals misattribute LLM-assisted outputs to their own independent competence.
- Crucially, the opacity and fluency of LLMs blur the boundary between human and AI contributions, leading to mistaken perceptions of one's own ability.
- The paper examines implications for education, hiring, and AI literacy, suggesting that LLMs reshape not only cognitive performance but also self-perception.
Abstract
The rapid integration of large language models (LLMs) into everyday workflows has transformed how individuals perform cognitive tasks such as writing, programming, analysis, and multilingual communication. While prior research has focused on model reliability, hallucination, and user trust calibration, less attention has been given to how LLM usage reshapes users' perceptions of their own capabilities. This paper introduces the LLM fallacy, a cognitive attribution error in which individuals misinterpret LLM-assisted outputs as evidence of their own independent competence, producing a systematic divergence between perceived and actual capability. We argue that the opacity, fluency, and low-friction interaction patterns of LLMs obscure the boundary between human and machine contribution, leading users to infer competence from outputs rather than from the processes that generate them. We situate the LLM fallacy within existing literature on automation bias, cognitive offloading, and human--AI collaboration, while distinguishing it as a form of attributional distortion specific to AI-mediated workflows. We propose a conceptual framework of its underlying mechanisms and a typology of manifestations across computational, linguistic, analytical, and creative domains. Finally, we examine implications for education, hiring, and AI literacy, and outline directions for empirical validation. We also provide a transparent account of human--AI collaborative methodology. This work establishes a foundation for understanding how generative AI systems not only augment cognitive performance but also reshape self-perception and perceived expertise.