AIDB Daily Papers
Does Collaborating with AI Increase Human Responsibility? Examining the AI-Induced Human Responsibility (AIHR) Effect
Note: the title and key points below are AI-generated summaries. Please consult the original paper for accurate details.
Key Points
- Focusing on the ambiguity of responsibility attribution when AI is deployed as a teammate, the study investigates how responsibility is allocated to humans in AI-assisted decision making.
- The authors identify the AIHR effect, whereby a human collaborating with AI is judged more responsible than one collaborating with another human, and probe its underlying mechanism.
- AIHR appears to arise because AI is perceived as a low-autonomy tool, leaving the human as the perceived locus of responsibility.
Abstract
As organizations increasingly deploy AI as a teammate rather than a standalone tool, morally consequential mistakes often arise from joint human-AI workflows in which causality is ambiguous. We ask how people allocate responsibility in these hybrid-agent settings. Across four experiments (N = 1,801) in an AI-assisted lending context (e.g., discriminatory rejection, irresponsible lending, and low-harm filing errors), participants consistently attributed more responsibility to the human decision maker when the human was paired with AI than when paired with another human (by an average of 10 points on a 0-100 scale across studies). This AI-Induced Human Responsibility (AIHR) effect held across high and low harm scenarios and persisted even where self-serving blame-shifting (when the human in question was the self) would be expected. Process evidence indicates that AIHR is explained by inferences of agent autonomy: AI is seen as a constrained implementer, which makes the human the default locus of discretionary responsibility. Alternative mechanisms (mind perception; self-threat) did not account for the effect. These findings extend research on algorithm aversion, hybrid AI-human organizational behavior and responsibility gaps in technology by showing that AI-human teaming can increase (rather than dilute) human responsibility, with implications for accountability design in AI-enabled organizations.