AIDB Daily Papers
Understanding the "Coordination Gap" in Human-AI Collaboration
Note: The Japanese title and key points were auto-generated by AI. Please consult the original paper for accurate details.
Key Points
- This study analyzes coordination challenges in collaborative work with LLMs.
- It argues that collaboration with AI depends not only on model capability but on the grounding conditions underlying the interaction.
- It proposes three structures of human-AI work ("one-shot assistance," "weak collaboration," and "grounded collaboration") and identifies the factors that cause coordination to break down.
Abstract
LLMs are increasingly presented as collaborators in programming, design, writing, and analysis. Yet the practical experience of working with them often falls short of this promise. In many settings, users must diagnose misunderstandings, reconstruct missing assumptions, and repeatedly repair misaligned responses. This poster introduces a conceptual framework for understanding why such collaboration remains fragile. Drawing on a constructivist grounded theory analysis of 16 interviews with designers, developers, and applied AI practitioners working on LLM-enabled systems, and informed by literature on human-AI collaboration, we argue that stable collaboration depends not only on model capability but on the interaction's grounding conditions. We distinguish three recurrent structures of human-AI work: one-shot assistance, weak collaboration with asymmetric repair, and grounded collaboration. We propose that collaboration breaks down when the appearance of partnership outpaces the grounding capacity of the interaction and contribute a framework for discussing grounding, repair, and interaction structure in LLM-enabled work.