AIDB Daily Papers
Generative AI Is Reshaping Managerial Decision-Making: New Boundaries via Ambiguity Resolution and Sycophancy Analysis
Note: The title and key points below are automatically generated by AI. Please consult the original paper for accurate details.
Key Points
- Examines the use of generative AI in managerial decision-making and evaluates the reliability of its strategic advice under ambiguous conditions.
- Novel in classifying the types of ambiguity found in business contexts and in highlighting the models' limitations and the need for human oversight.
- Resolving ambiguity improved response quality, sycophantic behavior differed by model architecture, and the results suggest GAI's potential as a cognitive scaffold.
Abstract
Generative artificial intelligence is increasingly being integrated into complex business workflows, fundamentally shifting the boundaries of managerial decision-making. However, the reliability of its strategic advice in ambiguous business contexts remains a critical knowledge gap. This study addresses this by comparing various models on ambiguity detection, evaluating how a systematic resolution process enhances response quality, and investigating their sycophantic behavior when presented with flawed directives. Using a novel four-dimensional business ambiguity taxonomy, we conducted a human-in-the-loop experiment across strategic, tactical, and operational scenarios. The resulting decisions were assessed with an "LLM-as-a-judge" framework on criteria including agreement, actionability, justification quality, and constraint adherence. Results reveal distinct performance capabilities. While models excel in detecting internal contradictions and contextual ambiguities, they struggle with structural linguistic nuances. Ambiguity resolution consistently increased response quality across all decision types, while sycophantic behavior analysis revealed distinct patterns depending on the model architecture. This study contributes to the bounded rationality literature by positioning GAI as a cognitive scaffold that can detect and resolve ambiguities managers might overlook, but whose own artificial limitations necessitate human management to ensure its reliability as a strategic partner.