AIDB Daily Papers
Beyond Explainable AI (XAI): An Overdue Paradigm Shift and Post-XAI Research Directions
※ The translated title and key points are auto-generated by AI. Please refer to the original paper for accurate details.
Key Points
- Provides a cross-disciplinary examination of the limitations of Explainable AI (XAI) for deep neural networks and large language models.
- Identifies fundamental problems in current XAI research and advocates a paradigm shift toward reliable and certified AI development.
- Proposes four post-XAI directions: verification-focused Interactive AI, AI Epistemology, User-Sensible AI, and Model-Centered Interpretability.
Abstract
This study provides a cross-disciplinary examination of Explainable Artificial Intelligence (XAI) approaches, focusing on deep neural networks (DNNs) and large language models (LLMs), and identifies empirical and conceptual limitations in current XAI. We discuss critical symptoms that stem from deeper root causes (i.e., two paradoxes, two conceptual confusions, and five false assumptions). These fundamental problems within the current XAI research field reveal three insights: experimentally, XAI exhibits significant flaws; conceptually, it is paradoxical; and pragmatically, further attempts to reform the paradoxical XAI might exacerbate its confusion, demanding fundamental shifts and new research directions. To move beyond XAI's limitations, we propose a four-pronged synthesized paradigm shift toward reliable and certified AI development. These four components include: verification-focused Interactive AI (IAI) to establish scientific community protocols for certifying AI system performance rather than attempting post-hoc explanations, AI Epistemology for rigorous scientific foundations, User-Sensible AI to create context-aware systems tailored to specific user communities, and Model-Centered Interpretability for faithful technical analysis, together offering comprehensive post-XAI research directions.