AIDB Daily Papers
Sanity Checks for Agentic Data Science: Can AI Really Be Trusted?
※ The title and key points are automatically generated by AI. Please refer to the original paper for the accurate content.
Key Points
- This study proposes sanity checks, a verification method, to address the problem of data-analysis AI producing false conclusions.
- The method is significant in that it applies small perturbations to the data to verify whether the AI can produce stable results without being misled by noise.
- Experiments confirmed that the proposed method can assess the reliability of AI, and the validity of the AI's conclusions was evaluated on real-world datasets.
Abstract
Agentic data science (ADS) pipelines have grown rapidly in both capability and adoption, with systems such as OpenAI Codex now able to directly analyze datasets and produce answers to statistical questions. However, these systems can reach falsely optimistic conclusions that are difficult for users to detect. To address this, we propose a pair of lightweight sanity checks grounded in the Predictability-Computability-Stability (PCS) framework for veridical data science. These checks use reasonable perturbations to screen whether an agent can reliably distinguish signal from noise, acting as a falsifiability constraint that can expose affirmative conclusions as unsupported. Together, the two checks characterize the trustworthiness of an ADS output, e.g. whether it has found stable signal, is responding to noise, or is sensitive to incidental aspects of the input. We validate the approach on synthetic data with controlled signal-to-noise ratios, confirming that the sanity checks track ground-truth signal strength. We then demonstrate the checks on 11 real-world datasets using OpenAI Codex, characterizing the trustworthiness of each conclusion and finding that in 6 of the datasets an affirmative conclusion is not well-supported, even though a single ADS run may support one. We further analyze failure modes of ADS systems and find that ADS self-reported confidence is poorly calibrated to the empirical stability of its conclusions.
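To make the abstract's idea concrete, below is a minimal, hypothetical sketch of the two kinds of perturbation-based checks it describes: a falsifiability (noise) check and a stability check. This is not the paper's implementation; `agent_conclusion` is a stand-in for a full ADS agent (a simple correlation test plays that role here), and the function names, trial counts, and thresholds are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def agent_conclusion(x, y):
    # Stand-in for an ADS agent (e.g., Codex analyzing a dataset).
    # Returns True if it would report an affirmative conclusion; here a
    # simple correlation test plays that role for illustration only.
    _, p = stats.pearsonr(x, y)
    return p < 0.05

def noise_check(x, y, n_trials=20, seed=None):
    """Falsifiability check: destroy the signal by permuting y, then see
    how often the agent still reports an affirmative conclusion."""
    rng = np.random.default_rng(seed)
    false_positives = sum(
        agent_conclusion(x, rng.permutation(y)) for _ in range(n_trials)
    )
    # Should be near the nominal false-positive level if the agent
    # is not merely responding to noise.
    return false_positives / n_trials

def stability_check(x, y, n_trials=20, seed=None):
    """Stability check: re-run the agent on bootstrap resamples and
    measure how often the affirmative conclusion survives."""
    rng = np.random.default_rng(seed)
    n = len(x)
    agree = sum(
        agent_conclusion(x[idx], y[idx])
        for idx in (rng.integers(0, n, size=n) for _ in range(n_trials))
    )
    return agree / n_trials

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=200)
    y = 0.5 * x + rng.normal(size=200)  # moderate true signal
    print("noise check (permutation false-positive rate):", noise_check(x, y, seed=1))
    print("stability check (bootstrap agreement rate):   ", stability_check(x, y, seed=2))
```

Under this sketch, a trustworthy affirmative conclusion would combine a noise-check rate near the nominal false-positive level with high bootstrap agreement; the abstract's finding that ADS self-reported confidence is poorly calibrated is precisely why such external checks, rather than the agent's own confidence, are needed.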