AIDB Daily Papers
Data Analysis by Many AI Analysts: Exploring the Diversity of Agentic Data Science
Note: The title and key points are automatically generated by AI. Please consult the original paper for the precise content.
Key points
- The study uses fully autonomous AI analysts built on large language models (LLMs) to reproduce the analytic diversity of data analysis cheaply.
- What is new: the impact of diverse analytic approaches to the same data, which previous many-analyst studies could assess only with heavy coordination, can be evaluated at scale and efficiently with AI analysts.
- In the experiments, the AI analysts' results showed wide variation in effect sizes and p-values, and their binary judgments on whether the hypothesis was supported shifted substantially.
Abstract
The conclusions of empirical research depend not only on data but on a sequence of analytic decisions that published results seldom make explicit. Past "many-analyst" studies have demonstrated this: independent teams testing the same hypothesis on the same dataset regularly reach conflicting conclusions. But such studies require months of coordination among dozens of research groups and are therefore rarely conducted. In this work, we show that fully autonomous AI analysts built on large language models (LLMs) can reproduce a similar structured analytic diversity cheaply and at scale. We task these AI analysts with testing a pre-specified hypothesis on a fixed dataset, varying the underlying model and prompt framing across replicate runs. Each AI analyst independently constructs and executes a full analysis pipeline; an AI auditor then screens each run for methodological validity. Across three datasets spanning experimental and observational designs, AI analyst-produced analyses display wide dispersion in effect sizes, p-values, and binary decisions on supporting the hypothesis or not, frequently reversing whether a hypothesis is judged supported. This dispersion is structured: recognizable analytic choices in preprocessing, model specification, and inference differ systematically across LLM and persona conditions. Critically, the effects are steerable: reassigning the analyst persona or LLM shifts the distribution of outcomes even after excluding methodologically deficient runs.
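As a minimal, self-contained illustration of the dispersion the abstract describes (this is not the paper's actual pipeline), the sketch below runs four hypothetical "analysts" that differ in a single preprocessing decision, the outlier-trimming threshold, on one fixed synthetic dataset; the dataset, thresholds, and function names are assumptions for illustration only.

```python
# Illustrative sketch: recognizable analytic choices (here, outlier trimming)
# applied to the same fixed dataset disperse the estimated effect size.
import math
import random
import statistics

random.seed(0)
# Fixed synthetic dataset: control vs. treatment with a small true effect.
control = [random.gauss(0.0, 1.0) for _ in range(200)]
treatment = [random.gauss(0.3, 1.0) for _ in range(200)]

def cohens_d(a, b):
    """Standardized mean difference with a pooled standard deviation."""
    n1, n2 = len(a), len(b)
    s1, s2 = statistics.stdev(a), statistics.stdev(b)
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (statistics.fmean(b) - statistics.fmean(a)) / pooled

def trim(x, z):
    """Drop points more than z within-sample standard deviations from the mean."""
    m, s = statistics.fmean(x), statistics.stdev(x)
    return [v for v in x if abs(v - m) / s <= z]

def analyze(a, b, trim_z=None):
    """One 'analyst': optionally trim outliers, then estimate the effect."""
    if trim_z is not None:
        a, b = trim(a, trim_z), trim(b, trim_z)
    return cohens_d(a, b)

# Four analysts, identical except for one preprocessing decision.
effects = {f"trim_z={k}": analyze(control, treatment, k)
           for k in (None, 3.0, 2.0, 1.5)}
for name, d in effects.items():
    print(f"{name}: d = {d:.3f}")
```

Running this prints one effect-size estimate per analyst; in the paper the analogous variation comes from LLM and persona conditions rather than a hand-coded threshold.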