AIDB Daily Papers
The Shrinking Lifespan of LLMs in Science: Tracing the Rise and Fall of Research Use
※ The title and key points below are AI-generated; please consult the original paper for authoritative details.
Key points
- A large-scale analysis of LLM usage in scientific papers reveals patterns of model adoption and abandonment.
- A model's release date influences its lifecycle more strongly than its architecture or scale, suggesting accelerating technological turnover.
- Scientific adoption follows an inverted-U shape: usage rises after release, peaks, and then declines as newer models appear.
Abstract
Scaling laws describe how language model capabilities grow with compute and data, but say nothing about how long a model matters once released. We provide the first large-scale empirical account of how scientists adopt and abandon language models over time. We track 62 LLMs across over 108k citing papers (2018-2025), each with at least three years of post-release data, and classify every citation as active adoption or background reference to construct per-model adoption trajectories that raw citation counts cannot resolve. We find three regularities. First, scientific adoption follows an inverted-U trajectory: usage rises after release, peaks, and declines as newer models appear, a pattern we term the *scientific adoption curve*. Second, this curve is compressing: each additional release year is associated with a 27% reduction in time-to-peak adoption ($p < 0.001$), robust to minimum-age thresholds and controls for model size. Third, release timing dominates model-level attributes as a predictor of lifecycle dynamics. Release year explains both time-to-peak and scientific lifespan more strongly than architecture, openness, or scale, though model size and access modality retain modest predictive power for total adoption volume. Together, these findings complement scaling laws with adoption-side regularities and suggest that the forces driving rapid capability progress may be the same forces compressing scientific relevance.
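To make the compression finding concrete: a constant 27% reduction per release year implies geometric decay of time-to-peak adoption. The sketch below is illustrative only and not from the paper; the 24-month baseline is a hypothetical starting value, not a reported estimate.

```python
def time_to_peak(years_since_baseline, baseline_months=24.0, annual_reduction=0.27):
    """Projected months from release to peak adoption, assuming a constant
    27% multiplicative reduction per additional release year (hypothetical
    baseline of 24 months)."""
    return baseline_months * (1.0 - annual_reduction) ** years_since_baseline

# Each later cohort peaks sooner under this assumption.
for k in range(6):
    print(f"year +{k}: {time_to_peak(k):.1f} months to peak")
```

Under these assumed numbers, time-to-peak roughly halves about every two release years, which is one way to read the paper's claim that scientific relevance is compressing.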