AIDB Daily Papers
Attesting LLM Pipelines: Verifiable Training and Enforcement of Release Requirements
※ The title and key points above are AI-generated; please consult the original paper for the exact content.
Key Points
- This work addresses supply-chain risks posed by third-party artifacts in large language model (LLM) systems.
- Its novelty lies in cryptographically binding training and release claims to the artifacts they describe, enabling consistent enforcement across teams and pipeline stages.
- It proposes a verification gate that enforces safe-loading and static-scanning policies and applies secure deployment constraints.
Abstract
Modern Large Language Model (LLM) systems are assembled from third-party artifacts such as pre-trained weights, fine-tuning adapters, datasets, dependency packages, and container images, fetched through automated pipelines. This speed comes with supply-chain risks, including compromised dependencies, malicious hub artifacts, unsafe deserialization, forged provenance, and backdoored models. A core gap is that training and release claims (e.g., data and code lineage, build environment, and security scanning results) are rarely cryptographically bound to the artifacts they describe, making enforcement inconsistent across teams and stages. We propose an attestation-aware promotion gate: before an artifact is admitted into trusted environments (training, fine-tuning, deployment), the gate verifies claim evidence, enforces safe loading and static scanning policies, and applies secure-by-default deployment constraints. When organizations operate runtime security tooling, the same gate can optionally ingest standardized dynamic signals via plugins to reduce uncertainty for high-risk artifacts. We outline a practical claims-to-controls mapping and an evaluation blueprint using representative supply-chain scenarios and operational metrics (coverage and decisions), charting a path toward a full research paper.
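The promotion gate described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the `Attestation` record, the field names, and the extension allowlist are all hypothetical, and it stands in for real attestation verification (e.g., signature checking) with a simple SHA-256 digest comparison. It shows the three enforcement steps the abstract names: verifying that a claim is cryptographically bound to the artifact, refusing unsafe deserialization formats, and requiring a passing static scan.

```python
import hashlib
from dataclasses import dataclass
from pathlib import Path

# Hypothetical safe-loading policy: formats that do not execute
# arbitrary code on load (unlike, e.g., pickle-based checkpoints).
SAFE_EXTENSIONS = {".safetensors", ".json", ".txt"}

@dataclass
class Attestation:
    """A training/release claim bound to one artifact by its digest."""
    artifact_sha256: str   # digest the claim was issued against
    scan_passed: bool      # result of the attested static scan

def promotion_gate(artifact: Path, attestation: Attestation) -> bool:
    """Admit the artifact into a trusted environment only if all checks pass."""
    # 1. Cryptographic binding: the file's digest must match the claim.
    digest = hashlib.sha256(artifact.read_bytes()).hexdigest()
    if digest != attestation.artifact_sha256:
        return False
    # 2. Safe-loading policy: reject code-executing serialization formats.
    if artifact.suffix not in SAFE_EXTENSIONS:
        return False
    # 3. Static-scanning policy: the attested scan must have passed.
    return attestation.scan_passed
```

In a real deployment, step 1 would verify a signed attestation (e.g., in-toto/SLSA provenance) rather than a bare digest, and the gate would run before any training, fine-tuning, or deployment stage consumes the artifact.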