AIDB Daily Papers
料理のリスクを炙り出す:大規模言語モデルにおける食品安全リスクの評価と軽減
※ 日本語タイトル・ポイントはAIによる自動生成です。正確な内容は原論文をご確認ください。
ポイント
- 大規模言語モデル(LLM)の食品安全に関するリスクを評価するため、FDAガイドラインに基づいたベンチマークFoodGuardBenchを構築した。
- 既存のLLMは、特定の攻撃に対して脆弱で、有害な指示を生成する可能性があり、既存の安全対策では不十分であることが判明した。
- 食品安全に特化した安全対策モデルFoodGuard-4Bを開発し、LLMを食品関連領域のリスクから保護することを目指す。
Abstract
Large language models (LLMs) are increasingly deployed for everyday tasks, including food preparation and health-related guidance. However, food safety remains a high-stakes domain where inaccurate or misleading information can cause severe real-world harm. Despite these risks, current LLMs and safety guardrails lack rigorous alignment tailored to domain-specific food hazards. To address this gap, we introduce FoodGuardBench, the first comprehensive benchmark comprising 3,339 queries grounded in FDA guidelines, designed to evaluate the safety and robustness of LLMs. By constructing a taxonomy of food safety principles and employing representative jailbreak attacks (e.g., AutoDAN and PAP), we systematically evaluate existing LLMs and guardrails. Our evaluation results reveal three critical vulnerabilities: First, current LLMs exhibit sparse safety alignment in the food-related domain, easily succumbing to a few canonical jailbreak strategies. Second, when compromised, LLMs frequently generate actionable yet harmful instructions, inadvertently empowering malicious actors and posing tangible risks. Third, existing LLM-based guardrails systematically overlook these domain-specific threats, failing to detect a substantial volume of malicious inputs. To mitigate these vulnerabilities, we introduce FoodGuard-4B, a specialized guardrail model fine-tuned on our datasets to safeguard LLMs within food-related domains.
Paper AI Chat
この論文のPDF全文を対象にAIに質問できます。
質問の例: