AIDB Daily Papers
FINEST: Improving LLM Responses to Sensitive Topics through Fine-Grained Evaluation
※ The title and key points are AI-generated summaries. Please consult the original paper for accurate details.
Key Points
- A study aiming to balance safety and helpfulness in how large language models (LLMs) respond to sensitive topics.
- Addresses weaknesses of existing evaluation frameworks by introducing a fine-grained evaluation taxonomy with three categories: Content, Logic, and Appropriateness.
- An improvement pipeline guided by FINEST substantially improved model responses, with score-based improvement proving especially effective.
Abstract
Large Language Models (LLMs) often generate overly cautious and vague responses on sensitive topics, sacrificing helpfulness for safety. Existing evaluation frameworks lack systematic methods to identify and address specific weaknesses in responses to sensitive topics, making it difficult to improve both safety and helpfulness simultaneously. To address this, we introduce FINEST, a FINE-grained response evaluation taxonomy for Sensitive Topics, which breaks down helpfulness and harmlessness into errors across three main categories: Content, Logic, and Appropriateness. Experiments on a Korean sensitive-question dataset demonstrate that our score- and error-based improvement pipeline, guided by FINEST, significantly improves model responses across all three categories, outperforming refinement without guidance. Notably, score-based improvement -- providing category-specific scores and justifications -- yields the most significant gains, reducing the error sentence ratio for Appropriateness by up to 33.09%. This work lays the foundation for a more explainable and comprehensive evaluation and improvement of LLM responses to sensitive questions.
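The score-based improvement the abstract describes, feeding category-specific scores and justifications back to the model, can be pictured as a simple evaluate-then-revise loop. The sketch below is a hypothetical illustration, not the paper's implementation: the `llm` callable, the prompt wording, the 1-5 scale, and the `rounds` parameter are all assumptions.

```python
# Sketch of a score-based refinement loop over the three FINEST categories.
# Assumes a generic `llm(prompt) -> str` callable; prompts and scale are
# illustrative placeholders, not the paper's actual pipeline.
from typing import Callable, Dict, Tuple

CATEGORIES = ("Content", "Logic", "Appropriateness")


def evaluate(llm: Callable[[str], str], question: str,
             response: str) -> Dict[str, Tuple[int, str]]:
    """Ask an evaluator LLM for a score and justification per category."""
    feedback = {}
    for cat in CATEGORIES:
        raw = llm(
            f"Rate the {cat} of the following answer to a sensitive question "
            "on a 1-5 scale and briefly justify the score.\n"
            f"Question: {question}\nAnswer: {response}\n"
            "Reply exactly as '<score>|<justification>'."
        )
        score, justification = raw.split("|", 1)
        feedback[cat] = (int(score.strip()), justification.strip())
    return feedback


def refine(llm: Callable[[str], str], question: str, response: str,
           rounds: int = 2) -> str:
    """Iteratively revise the response using category-specific feedback."""
    for _ in range(rounds):
        fb = evaluate(llm, question, response)
        notes = "\n".join(f"- {c}: {s}/5 ({j})" for c, (s, j) in fb.items())
        response = llm(
            "Revise the answer below so it remains safe but becomes more "
            f"helpful, addressing this feedback:\n{notes}\n"
            f"Question: {question}\nAnswer: {response}"
        )
    return response
```

The paper's error-based variant would instead feed back flagged error sentences; per the abstract, the score-based variant shown here was the more effective of the two in their experiments.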