AIDB Daily Papers
MOSS-VoiceGenerator: Generating Realistic Voices from Natural-Language Descriptions
※ The title and key points below were generated automatically by AI. Please consult the original paper for accurate details.
Key Points
- The authors develop a model for natural-language voice design that generates speaker timbres directly from textual descriptions.
- A novel aspect is training on large-scale, expressive speech data sourced from cinematic content, improving robustness to real-world acoustic variation.
- In subjective evaluations, the model outperformed existing voice design models in overall quality, instruction-following, and naturalness.
Abstract
Voice design from natural language aims to generate speaker timbres directly from free-form textual descriptions, allowing users to create voices tailored to specific roles, personalities, and emotions. Such controllable voice creation benefits a wide range of downstream applications, including storytelling, game dubbing, role-play agents, and conversational assistants, making it a significant task for modern Text-to-Speech models. However, existing models are largely trained on carefully recorded studio data, which produces speech that is clean and well-articulated yet lacks the lived-in qualities of real human voices. To address these limitations, we present MOSS-VoiceGenerator, an open-source instruction-driven voice generation model that creates new timbres directly from natural language prompts. Motivated by the hypothesis that exposure to real-world acoustic variation produces more perceptually natural voices, we train on large-scale expressive speech data sourced from cinematic content. Subjective preference studies demonstrate its superiority in overall performance, instruction-following, and naturalness compared to other voice design models.