AIDB Daily Papers
Multilingual Sentiment Analysis: Dimensional Emotion Regression with XLM-RoBERTa
※ The Japanese title and key points were generated automatically by AI; please consult the original paper for the exact content.
Key Points
- The work addresses dimensional sentiment regression, predicting the intensity of emotion in multilingual text as continuous values.
- Its key finding is that task-specific fine-tuning achieves performance surpassing large language models.
- Fine-tuned XLM-RoBERTa substantially outperformed LLMs such as GPT-5.2 and LLaMA-3.
Abstract
Dimensional Aspect-Based Sentiment Analysis (DimABSA) extends traditional ABSA from categorical polarity labels to continuous valence-arousal (VA) regression. This paper describes a system developed for Track A - Subtask 1 (Dimensional Aspect Sentiment Regression), aiming to predict real-valued VA scores in the [1, 9] range for each given aspect in a text. A fine-tuning approach based on XLM-RoBERTa-base is adopted, constructing the input as [CLS] T [SEP] a_i [SEP] and training dual regression heads with sigmoid-scaled outputs for valence and arousal prediction. Separate models are trained for each language-domain combination (English and Chinese across restaurant, laptop, and finance domains), and training and development sets are merged for final test predictions. In development experiments, the fine-tuning approach is compared against several large language models including GPT-5.2, LLaMA-3-70B, LLaMA-3.3-70B, and LLaMA-4-Maverick under a few-shot prompting setting, demonstrating that task-specific fine-tuning substantially and consistently outperforms these LLM-based methods across all evaluation datasets. The code is publicly available at https://github.com/tongwu17/SemEval-2026-Task3-Track-A.
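The abstract states that the regression heads use sigmoid-scaled outputs to produce valence-arousal scores in the [1, 9] range. A minimal sketch of that output scaling is below; the XLM-RoBERTa backbone, the [CLS] T [SEP] a_i [SEP] input encoding, and the function names are omitted or illustrative, not the authors' actual implementation.

```python
import math

def sigmoid(z: float) -> float:
    """Standard logistic function, mapping any real logit into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def scale_to_va(z: float, lo: float = 1.0, hi: float = 9.0) -> float:
    """Map an unbounded regression-head logit into the bounded VA range.

    A logit of 0 lands at the midpoint (5.0); large positive or negative
    logits saturate toward 9 and 1 respectively, so predictions can never
    leave the annotation scale.
    """
    return lo + (hi - lo) * sigmoid(z)

# Example: one head's raw logits for valence on three aspects.
logits = [-2.0, 0.0, 3.5]
scores = [round(scale_to_va(z), 2) for z in logits]  # each in [1, 9]
```

In the paper's setup there would be two such heads (one for valence, one for arousal) on top of the pooled XLM-RoBERTa representation, trained per language-domain pair.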