AIDB Daily Papers
Reasons Behind Next-Occupation Recommendations: Toward Better LLM Performance
* The title and points below are AI-generated summaries. Please consult the original paper for the exact content.
Key Points
- Develops a method that generates a "reason" summarizing a user's preferences from their past education and career history, then recommends the next occupation based on that reason.
- Derives high-quality oracle reasons and fine-tunes LLMs on them, aligning the models with career paths and the unobserved reasons behind each occupation decision.
- The proposed approach substantially improves next-occupation prediction accuracy, and the experiments show that prediction accuracy depends on the quality of the generated reasons.
Abstract
In this work, we develop a novel reasoning approach to enhance the performance of large language models (LLMs) in future occupation prediction. In this approach, a reason generator first derives a "reason" for a user from his/her past education and career history. The reason summarizes the user's preferences and is used as the input of an occupation predictor to recommend the user's next occupation. This two-step occupation prediction approach is, however, non-trivial, as LLMs are not aligned with career paths or the unobserved reasons behind each occupation decision. We therefore propose to fine-tune LLMs, improving their reasoning and occupation prediction performance. We first derive high-quality oracle reasons, as measured by factuality, coherence, and utility criteria, using an LLM-as-a-Judge. These oracle reasons are then used to fine-tune small LLMs to perform reason generation and next occupation prediction. Our extensive experiments show that: (a) our approach effectively enhances LLMs' accuracy in next occupation prediction, making them comparable to fully supervised methods and outperforming unsupervised methods; (b) a single LLM fine-tuned to perform reason generation and occupation prediction outperforms two LLMs fine-tuned to perform the tasks separately; and (c) the next occupation prediction accuracy depends on the quality of generated reasons. Our code is available at https://github.com/Sarasarahhhhh/job_prediction.
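The two-step "reason, then predict" pipeline the abstract describes can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the prompt wording and all function names (`generate_reason`, `predict_next_occupation`, `toy_llm`) are assumptions, and a toy stand-in replaces the fine-tuned LLMs so the example runs without any model dependency.

```python
# Hypothetical sketch of the paper's two-step reason -> occupation pipeline.
# Prompts and function names are illustrative assumptions, not the authors' code.

def generate_reason(llm, history):
    """Step 1: summarize the user's preferences from education/career history."""
    prompt = (
        "Given this education and career history, write a short reason "
        f"summarizing the user's preferences:\n{history}"
    )
    return llm(prompt)

def predict_next_occupation(llm, history, reason):
    """Step 2: recommend the next occupation, conditioned on the reason."""
    prompt = (
        f"History:\n{history}\nReason:\n{reason}\n"
        "Predict the user's next occupation:"
    )
    return llm(prompt)

def two_step_prediction(llm, history):
    """Chain the two steps: reason generation feeds occupation prediction."""
    reason = generate_reason(llm, history)
    return predict_next_occupation(llm, history, reason), reason

# Toy stand-in "LLM" so the sketch is self-contained and runnable.
def toy_llm(prompt):
    if "Predict" not in prompt:
        return "Prefers data-oriented roles with growing seniority."
    return "Senior Data Scientist"

if __name__ == "__main__":
    history = "BSc Statistics -> Data Analyst -> Data Scientist"
    occupation, reason = two_step_prediction(toy_llm, history)
    print(occupation)
```

In the paper, a single fine-tuned LLM plays both roles (which the experiments found to outperform two separately fine-tuned models), and the fine-tuning targets are oracle reasons filtered by an LLM-as-a-Judge for factuality, coherence, and utility.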