AIDB Daily Papers
Steering LLMs with brainwaves: an image-generation interface tailored to user preferences
※ The Japanese title and key points were generated automatically by AI. Please consult the original paper for accurate details.
Key Points
- The authors developed a system that estimates user satisfaction from EEG signals and uses it to control LLM-driven image generation in real time.
- Adapting EEG as a new interface is an important step toward letting users who have difficulty producing language input still benefit from LLMs.
- Experiments showed that user satisfaction can be predicted from EEG, opening a new path for integrating neural signals into LLMs.
Abstract
Large language models (LLMs) are becoming an increasingly important component of human-computer interaction, enabling users to coordinate a wide range of intelligent agents through natural language. While language-based interfaces are powerful and flexible, they implicitly assume that users can reliably produce explicit linguistic input, an assumption that may not hold for users with speech or motor impairments, e.g., Amyotrophic Lateral Sclerosis (ALS). In this work, we investigate whether neural signals can be used as an alternative input to LLMs, particularly to support those socially marginalized or underserved users. We build a simple brain-LLM interface, which uses EEG signals to guide image generation models at test time. Specifically, we first train a classifier to estimate user satisfaction from EEG signals. Its predictions are then incorporated into a test-time scaling (TTS) framework that dynamically adapts model inference using neural feedback collected during user evaluation. The experiments show that EEG can predict user satisfaction, suggesting that neural activity carries information relevant to real-time preference inference. These findings provide a first step toward integrating neural feedback into adaptive language-model inference, and we hope they open up new possibilities for future research on adaptive LLM interaction.
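The loop the abstract describes, in which an EEG-based satisfaction classifier scores generated candidates and a test-time scaling step keeps the best one, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the feature name `alpha_power`, the toy logistic predictor, and best-of-N selection as the TTS strategy are all assumptions for illustration.

```python
import math

def predict_satisfaction(eeg_features):
    """Toy stand-in for the paper's trained EEG classifier.

    Maps a single hypothetical band-power feature to a satisfaction
    probability via a logistic squash (illustrative only; the real
    classifier is trained on recorded EEG)."""
    alpha_power = eeg_features["alpha_power"]
    return 1.0 / (1.0 + math.exp(-(alpha_power - 0.5) * 10.0))

def best_of_n(candidates, eeg_readings):
    """Best-of-N test-time selection: pair each generated candidate
    with the EEG recorded while the user viewed it, score the EEG with
    the satisfaction predictor, and keep the highest-scoring candidate."""
    scored = [(predict_satisfaction(eeg), cand)
              for cand, eeg in zip(candidates, eeg_readings)]
    return max(scored)[1]

# Usage: three generated images, each paired with the user's EEG response.
candidates = ["image_a", "image_b", "image_c"]
eeg = [{"alpha_power": 0.2}, {"alpha_power": 0.9}, {"alpha_power": 0.4}]
print(best_of_n(candidates, eeg))  # the reading with the highest score wins
```

In a real system the predictor would be a classifier trained on labeled EEG epochs, and the neural feedback could steer generation more finely than a single best-of-N pick, but the interface shape (score neural feedback, adapt inference) is the same.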