AIDB Daily Papers
CogInstrument: Cognitive Process Modeling for Bidirectional Human-LLM Alignment in Planning Tasks
Note: The title and key points below are AI-generated. Please consult the original paper for accurate details.
Key Points
- CogInstrument is a system that represents users' reasoning through cognitive motifs, capturing the causal dependencies and revisable assumptions inherent in human decision-making.
- Its novelty lies in resolving the cognitive misalignment of conventional LLM interfaces, which fail to externalize users' reasoning structures, and in enabling structural control over the collaboration.
- In experiments, CogInstrument encouraged verification of the logical grounding of LLM outputs and significantly improved user agency, trust, and structural control.
Abstract
Although Large Language Models (LLMs) demonstrate proficiency in knowledge-intensive tasks, current interfaces frequently precipitate cognitive misalignment by failing to externalize users' underlying reasoning structures. Existing tools typically represent intent as "flat lists," thereby disregarding the causal dependencies and revisable assumptions inherent in human decision-making. We introduce CogInstrument, a system that represents user reasoning through cognitive motifs: compositional, revisable units comprising concepts linked by causal dependencies. CogInstrument extracts these motifs from natural language interactions and renders them as editable graphical structures to facilitate bidirectional alignment. This structural externalization enables both the user and the LLM to inspect, negotiate, and reconcile reasoning processes iteratively. A within-subjects study (N=12) demonstrates that CogInstrument explicitly surfaces implicit reasoning structures, facilitating more targeted revision and reusability over conventional LLM-based dialogue interfaces. By enabling users to verify the logical grounding of LLM outputs, CogInstrument significantly enhances user agency, trust, and structural control over the collaboration. This work formalizes cognitive motifs as a fundamental unit for human-LLM alignment, providing a novel framework for achieving structured, reasoning-based human-AI collaboration.