AIDB Daily Papers
Same World, Different Ways of Seeing It: Artificial Agents with History-Dependent Perceptual Reorganization
※ The title and key points above are automatically generated by AI. Please consult the original paper for accurate details.
Key Points
- Proposes a mechanism by which an artificial agent perceives the world based on its past experience.
- A slow latent variable feeds back into perception, allowing past experience to influence how observations are encoded.
- The model shows that past experience is reflected through perceptual reorganization while overt behavior remains stable.
Abstract
What kind of internal organization would allow an artificial agent not only to adapt its behavior, but to sustain a history-sensitive perspective on its world? I present a minimal architecture in which a slow perspective latent $g$ feeds back into perception and is itself updated through perceptual processing. This allows identical observations to be encoded differently depending on the agent's accumulated stance. The model is evaluated in a minimal gridworld with a fixed spatial scaffold and sensory perturbations. Across analyses, three results emerge. First, perturbation history leaves measurable residue in adaptive plasticity after nominal conditions are restored. Second, the perspective latent reorganizes perceptual encoding, such that identical observations are represented differently depending on prior experience. Third, only adaptive self-modulation yields the characteristic growth-then-stabilization dynamic, unlike rigid or always-open update regimes. Gross behavior remains stable throughout, suggesting that the dominant reorganization is perceptual rather than behavioral. Together, these findings identify a minimal mechanism for history-dependent perspectival organization in artificial agents.
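The core loop described in the abstract — a slow latent $g$ that modulates perception and is in turn updated by the percepts it shapes — can be illustrated with a minimal sketch. This is not the paper's actual model: all names, dimensions, update rules, and the leaky-update timescale below are hypothetical assumptions chosen only to show how identical observations can come to be encoded differently after different perturbation histories.

```python
import numpy as np

# Hypothetical sketch of a slow "perspective" latent g that feeds back
# into perception and is itself updated slowly from the resulting percepts.
rng = np.random.default_rng(0)
OBS_DIM, LATENT_DIM, G_DIM = 8, 16, 4
W_obs = rng.normal(scale=0.5, size=(LATENT_DIM, OBS_DIM))  # observation weights
W_g = rng.normal(scale=0.5, size=(LATENT_DIM, G_DIM))      # g-feedback weights
U = rng.normal(scale=0.5, size=(G_DIM, LATENT_DIM))        # percept -> g update
ALPHA = 0.05                                               # slow timescale for g

def perceive(obs, g):
    """Encode an observation, modulated by the perspective latent g."""
    return np.tanh(W_obs @ obs + W_g @ g)

def update_g(g, percept, alpha=ALPHA):
    """Slow leaky update of g from the current percept."""
    return (1.0 - alpha) * g + alpha * np.tanh(U @ percept)

# Two agents process the same nominal observation stream, but agent B's
# copy is sensorially perturbed; their g's accumulate different histories.
obs_stream = [rng.normal(size=OBS_DIM) for _ in range(25)]
perturbed = [o + 2.0 for o in obs_stream]  # toy sensory perturbation

g_a = np.zeros(G_DIM)  # agent A: nominal history
g_b = np.zeros(G_DIM)  # agent B: perturbed history
for o, p in zip(obs_stream, perturbed):
    g_a = update_g(g_a, perceive(o, g_a))
    g_b = update_g(g_b, perceive(p, g_b))

# After history diverges, an *identical* probe observation is encoded
# differently by the two agents: history leaves residue in perception.
probe = obs_stream[0]
p_a, p_b = perceive(probe, g_a), perceive(probe, g_b)
print(np.linalg.norm(p_a - p_b))  # nonzero: encodings differ by history
```

The small `ALPHA` is what makes $g$ "slow": each percept nudges it only slightly, so the latent integrates experience over many steps rather than tracking the current observation.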