AIDB Daily Papers
4D Representations for Training-Free Agentic Reasoning from Monocular Laparoscopic Video
※ The title and key points were automatically generated by AI. Please consult the original paper for accurate details.
Key Points
- Proposes a framework of 4D representations that enables spatiotemporal reasoning for AI in soft-tissue surgery.
- Novel in enabling natural-language reasoning grounded in both time and 3D space, accounting for the spatial complexity of surgical scenes.
- Combining the 4D representation with a general-purpose MLLM substantially improves spatiotemporal understanding and 4D grounding.
Abstract
Spatiotemporal reasoning is a fundamental capability for artificial intelligence (AI) in soft tissue surgery, paving the way for intelligent assistive systems and autonomous robotics. While 2D vision-language models show increasing promise at understanding surgical video, the spatial complexity of surgical scenes suggests that reasoning systems may benefit from explicit 4D representations. Here, we propose a framework for equipping surgical agents with spatiotemporal tools based on an explicit 4D representation, enabling AI systems to ground their natural language reasoning in both time and 3D space. Leveraging models for point tracking, depth, and segmentation, we develop a coherent 4D model with spatiotemporally consistent tool and tissue semantics. A Multimodal Large Language Model (MLLM) then acts as an agent on tools derived from the explicit 4D representation (e.g., trajectories) without any fine-tuning. We evaluate our method on a new dataset of 134 clinically relevant questions and find that the combination of a general purpose reasoning backbone and our 4D representation significantly improves spatiotemporal understanding and allows for 4D grounding. We demonstrate that spatiotemporal intelligence can be "assembled" from 2D MLLMs and 3D computer vision models without additional training. Code, data, and examples are available at https://tum-ai.github.io/surg4d/
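The abstract describes an MLLM acting as an agent over tools derived from an explicit 4D representation (e.g., point trajectories). The sketch below illustrates that pattern in minimal form: a toy 4D track structure, one trajectory tool, and a stand-in dispatcher in place of the MLLM. All names and data structures are hypothetical, not the authors' actual API.

```python
# Hypothetical sketch of the "agent on 4D tools" idea from the abstract:
# an explicit 4D representation (3D points over time) is exposed to a
# reasoning backbone through simple tool functions such as trajectory
# queries. All names and structures are illustrative, not the paper's API.
from dataclasses import dataclass

@dataclass
class Track4D:
    """One tracked point: per-frame (x, y, z) positions in metres."""
    label: str  # semantic label, e.g. "grasper_tip"
    positions: list[tuple[float, float, float]]

def trajectory_length(track: Track4D) -> float:
    """Total 3D path length of a track -- one example of a 4D 'tool'
    an MLLM agent could call to ground its language reasoning."""
    total = 0.0
    for (x0, y0, z0), (x1, y1, z1) in zip(track.positions, track.positions[1:]):
        total += ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
    return total

def answer_question(question: str, tracks: dict[str, Track4D]) -> str:
    """Stand-in for the MLLM agent: dispatch a spatiotemporal question
    to a tool over the 4D representation (here, by keyword matching)."""
    if "far" in question or "distance" in question:
        label = max(tracks, key=lambda k: trajectory_length(tracks[k]))
        return f"{label} moved {trajectory_length(tracks[label]):.3f} m"
    return "unsupported question in this sketch"

tracks = {
    "grasper_tip": Track4D("grasper_tip", [(0, 0, 0), (0.01, 0, 0), (0.01, 0.01, 0)]),
    "tissue_point": Track4D("tissue_point", [(0, 0, 0), (0, 0, 0.002)]),
}
print(answer_question("How far did the instrument move?", tracks))  # -> grasper_tip moved 0.020 m
```

In the paper itself, the tracks would come from point-tracking, depth, and segmentation models, and a general-purpose MLLM would choose which tool to invoke without any fine-tuning.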