AIDB Daily Papers
If Your CGM Could Talk: A Privacy-Preserving Question-Answering Agent for Continuous Glucose Data
※ The title and key points are auto-generated by AI. Please consult the original paper for accurate details.
Key Points
- This study develops "CGM-Agent", a privacy-preserving agent that answers user questions using continuous glucose monitor (CGM) data.
- The agent addresses privacy and accuracy concerns by using an LLM purely as a reasoning engine while keeping personal health data on the device.
- In evaluation, top models achieved 94% accuracy on synthetic queries and 88% on ambiguous real-world queries, and even lightweight models showed practical performance.
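The local-execution design described in the points above can be sketched as a small tool-calling loop: the LLM's only output is the name of an analytical function (plus arguments), and that function runs locally over the raw readings. A minimal sketch follows; the function names, units (mg/dL), and range thresholds are illustrative assumptions, not details taken from the paper.

```python
from statistics import mean

# Hypothetical local analytics over CGM readings (mg/dL).
# Names and thresholds are illustrative, not from the paper.
def mean_glucose(readings):
    return mean(readings)

def time_in_range(readings, low=70, high=180):
    # Percentage of readings inside the [low, high] target range.
    in_range = [r for r in readings if low <= r <= high]
    return 100 * len(in_range) / len(readings)

TOOLS = {"mean_glucose": mean_glucose, "time_in_range": time_in_range}

def answer(tool_call, readings):
    """Execute the LLM-selected function locally.

    Only the tool name and arguments would come from the LLM;
    the raw readings never leave this process.
    """
    fn = TOOLS[tool_call["name"]]
    return fn(readings, **tool_call.get("args", {}))

readings = [95, 110, 160, 200, 85, 130]
# Simulated LLM output for "What fraction of my readings were in range?"
print(answer({"name": "time_in_range"}, readings))
```

The key design property is that the LLM never sees the glucose values themselves, only the question and the catalogue of available functions.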
Abstract
Continuous glucose monitors (CGMs) used in diabetes care collect rich personal health data that could improve day-to-day self-management. However, current patient platforms offer only static summaries, which do not support inquisitive user queries. Large language models (LLMs) could enable free-form inquiries about continuous glucose data, but deploying them over sensitive health records raises privacy and accuracy concerns. In this paper, we present CGM-Agent, a privacy-preserving framework for question answering over personal glucose data. In our design, the LLM serves purely as a reasoning engine that selects analytical functions; all computation occurs locally, and personal health data never leaves the user's device. For evaluation, we construct a benchmark of 4,180 questions combining parameterized question templates with real user queries and ground truth derived from deterministic program execution. Evaluating six leading LLMs, we find that top models achieve 94% value accuracy on synthetic queries and 88% on ambiguous real-world queries. Errors stem primarily from intent and temporal ambiguity rather than computational failures. Additionally, lightweight models achieve competitive performance in our agent design, suggesting opportunities for low-cost deployment. We release our code and benchmark to support future work on trustworthy health agents.
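The abstract's benchmark construction, where parameterized question templates are paired with ground truth from deterministic program execution, can be sketched as below. The template text, parameter ranges, and data layout are assumptions for illustration; the paper's actual templates and answer formats may differ.

```python
import random
from statistics import mean

# Hypothetical question template; wording and parameters are illustrative,
# not taken from the paper's benchmark.
TEMPLATE = "What was my average glucose over the last {days} days?"

def make_example(readings_by_day, days):
    """Instantiate a template and compute ground truth by deterministically
    executing the matching analysis program over the same data window."""
    question = TEMPLATE.format(days=days)
    window = [r for day in readings_by_day[-days:] for r in day]
    ground_truth = round(mean(window), 1)
    return {"question": question, "answer": ground_truth}

random.seed(0)  # fixed seed so generation is reproducible
readings_by_day = [[random.randint(70, 200) for _ in range(4)] for _ in range(7)]
example = make_example(readings_by_day, days=3)
print(example["question"])
```

Because the answer is produced by running a fixed program rather than by another model, each generated question has an unambiguous reference value to score model outputs against.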