AIDB Daily Papers
Strategic Polysemy in AI Discourse: A Philosophical Analysis of Language, Hype, and Power
Note: The Japanese title and key points were automatically generated by AI. Please consult the original paper for accurate details.
Key Points
- This study analyses how metaphorical or colloquial terms in AI, such as "hallucination", are used with strategic polysemy, sustaining multiple meanings at once.
- This practice, termed "glosslighting", pairs a narrow technical definition with intuitive, anthropomorphic associations, shaping investment in and perceptions of AI.
- Strategic polysemy in language fuels AI hype and functions as a sociotechnical mechanism that deflects ethical and epistemic scrutiny.
Abstract
This paper examines the strategic use of language in contemporary artificial intelligence (AI) discourse, focusing on the widespread adoption of metaphorical or colloquial terms like "hallucination", "chain-of-thought", "introspection", "language model", "alignment", and "agent". We argue that many such terms exhibit strategic polysemy: they sustain multiple interpretations simultaneously, combining narrow technical definitions with broader anthropomorphic or common-sense associations. In contemporary AI research and deployment contexts, this semantic flexibility produces significant institutional and discursive effects, shaping how AI systems are understood by researchers, policymakers, funders, and the public. To analyse this phenomenon, we introduce the concept of glosslighting: the practice of using technically redefined terms to evoke intuitive -- often anthropomorphic or misleading -- associations while preserving plausible deniability through restricted technical definitions. Glosslighting enables actors to benefit from the persuasive force of familiar language while maintaining the ability to retreat to narrower definitions when challenged. We argue that this practice contributes to AI hype cycles, facilitates the mobilisation of investment and institutional support, and influences public and policy perceptions of AI systems, while often deflecting epistemic and ethical scrutiny. By examining the linguistic dynamics of glosslighting and strategic polysemy, the paper highlights how language itself functions as a sociotechnical mechanism shaping the development and governance of AI.