AIDB Daily Papers
Simple vocabulary bans improve LLM reasoning more effectively than deep linguistic constraints
Note: the Japanese title and key points were generated automatically by AI. Please consult the original paper for accurate details.
Key Points
- This study tests the effect of vocabulary restrictions on LLM reasoning ability, examining the proposed link between specific vocabulary and cognitive structure.
- Contrary to the cognitive restructuring hypothesis, shallow constraints improved reasoning by forcing models off their default generation paths.
- Experiments showed that a neutral vocabulary ban with no role in logical inference was the most effective, with effect size inversely proportional to theoretical depth.
Abstract
A previous study reported that E-Prime (English without the verb "to be") selectively altered reasoning in language models, with cross-model correlations suggesting a structural signature tied to which vocabulary was removed. I designed a replication with active controls to test the proposed mechanism: cognitive restructuring through specific vocabulary-cognition mappings. The experiment tested five conditions (unconstrained control, E-Prime, No-Have, elaborated metacognitive prompt, neutral filler-word ban) across six models and seven reasoning tasks (N=15,600 trials, 11,919 after compliance filtering). Every prediction from the cognitive restructuring hypothesis was disconfirmed. All four treatments outperformed the control (83.0%), including both active controls predicted to show null effects. The neutral filler-word ban, banning words like "very" and "just" with no role in logical inference, produced the largest improvement (+6.7 pp), while E-Prime produced the smallest (+3.7 pp). The four conditions ranked in perfect inverse order of theoretical depth. The cross-model correlation signature did not replicate (mean r=0.005). These results are consistent with a simpler mechanism: any constraint that forces a model off its default generation path acts as an output regularizer, improving reasoning by disrupting fluent but shallow response patterns. The shallowest constraints work best because they impose monitoring load with minimal conceptual disruption. I present these findings as a case study in discovery through disconfirmation.
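The abstract mentions filtering trials for compliance with each vocabulary constraint (11,919 of 15,600 trials survived). The paper does not specify its filtering procedure; the sketch below is a hypothetical illustration of how such a check might work, with word lists drawn only from the examples named in the abstract (forms of "to be" for E-Prime; "very" and "just" for the neutral filler ban).

```python
import re

# Illustrative banned-word lists; the paper's actual lists are not given here.
EPRIME_BANNED = {"be", "is", "am", "are", "was", "were", "been", "being"}
FILLER_BANNED = {"very", "just"}  # the two examples cited in the abstract

def complies(response: str, banned: set[str]) -> bool:
    """Return True if the response contains none of the banned words."""
    tokens = set(re.findall(r"[a-z']+", response.lower()))
    return banned.isdisjoint(tokens)

# An E-Prime violation versus a compliant rewrite
assert not complies("The answer is 42.", EPRIME_BANNED)
assert complies("The answer equals 42.", EPRIME_BANNED)
```

A real pipeline would also need to handle contractions ("it's"), morphological variants, and quoted text, so this word-level check is only a first approximation of compliance filtering.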