AIDB Daily Papers

On the Concept of Violence: A Comparative Study of Human and AI Judgments

Original title: On the Concept of Violence: A Comparative Study of Human and AI Judgments
Authors: Mariachiara Stellato, Francesco Lancia, Chiara Galeazzi, Nico Curti
Published: 2026-02-19 | Topics: LLM, NLP, safety, interpretability, human judgment

Note: The title and key points below are AI-generated. Please consult the original paper for accurate details.

Key Points

  • The study conducts a comparative analysis of human and AI judgments on the concept of violence.
  • Using violence as a deliberately ambiguous concept, it investigates how LLMs make moral judgments.
  • The results suggest that LLMs collapse plural human interpretations into a single output, and may thereby mediate public reasoning.

Abstract

Background: What counts as violence is neither self-evident nor universally agreed upon. While physical aggression is prototypical, contemporary societies increasingly debate whether exclusion, humiliation, online harassment, or symbolic acts should be classified within the same moral category. At the same time, Large Language Models (LLMs) are being consulted in everyday contexts to interpret and label complex social behaviors. Whether these systems reproduce, reshape, or simplify human conceptions of violence remains an open question.

Methods: Here we present a systematic comparison between human judgements and LLM classifications across 22 scenarios carefully designed to be morally divisive, spanning physical and verbally aggressive behavior, relational dynamics, marginalization, symbolic actions, and verbal expressions. Human responses were compared with outputs from multiple instruction-tuned models of varying sizes and architectures. We conducted global, sentence-level, and thematic-domain analyses, and examined variability across models to assess patterns of convergence and divergence.

Findings: This study treats violence as a strategically chosen proxy through which broader belief-formation dynamics can be observed. Violence is not the focus of the study in itself; rather, it serves as a lens for broader analysis, enabling a structured investigation of how LLMs operationalize ambiguous moral constructs, negotiate conceptual boundaries, and transform plural human interpretations into singular outputs. More broadly, the findings contribute to ongoing debates about the epistemic role of conversational AI in shaping everyday interpretations of harm, responsibility, and social norms, highlighting the importance of transparency and critical engagement as these systems increasingly mediate public reasoning.
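The comparison the Methods paragraph describes — human judgments versus model labels per scenario, aggregated by thematic domain — can be sketched in a few lines. This is not the paper's actual analysis code; the scenarios, domain names, and ratings below are invented placeholders, and the divergence metric (absolute gap between the human consensus rate and a model's binary label) is one plausible choice among many.

```python
# Hypothetical sketch of a human-vs-LLM comparison on ambiguous scenarios.
# All data is invented for illustration; the paper's 22 scenarios, human
# responses, and model outputs are not reproduced here.

from collections import defaultdict

# scenario -> (thematic domain, human judgments, model labels)
# 1 = "counts as violence", 0 = "does not count as violence"
scenarios = {
    "shoving a stranger":   ("physical",   [1, 1, 1, 0], {"model_a": 1, "model_b": 1}),
    "excluding a coworker": ("relational", [1, 0, 0, 0], {"model_a": 0, "model_b": 1}),
    "burning a flag":       ("symbolic",   [0, 1, 0, 0], {"model_a": 0, "model_b": 0}),
}

def human_consensus(ratings):
    """Fraction of human respondents labeling the scenario as violence."""
    return sum(ratings) / len(ratings)

def divergence_by_domain(data):
    """Mean absolute gap between human consensus and each model's binary
    label, aggregated per thematic domain."""
    gaps = defaultdict(list)
    for domain, humans, models in data.values():
        consensus = human_consensus(humans)
        for label in models.values():
            gaps[domain].append(abs(label - consensus))
    return {domain: sum(g) / len(g) for domain, g in gaps.items()}

if __name__ == "__main__":
    for domain, gap in sorted(divergence_by_domain(scenarios).items()):
        print(f"{domain}: mean human-model gap = {gap:.2f}")
```

A larger gap in a domain (e.g. relational or symbolic scenarios) would indicate that models diverge from the human consensus precisely where humans themselves disagree — the kind of convergence/divergence pattern the study examines.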

