Who Is Responsible When AI Goes Wrong? Why “Is AI Conscious?” Is the Wrong Question for Governance

by Peter Kahl; Lex et Ratio Ltd Preprint (2026)

Abstract

This paper argues that the central question in AI governance is not whether artificial systems are conscious, but who specifies, authorises, and maintains the evaluative standards under which their outputs acquire institutional force. The consciousness question is philosophically interesting but governance-misdirecting. It displaces attention from the upstream architecture of delegated discretion and from the human and organisational actors who define admissibility, optimisation criteria, and operational scope. The paper develops a structural account of responsibility centred on evaluative authorship, epistemic generativity, and normative attribution across artificial and distributed systems. On this view, responsibility does not vanish because a system is complex, adaptive, or opaque. It remains traceable to those who design, approve, deploy, and govern the evaluative framework within which optimisation proceeds. The result is a practical governance model for boards, regulators, counsel, and institutional decision-makers: responsibility attaches not to metaphysical agency as such, but to the authorised construction and maintenance of decision architecture.

Keywords

  • AI governance
  • responsibility attribution
  • evaluative authorship
  • normative attribution
  • epistemic agency
  • epistemic generativity
  • distributed cognition
  • delegation thresholds
  • mandate design
  • fiduciary governance
