From Optional Safety to Architectural Responsibility: AI Governance after Models

by Peter Kahl. Lex et Ratio Ltd Preprint (2026)

Abstract

As artificial intelligence systems move beyond episodic model deployment toward agentic and physically embedded architectures, prevailing approaches to AI safety and governance encounter a structural limit. This article argues that governance can no longer be treated as an optional policy overlay once AI systems operate persistently, at speed, and under conditions of dependency and non-exit. Drawing on recent shifts in defence and industrial AI deployment, it diagnoses a transformation in justificatory structure in which delay itself is reclassified as systemic risk, while failure and misalignment are reframed as tolerable, local costs. The central claim is that this shift renders existing governance frameworks inadequate not because they lack ethical ambition, but because they presuppose conditions of reversibility, episodic oversight, and downstream correction that no longer hold for agentic and physically embedded systems. Building on a structural account of agency, an analysis of distributed cognition as socio-technical infrastructure, and fiduciary theories of entrusted discretion, the article reframes AI governance as an architectural condition of legitimate authority. It concludes by identifying minimal design-level governance conditions—contestability, revisability, halt-capacity, and responsibility traceability—required for legitimacy and accountability to remain intelligible under velocity-driven deployment.

Keywords

  • AI governance
  • infrastructure
  • liability
  • contestability
  • traceability
  • auditability
  • oversight
