Scholarship

Research and working papers addressing AI governance, institutional liability, fiduciary responsibility, and the epistemic conditions of agency, published to support professional scrutiny and policy dialogue.

Authority without Authorship: Delegation Thresholds in Agentic AI Systems

by Peter Kahl; Preprint (2026)

This article introduces the Delegation Threshold to clarify when AI and digital infrastructure begin to exercise governance-relevant authority. When discretionary functions are embedded into persistent, non-optional systems, clients and users organise decisions around their outputs. Responsibility therefore shifts upstream to architecture, integration, and oversight—making governance a design property, not a downstream control.

From Optional Safety to Architectural Responsibility: AI Governance after Models

by Peter Kahl; Preprint (2026)

Argues that as AI shifts from episodic models to persistent, embedded infrastructure, governance can no longer remain an optional overlay. As operational speed reclassifies delay as risk, accountability diffuses and institutions are exposed to structural liability. The article identifies the minimal design conditions required for legitimate authority: contestability, revisability, halt-capacity, and traceability.

Why Most ‘Agentic AI’ Is Not Agentic: Continuity, Authorship, and the Structural Conditions of Agency

by Peter Kahl; Preprint (2026)

Most systems described as ‘agentic AI’ are not agents but optimisers embedded in governance architectures. Agency requires continuity of evaluative authority and authorship of standards, conditions neutralised by resettability, external alignment, and liability design. The real risk is misattributed agency, which displaces responsibility and obscures institutional accountability.

Episodic Agency and the Epistemic Conditions of Responsibility

by Peter Kahl; Preprint (2026)

Reframes agency as episodic rather than continuous, showing that responsibility arises when inherited frameworks fail and actors must author the normative terms of action. For AI governance, the implication is diagnostic: treating optimisation systems as agents mislocates accountability. Infrastructure requires identifiable authorship; without it, liability, oversight, and institutional responsibility become structurally unstable.

The Frame-Stability Problem in Decision-Theoretic Accounts of Agency

by Peter Kahl; Preprint (2026)

Identifies the ‘frame-stability problem’: decision and planning theories assume a unified evaluator but do not explain why one must persist. Argues that agency arises when systems must preserve evaluative authority under diachronic uncertainty through ‘unified arbitration’. The implication is that most AI optimisers coordinate behaviour without owning commitments, which clarifies the limits of accountability in infrastructure design.

Distributed Cognition as Epistemic Infrastructure: A Taxonomy of Collective Epistemic Systems

by Peter Kahl; Preprint (2026)

Challenges the assumption that distributed cognition is inherently reliable, arguing that collective systems are engineered epistemic infrastructures whose performance depends on governance architecture. Develops a taxonomy spanning markets, platforms, open-source production, deliberative processes, and institutions, showing how closure, incentives, and contestability shape failure modes, directly informing AI governance and infrastructure design.

From Movable Type to Machine Reasoning: Media, Artificial Intelligence, and the Transformation of Bounded Cognition

by Peter Kahl; Preprint (2026)

Reinterprets media technologies as cognitive infrastructure that reshapes how bounded human reasoning operates. Argues that AI marks a categorical shift by intervening inside deliberation—compressing representations, expanding counterfactual exploration, and lowering revision costs. Introduces a typology of scaffolds, crutches, and substitutes to guide governance, highlighting revisability as a design condition for responsible deployment.

Epistemic Humility as Fiduciary Obligation: Entrusted Discretion and Responsibility for Belief

by Peter Kahl; Preprint (2026)

Argues that fiduciary law already embeds epistemic duties governing how decision-makers form and close beliefs under conditions of entrusted discretion. Epistemic discipline is a condition of the legality of authority, not merely a virtue: failures can constitute fiduciary wrongs even without harm. For AI governance, the model clarifies why operators of decision infrastructure bear responsibility for the belief-structures that shape delegated judgment.

The Third Enclosure Movement: Rethinking AI Regulation through a Taxonomy of Restricted Knowledge

by Peter Kahl; Working paper (2025)

Defines the ‘Third Enclosure Movement’ as the consolidation of control over AI datasets, models, compute, and evaluation through legal, technical, epistemic, and sovereign restrictions. Introduces a taxonomy of restricted knowledge and argues for partial-commons reforms that preserve innovation while clarifying liability, strengthening auditability, and protecting shared epistemic agency in AI governance.