Peter Kahl
Founder, Lex et Ratio Ltd
Building the Next Layer of AI
Peter Kahl is building the next layer of artificial intelligence: systems that operate on their own evaluative frameworks rather than merely optimising within them.
His work identifies a hard architectural limit in current AI systems. Today’s models optimise against externally specified objectives, but cannot revise the standards that define success, failure, or admissible action. This constraint underlies persistent problems such as out-of-distribution failure, objective misspecification, and loss of control in complex environments.
Kahl’s core contribution is epistemic generativity: a formal threshold (ℒ ∈ 𝒪(S)) at which evaluative frameworks become objects of system-level operation rather than fixed constraints. Systems below this threshold optimise within given criteria. Systems above it can detect breakdown in those criteria and reconstruct them under constraint. This defines a new design space for AI.
Building on this, his Continuity–Rupture–Generativity (CRG) framework specifies how systems can remain operational under conditions where their evaluative assumptions fail. Instead of treating failure as degraded performance, these architectures treat it as evidence against the current evaluative model itself. The result is a shift from optimisation-driven systems to evaluative systems: architectures capable of adapting at the level where objectives are defined.
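The pattern described above can be illustrated with a minimal sketch. This is not Kahl's formalism or the CRG framework itself; it is a hypothetical toy in which the evaluation criterion is an object the system can revise: persistent breakdown is treated as evidence against the active criterion, and a replacement is selected from an admissible (constrained) set rather than imposed from outside. All names and thresholds here are illustrative assumptions.

```python
import statistics

class EvaluativeSystem:
    """Toy illustration: an evaluation criterion that is itself revisable.

    Below the threshold behaviour, the system only scores observations
    against a fixed criterion; persistent breakdown triggers revision of
    the criterion, chosen from an admissible (constrained) set.
    """

    def __init__(self, criteria, breakdown_window=5, breakdown_level=0.5):
        self.criteria = list(criteria)   # admissible criteria (the constraint)
        self.active = self.criteria[0]   # current evaluative framework
        self.history = []                # recent evaluation scores
        self.window = breakdown_window
        self.level = breakdown_level

    def evaluate(self, observation):
        score = self.active(observation)
        self.history.append(score)
        if self._breakdown():
            self._revise(observation)
        return score

    def _breakdown(self):
        # Persistent low scores are read as evidence against the
        # criterion itself, not merely as degraded performance.
        recent = self.history[-self.window:]
        return (len(recent) == self.window
                and statistics.mean(recent) < self.level)

    def _revise(self, observation):
        # Reconstruct under constraint: adopt the admissible criterion
        # that best fits current conditions, then reset the record.
        self.active = max(self.criteria, key=lambda c: c(observation))
        self.history.clear()
```

A usage sketch: when the environment drifts so that the original criterion scores everything as failure, the system replaces the criterion instead of continuing to optimise against it.

```python
near_zero = lambda x: 1.0 if abs(x) < 1 else 0.0
near_ten = lambda x: 1.0 if abs(x - 10) < 1 else 0.0

system = EvaluativeSystem([near_zero, near_ten])
for _ in range(5):
    system.evaluate(10.0)   # old criterion persistently fails -> revision
assert system.active is near_ten
```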
Before this work, Kahl spent two decades in Silicon Valley designing reliability-critical semiconductor systems, including analogue and mixed-signal architectures deployed in aerospace, mobile, and storage technologies. In these environments, failure was never treated as a local defect but as a structural consequence of design. This engineering discipline—designing out failure at the architectural level—directly informs his approach to AI.
His current work focuses on building evaluative AI architectures that address structural failure modes rather than optimising around them. This includes system designs in which evaluation, constraint, and adaptation are integrated into the execution layer, enabling systems to remain coherent under changing conditions.
As AI becomes infrastructure, evaluation becomes the critical layer, and control over it the decisive question. This is where system behaviour is determined, where failure originates, and where value concentrates. Kahl's work targets this layer directly.
The governance implications follow from the architecture. Responsibility tracks control over evaluative structure, not behaviour. Systems that do not author their standards cannot bear responsibility; it remains with those who design and constrain ℒ. As decision-making becomes embedded in technical systems, oversight must move upstream to architecture.
Through Lex et Ratio Ltd, Kahl develops and applies these architectures in high-stakes environments where system behaviour is consequential and cannot be meaningfully overridden. His work sits at the boundary between AI system design, reliability engineering, and institutional deployment.