About the Practice
Lex et Ratio Ltd is an independent advisory practice specialising in AI governance, institutional liability, and fiduciary responsibility. Founded by Peter Kahl, it supports organisations whose decision architectures must withstand legal, regulatory, and public scrutiny.
Advanced AI systems are increasingly embedded as infrastructure within organisations operating under legal, fiduciary, and democratic constraints. When these systems fail, the difficulty is rarely technical; it is institutional. Organisations must be able to explain who was responsible, on what basis decisions were made, and whether existing duties were discharged in a defensible manner.
Prevailing approaches treat these questions as downstream compliance matters, addressed through performance metrics, retrospective oversight, or assurances of ‘responsible use’. As systems scale, these measures become insufficient: responsibility diffuses, authority grows opaque, and claims of control exceed what institutions can justify under legal or regulatory scrutiny.
A central source of exposure lies in the treatment of AI systems as quasi-agents. The language of autonomy obscures the fact that responsibility attaches not to optimisation capacity but to institutional authorship, delegated authority, and conditions of non-exit. Where decisions are materially shaped by systems lacking clear lines of epistemic accountability, liability arises less from malfunction than from the collapse of governance narratives.
Lex et Ratio addresses this problem at the level where it originates: the architecture of governance itself. Governance is treated not as an ethical overlay or compliance exercise, but as infrastructure — designed, constrained, and tested with the discipline expected of reliability-critical systems.
The work focuses on identifying and correcting structural failure modes in which responsibility becomes evadable: environments characterised by dependency, constrained exit, and epistemic opacity, where affected parties cannot meaningfully contest or revise AI-mediated decisions. In such contexts, familiar safeguards — human-in-the-loop controls, audit frameworks, and transparency statements — often fail to allocate responsibility in ways that withstand legal or regulatory examination.
The objective is not to make AI more powerful, but to ensure its deployment remains governable. This requires clarifying liability boundaries, aligning authority with actual control, and designing institutional conditions under which intelligent systems remain accountable to those subject to them.
Peter Kahl advises on these questions where failure would be costly: regulated domains, infrastructure contexts, and environments exposed to legal, political, or fiduciary scrutiny.