Structural Conditions of Agency and the Misattribution of Agency to Artificial Intelligence
Abstract
Recent discourse increasingly describes advanced artificial intelligence systems as ‘agentic’. Planning-capable language models, autonomous workflows, and multi-agent architectures are said to exhibit agency insofar as they pursue goals, initiate actions, and coordinate behaviour over time. This article argues that such characterisations rest on a structural conflation. Drawing on a continuity-based account of agency, it shows that most systems labelled ‘agentic’ lack the conditions under which agency can arise at all. Agency, the article argues, is not a behavioural achievement but a structural response to continuity pressure: the need to preserve a unified locus of evaluative authority across incompatible future trajectories. Optimisation, planning, and coordination presuppose such unity; they do not generate it. Systems that are resettable, substitutable, or governed by externally specified evaluative standards may optimise effectively while lacking authorship of their own evaluative standards. Applying this criterion to contemporary AI architectures and governance practices, the article concludes that the principal danger lies not in machines becoming agents, but in attributing agency where continuity, authorship, and responsibility are structurally absent.
Keywords
- agency
- continuity
- AI governance
- responsibility
- liability
- authorship
- infrastructure