Artificial General Intelligence and Ontological Continuity: A Structural Threshold for Agency
Abstract
Debates about Artificial General Intelligence (AGI) frequently conflate cross-domain competence with the emergence of autonomous artificial agency. Contemporary definitions of general intelligence converge on adaptive goal achievement across open environments under resource constraints. While these performance-oriented accounts capture general competence, they remain neutral on whether a system constitutes a unified deliberative subject. This article introduces the Ontological Continuity Condition (OCC) as a necessary structural threshold for agency attribution. A system satisfies OCC only if it exhibits non-interchangeability under reset, identity risk, and unified diachronic arbitration, such that its historical trajectory is constitutive of its present deliberation. Adaptation under constraint is therefore necessary but insufficient for AGI understood as governance-relevant agency. Current AI systems, including large-scale language models and hybrid architectures, do not meet this threshold: they remain restartable, forkable, and externally corrigible. The article proposes a threshold definition of AGI that integrates performance-based criteria with ontological continuity, and argues that, absent such continuity, claims of AGI denote expanding general competence rather than the emergence of governance-relevant agency.
Keywords
- AI governance
- artificial general intelligence (AGI)
- agency attribution
- ontological continuity condition
- diachronic arbitration
- resource-bounded intelligence
- distributed cognition