The Duck Criterion in Artificial Intelligence: Evaluative Structure and the Attribution of Agency, Authority, and Responsibility

by Peter Kahl; Lex et Ratio Ltd Preprint (2026)

Abstract

This article introduces a structural account of agency, authority, and responsibility in artificial and distributed cognitive systems. It argues that current debates conflate behavioural performance with evaluative control. The article defines evaluative structure in terms of the source and revisability of evaluative standards, and formulates the Duck Criterion: a system qualifies as epistemically generative only if it participates in determining and revising those standards. Systems that optimise within fixed criteria are therefore maintenance systems, regardless of their behavioural sophistication. Contemporary AI systems—including those trained with reinforcement learning from human feedback and preference-based alignment—operate under exogenous evaluative structures and do not satisfy the conditions for epistemic authority. Responsibility accordingly tracks control over evaluative structure rather than output or causal contribution. The framework clarifies the limits of behavioural tests and provides a unified basis for analysing agency and accountability across artificial and socio-technical systems.

Keywords

  • AI governance
  • epistemic agency
  • epistemic authority
  • epistemic generativity
  • evaluative structure
  • artificial intelligence
  • AI alignment
  • reinforcement learning
  • RLHF
  • distributed cognition
  • extended cognition
  • responsibility
  • accountability
  • philosophy of AI
  • social epistemology