Supervisory Primacy
The architectural form of human authority in adaptive AI systems.
Abstract
Human-in-the-loop (HITL) frameworks for AI systems are increasingly treated as policy-level commitments ("the human can always override") when their operational effectiveness requires that they be architectural properties of the system itself. A policy-level HITL commitment is disabled by a configuration flag; an architectural HITL property is disabled only by rebuilding from source. This paper introduces Supervisory Primacy as a design principle for adaptive AI orchestration systems: the human-authoritative state is the architectural default for consequential actions; the AI proposes and the human disposes; every consequential action carries a mandatory authorization audit trail; and the eight non-negotiable operational integrity constraints are implemented as enforceable safeguards in deployment builds rather than as runtime policy. Supervisory Primacy is consistent with the human oversight requirements of EU AI Act Article 14 [1], operates within the levels-of-automation taxonomy of Parasuraman, Sheridan, and Wickens [2], and is illustrated using MILO, a patent-pending adaptive AI orchestrator [3] submitted to the U.S. Department of Energy under the Genesis Mission [4]. The contribution is architectural: Supervisory Primacy is not a new HITL taxonomy; it is the structural design that makes HITL load-bearing rather than retrofittable.
Plain Language Summary
When an AI system operates in a setting where mistakes have severe consequences (a power grid control room, a nuclear facility, an operating room, an autonomous robotic line), the rule that "a human can always override the AI" must be more than a promise. It must be built into the architecture, so that disabling the override would require rebuilding the system from source, not toggling a setting. This paper names that architectural property Supervisory Primacy: the human-authoritative state is the default for any consequential action; the AI proposes and the human disposes; and every consequential action carries a mandatory audit trail by architecture, not by policy. The principle does not invent human oversight; it specifies the structural form that makes existing human-oversight regulations operationally effective.
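The propose-and-dispose pattern described above can be sketched as a default-deny authorization gate. This is an illustrative outline only, not the MILO implementation; every name here (`SupervisoryGate`, `Proposal`, `AuthorizationRecord`, `dispose`) is hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum, auto
from typing import Callable


class Decision(Enum):
    APPROVED = auto()
    REJECTED = auto()


@dataclass(frozen=True)
class Proposal:
    """An action the AI proposes; the AI cannot execute it directly."""
    action_id: str
    description: str


@dataclass(frozen=True)
class AuthorizationRecord:
    """Audit entry created for every consequential decision."""
    proposal: Proposal
    decision: Decision
    supervisor: str
    timestamp: str


class SupervisoryGate:
    """Default-deny command path: nothing executes without a human decision.

    Deliberately, there is no constructor flag or setting that bypasses
    the gate; removing it means changing this class and rebuilding.
    """

    def __init__(self, audit_sink: Callable[[AuthorizationRecord], None]):
        self._audit_sink = audit_sink

    def dispose(self, proposal: Proposal, supervisor: str,
                approved: bool, execute: Callable[[], None]) -> Decision:
        decision = Decision.APPROVED if approved else Decision.REJECTED
        record = AuthorizationRecord(
            proposal, decision, supervisor,
            datetime.now(timezone.utc).isoformat())
        self._audit_sink(record)   # audit record emitted before any action
        if decision is Decision.APPROVED:
            execute()              # the action runs only after approval
        return decision
```

The design choice to put the audit emission and the execution in the same method body, with no alternate entry point, is what makes the override and the trail structural rather than optional.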
Key takeaways
- Make human authority a structural property of the command path, not a runtime preference.
- Persist consequential decisions before action so oversight can be audited after the fact.
- Compose human-in-the-loop control with functional-safety and AI-risk-management frameworks without claiming certification against them.
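The second takeaway, persisting the decision before the action, can be sketched as a write-ahead audit log. Again a minimal sketch under stated assumptions (a JSON-lines file as the durable store; `WriteAheadAuditLog` and `record_then_run` are hypothetical names, not part of any described system):

```python
import json
import os
from typing import Callable


class WriteAheadAuditLog:
    """Append-only decision log flushed to durable storage *before* the
    approved action is dispatched, so oversight can be audited even if
    the process dies mid-action."""

    def __init__(self, path: str):
        self._path = path

    def record_then_run(self, entry: dict, execute: Callable[[], None]) -> None:
        # Persist the authorization record first; only then dispatch.
        with open(self._path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")
            f.flush()
            os.fsync(f.fileno())   # force the record to stable storage
        execute()
```

Ordering is the point: a log written after the action can be lost with the action's failure, while a write-ahead record survives it.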
Concept map
Sources to follow
Use these official references as starting points for the standards context in the full paper.