AI & ML Practical Magic

A 'default-no' architecture makes it physically impossible for an AI agent to act without independent verification.

April 23, 2026

Original Paper

Default-No: Contract-Gated Execution as Structural Governance for Autonomous AI Agents

SSRN · 6315058

The Takeaway

AI safety usually relies on guardrails that try to persuade the model to behave. This system instead moves the authority to a state manager that the agent cannot influence or observe. The agent can propose an action, but the system will not execute it until specific contract conditions are independently verified. Safety becomes a structural property of the execution path rather than a negotiation with the model. Even a hijacked agent would be unable to cause harm, because the system is locked by default. That makes it a strong structural defense against autonomous agents going rogue.
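The gating pattern described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the names (`ContractGate`, `Action`, `authorize`) and the predicate-based contract format are assumptions made for the example. The key property is that the executor consults the gate, not the agent, and any action without an explicitly registered, passing contract is denied.

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Action:
    """An action proposed by the agent (hypothetical schema)."""
    name: str
    params: dict = field(default_factory=dict)


class ContractGate:
    """Default-no gate: every action is denied unless an explicitly
    registered contract predicate approves its parameters."""

    def __init__(self):
        self._contracts = {}  # action name -> predicate over params

    def register(self, action_name, predicate):
        self._contracts[action_name] = predicate

    def authorize(self, action: Action) -> bool:
        check = self._contracts.get(action.name)
        # No contract registered, or contract fails -> default answer is "no".
        return check is not None and bool(check(action.params))


def execute(gate: ContractGate, action: Action) -> str:
    # The executor asks the gate, never the agent, whether to proceed.
    if not gate.authorize(action):
        return f"DENIED: {action.name}"
    return f"EXECUTED: {action.name}"


gate = ContractGate()
gate.register("transfer", lambda p: p.get("amount", 0) <= 100)

print(execute(gate, Action("transfer", {"amount": 50})))     # within contract
print(execute(gate, Action("transfer", {"amount": 5000})))   # violates contract
print(execute(gate, Action("delete_database")))              # no contract at all
```

Note the inversion relative to a default-yes guardrail: there is no deny-list to bypass. An unknown or novel action fails closed because nothing vouches for it.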

From the abstract

Autonomous AI agents are being deployed in domains where their decisions carry legal, financial, and operational consequences. Every major deployment framework for these agents shares a structural deficiency: the system operates on a default-yes permission model. The agent decides, then acts, then logs. Guardrails intercept after the decision has been made. Audit trails reside in storage systems that the executing agent or its operators can modify. Knowledge access is filtered from results after