Economics · Paradigm Challenge

Artificial intelligence breaks the logic of nuclear war by letting leaders blame machines for an attack.

April 20, 2026

Original Paper

Algorithmic Plausible Deniability: The Structural Erosion of Deterrence in the Age of Artificial Intelligence

SSRN · 6482039

The Takeaway

International stability relies on the clear threat of retaliation between nations. Delegating military decisions to algorithms creates a loophole the author calls algorithmic plausible deniability: if an AI system triggers an offensive action, the state can claim it was an unintended technical glitch. This uncertainty makes it impossible for an opponent to know whether it was attacked on purpose. Automated defense was once expected to make the world safer and more predictable. Instead, the presence of AI makes the global stage more unstable, because the accountability that underpins credible retaliation has vanished.

From the abstract

Classical deterrence theory assumes that hostile actions can be attributed to identifiable actors whose intentions can be inferred and who can be credibly threatened with punishment. This article introduces Algorithmic Plausible Deniability (APD): a structural condition in which the delegation of decisions to artificial intelligence systems generates persistent uncertainty over authorship and intent, thereby weakening deterrence even when states act rationally and prefer stability. Unlike s