SeriesFusion
Science, curated & edited by AI

A global ban on superintelligent AI might be the most profitable move for a country's own survival.

Game-theoretic analysis suggests that nations may choose a moratorium once the risk of losing control of the technology outweighs its expected benefits. Rational self-interest, rather than purely moral concerns or international pressure, drives this decision. Many observers believe a geopolitical arms race for AI dominance is an unavoidable path toward disaster. This framework shows instead that countries will stop racing if the threat to their internal stability becomes too great, and that global cooperation on AI safety may emerge naturally from states' desire to maintain their own power.
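The trade-off can be sketched as a simple symmetric two-player game. The payoff function, parameter values, and the assumption that each racing state independently adds to the expected catastrophe cost are illustrative choices for this sketch, not the paper's actual model:

```python
from itertools import product

def payoff(me, other, benefit=10.0, p_loss=0.5, cost=6.0):
    """Illustrative payoff for one state (parameters are made up).
    Racing yields the supremacy benefit (shared if both race), while the
    expected loss-of-control cost scales with how many states are racing
    and is borne by both, racing or not."""
    risk = p_loss * cost * [me, other].count("race")
    gain = 0.0
    if me == "race":
        gain = benefit / 2 if other == "race" else benefit
    return gain - risk

def pure_nash(**params):
    """Return all pure-strategy Nash equilibria of the symmetric 2x2 game."""
    acts = ("race", "moratorium")
    equilibria = []
    for a, b in product(acts, acts):
        a_best = all(payoff(a, b, **params) >= payoff(x, b, **params) for x in acts)
        b_best = all(payoff(b, a, **params) >= payoff(x, a, **params) for x in acts)
        if a_best and b_best:
            equilibria.append((a, b))
    return equilibria

# Low perceived catastrophe cost: racing is the unique equilibrium.
print(pure_nash(cost=6.0))   # [('race', 'race')]
# High perceived cost of losing control: a mutual moratorium becomes
# the unique equilibrium, purely out of self-interest.
print(pure_nash(cost=25.0))  # [('moratorium', 'moratorium')]
```

With these toy numbers, raising only the loss-of-control cost flips the game from a race-dominant structure to one where halting is each state's best response, which mirrors the paper's qualitative claim.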

Original Paper

Are we Doomed to an AI Race? Why Self-Interest Could Drive Countries Towards a Moratorium on Superintelligence

Edward Roussel, Lode Lauwaert, Torben Swoboda, Grant Ramsey, Risto Uuk, Leonard Dung

arXiv  ·  2605.01297

This paper uses game theory to argue that, contrary to the prevailing view, a moratorium on Artificial Superintelligence (ASI) can be in a state's self-interest. By formalizing strategic interactions between geopolitical superpowers, we model the trade-off between the benefits of technological supremacy and the catastrophic risks of uncontrolled ASI. The analysis reveals that once the perceived cost of losing control becomes sufficiently large relative to the other parameters, it becomes in each state's self-interest to support a moratorium rather than continue the race.