Paradigm Challenge

The idea that military AI is "precise" is largely a legal fiction used to circumvent international law.

March 13, 2026

Original Paper

Bad Algorithms and the Epistemic and Discursive Powers of Military AI

Henning Lahmann

SSRN · 6392939

The Takeaway

While most policy debates focus on making military AI "less biased," this paper argues that such systems are inherently incapable of reliably classifying individual humans as targets. It contends that the discourse of AI "accuracy" functions primarily as a strategy to make legally questionable deployments appear cautious and technologically necessary.

From the abstract

The article critiques the 'functionality assumption' regarding military AI and exposes the epistemic and discursive powers at states' disposal to rationalise the use of such models for targeting purposes. The development of so-called artificial intelligence-enabled decision support systems (AI-DSS) for military operations has recently come into focus in the context of Israel's onslaught on Gaza. However, states' and companies' claims concerning their allegedly incredible capabilities have largely […]