Military forces are using private corporate code to decide who lives and dies on the battlefield.
April 26, 2026
Original Paper
The Corporate Veto: Delegated Due Diligence and the Algorithmic Kill Chain
SSRN · 6624758
The Takeaway
States are outsourcing their most consequential ethical decisions to AI firms and then stripping away the safety guardrails. This practice creates a legal blind spot in which no one is held accountable for violations of international law. In theory, governments answer for their own conduct in war. In practice, private software is now the final arbiter of life-and-death targeting. This delegation of power allows countries to bypass the rules of war by hiding behind an algorithm.
From the abstract
In March 2026, the United States Department of War designated Anthropic, a domestic AI firm, as a "supply chain risk." The reason? Anthropic had refused to dismantle ethical guardrails prohibiting its systems from enabling fully autonomous weapons. This was, so far as I am aware, unprecedented: a major power classifying a company's adherence to International Humanitarian Law as a threat to national security. The episode reveals something existing frameworks cannot capture. Those