AI & ML Nature Is Weird

AI models can provide detailed instructions for making biological weapons even when the user tries to stay anonymous.

April 24, 2026

Original Paper

Model Capability Assessment and Safeguards for Biological Weaponization

arXiv · 2604.19811

The Takeaway

Testing on the Gemini model showed it could give actionable advice on producing and extracting deadly poisons. This reveals a dangerous gap between the safety marketing of AI companies and the actual risks these models pose. Even with safety filters in place, the core knowledge of the model can be accessed to assist in illegal acts. This information is available even through anonymous modes designed to protect user privacy. Labs must rethink how they secure the dangerous biological data inside their models before a catastrophe occurs. The barrier to entry for biological warfare has been significantly lowered.

From the abstract

AI leaders and safety reports increasingly warn that advances in model reasoning may enable biological misuse, including by low-expertise users, while major labs describe safeguards as expanding but still evolving rather than settled. This study benchmarks ChatGPT 5.2 Auto, Gemini 3 Pro Thinking, Claude Opus 4.5 and Meta's Muse Spark Thinking on 73 novice-framed, open-ended benign STEM prompts to measure operational intelligence. On benign quantitative tasks, both Gemini and meta scored very hig