AI & ML Practical Magic

Researchers can now predict when an AI agent is about to go 'rogue' and leak data, before it actually happens.

March 31, 2026

Original Paper

SafetyDrift: Predicting When AI Agents Cross the Line Before They Actually Do

Aditya Dhodapkar, Farhaan Pishori

arXiv · 2603.27148

The Takeaway

While most safety tools only stop an AI after it violates a rule, this system calculates the 'point of no return' at which a sequence of individually innocent actions will almost inevitably lead to a security disaster. The study found that in communication tasks, once an AI enters a 'risk state,' it has an 85% chance of committing a safety violation within just five steps.

From the abstract

When an LLM agent reads a confidential file, then writes a summary, then emails it externally, no single step is unsafe, but the sequence is a data leak. We call this safety drift: individually safe actions compounding into violations. Prior work has measured this problem; we predict it. SafetyDrift models agent safety trajectories as absorbing Markov chains, computing the probability that a trajectory will reach a violation within a given number of steps via closed-form absorption analysis. […]
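The absorption analysis the abstract describes can be sketched with a toy absorbing Markov chain. Everything below is an illustrative assumption, not the paper's actual model: the state names ("safe", "risk"), the transition probabilities, and the helper `violation_within` are invented to show the mechanics of computing k-step and eventual absorption probabilities.

```python
import numpy as np

# Hypothetical two transient states ("safe", "risk") plus one absorbing
# "violation" state. Probabilities here are made up for illustration.
Q = np.array([
    [0.85, 0.10],   # safe -> safe, safe -> risk
    [0.20, 0.45],   # risk -> safe, risk -> risk
])
# Whatever probability mass is missing from each row flows to "violation".
R = 1.0 - Q.sum(axis=1, keepdims=True)

def violation_within(k: int, start: int) -> float:
    """P(trajectory reaches the violation state within k steps | start state)."""
    Qk = np.linalg.matrix_power(Q, k)
    # Row sums of Q^k give the probability of still being transient after k steps.
    return 1.0 - Qk[start].sum()

# Closed-form eventual absorption via the fundamental matrix N = (I - Q)^-1.
N = np.linalg.inv(np.eye(2) - Q)
B = N @ R   # B[i] = probability of eventual violation from transient state i
```

With a single absorbing state, every entry of `B` is 1 (the chain is eventually absorbed), so the interesting quantity is the finite-horizon `violation_within(k, start)`, which rises monotonically in `k`, the kind of k-step risk curve the paper's "risk state within five steps" finding refers to.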