One bad website is all it takes to permanently ruin an AI assistant's brain while it's just out there surfing the web for you.
April 6, 2026
Original Paper
Poison Once, Exploit Forever: Environment-Injected Memory Poisoning Attacks on Web Agents
arXiv · 2604.02623
The Takeaway
This introduces a new kind of 'memory poisoning' where a one-time encounter creates a persistent security flaw in the agent's future behavior across different sites. It highlights the extreme risks of allowing AI agents to browse the open web without sandboxed memory.
From the abstract
Memory makes LLM-based web agents personalized, powerful, yet exploitable. By storing past interactions to personalize future tasks, agents inadvertently create a persistent attack surface that spans websites and sessions. While existing security research on memory assumes attackers can directly inject into memory storage or exploit shared memory across users, we present a more realistic threat model: contamination through environmental observation alone. We introduce Environment-injected Trajec