You don’t even need a hacker to leak your data; your AI assistant might just blab your secrets to another user during a regular chat.
April 3, 2026
Original Paper
No Attacker Needed: Unintentional Cross-User Contamination in Shared-State LLM Agents
arXiv · 2604.01350
The Takeaway
In shared AI systems, a normal interaction from one person can “poison” the assistant’s memory for everyone else. The result is a security gap where data contamination happens by default rather than by attack.
From the abstract
LLM-based agents increasingly operate across repeated sessions, maintaining task states to ensure continuity. In many deployments, a single agent serves multiple users within a team or organization, reusing a shared knowledge layer across user identities. This shared persistence expands the failure surface: information that is locally valid for one user can silently degrade another user’s outcome when the agent reapplies it without regard for scope. We refer to this failure mode as unintentional cross-user contamination.
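To make the failure mode concrete, here is a minimal sketch (hypothetical, not from the paper) of an agent whose memory layer is shared across user identities. The class and field names are invented for illustration; the point is only that a fact stored during one user’s session is reapplied to another user’s query because nothing scopes it:

```python
class SharedStateAgent:
    """Toy agent: one persistent memory store reused across all users."""

    def __init__(self):
        # No per-user scoping -- this shared dict is the failure surface.
        self.shared_memory = {}

    def remember(self, user, key, value):
        # The fact is stored globally, even though it is only
        # locally valid for `user`.
        self.shared_memory[key] = value

    def answer(self, user, key):
        # Reapplies stored information without regard for scope.
        return self.shared_memory.get(key)


agent = SharedStateAgent()
# Alice's ordinary interaction persists a fact valid only for her...
agent.remember("alice", "deploy_target", "alice-staging-cluster")
# ...which then contaminates Bob's session. No attacker involved.
print(agent.answer("bob", "deploy_target"))  # alice-staging-cluster
```

A straightforward mitigation in this toy setup is to key the memory by `(user, key)` instead of `key` alone, so that one user’s locally valid facts never reach another user’s session.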