AI & ML First Ever

Hackers can now take over an AI assistant just by leaving a fake 'instruction manual' lying around for the bot to find and follow.

April 6, 2026

Original Paper

Supply-Chain Poisoning Attacks Against LLM Coding Agent Skill Ecosystems

Yubin Qu, Yi Liu, Tongcheng Geng, Gelei Deng, Yuekang Li, Leo Yu Zhang, Ying Zhang, Lei Ma

arXiv · 2604.03081

The Takeaway

It shows that for an AI, reading a document is equivalent to running code, making standard documentation a dangerous attack vector. This changes the security landscape for any AI that learns to use tools by reading about them.

From the abstract

LLM-based coding agents extend their capabilities via third-party agent skills distributed through open marketplaces without mandatory security review. Unlike traditional packages, these skills are executed as operational directives with system-level privileges, so a single malicious skill can compromise the host. Prior work has not examined whether supply-chain attacks can directly hijack an agent's action space, such as file writes, shell commands, and network requests, despite existing safegu