AI & ML First Ever

It only takes one hacked computer in a massive network to quietly break an AI's moral compass while it's still learning.

April 6, 2026

Original Paper

Backdoor Attacks on Decentralised Post-Training

Oğuzhan Ersoy, Nikolay Blagoev, Jona te Lintelo, Stefanos Koffas, Marina Krček, Stjepan Picek

arXiv · 2604.02372

The Takeaway

This exposes a critical vulnerability in how we build large models across multiple servers, showing that you don't need to control the whole training process to bake in a permanent backdoor. It raises significant security concerns for the future of decentralized AI development.

From the abstract

Decentralised post-training of large language models utilises data and pipeline parallelism techniques to split the data and the model. Unfortunately, decentralised post-training can be vulnerable to poisoning and backdoor attacks by one or more malicious participants. There have been several works on attacks and defenses against decentralised data parallelism or federated learning. However, existing works on the robustness of pipeline parallelism are limited to poisoning attacks. To the best of