FederatedFactory solves the 'extreme non-IID' problem in Federated Learning by federating generative priors instead of model weights.
March 18, 2026
Original Paper
FederatedFactory: Generative One-Shot Learning for Extremely Non-IID Distributed Scenarios
arXiv · 2603.16370
The Takeaway
Standard Federated Learning collapses when clients' label distributions are mutually exclusive; this method lifts accuracy from 11% to 90% in such pathological scenarios. It reframes privacy-preserving collaborative learning by replacing gradient aggregation with one-shot generative synthesis.
From the abstract
Federated Learning (FL) enables distributed optimization without compromising data sovereignty. Yet, when local label distributions are mutually exclusive, standard weight aggregation fails due to conflicting optimization trajectories. FL methods often rely on pretrained foundation models, introducing unrealistic assumptions. We introduce FederatedFactory, a zero-dependency framework that inverts the unit of federation from discriminative parameters to generative priors. By exchanging generative […]
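To make the inversion concrete, here is a minimal sketch of the idea, not the paper's implementation: class-conditional Gaussians stand in for the paper's generative priors, and a scikit-learn LogisticRegression stands in for the global model. All names (make_client_data, fit_prior, synthesize) are illustrative assumptions, not FederatedFactory's API.

```python
# Sketch: federate generative priors instead of model weights.
# Assumption: each client's local data can be summarized by a simple
# per-class generative model (here, a Gaussian); the real paper's
# priors are richer, but the protocol shape is the same.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_client_data(label, n=200, dim=5):
    """Pathological non-IID setting: each client holds exactly one label."""
    center = rng.normal(scale=3.0, size=dim)
    return center + rng.normal(size=(n, dim)), np.full(n, label)

def fit_prior(X, y):
    """Client side: reduce local data to a generative summary (mean, cov)."""
    return {"label": int(y[0]), "mean": X.mean(axis=0),
            "cov": np.cov(X, rowvar=False)}

def synthesize(priors, n_per_class=200):
    """Server side: one-shot synthesis by sampling every client's prior."""
    Xs, ys = [], []
    for p in priors:
        Xs.append(rng.multivariate_normal(p["mean"], p["cov"],
                                          size=n_per_class))
        ys.append(np.full(n_per_class, p["label"]))
    return np.vstack(Xs), np.concatenate(ys)

# Three clients with mutually exclusive labels. FedAvg-style weight
# averaging conflicts here, but the priors compose without conflict.
clients = [make_client_data(label) for label in range(3)]
priors = [fit_prior(X, y) for X, y in clients]
X_syn, y_syn = synthesize(priors)
global_model = LogisticRegression(max_iter=1000).fit(X_syn, y_syn)

X_test = np.vstack([X for X, _ in clients])
y_test = np.concatenate([y for _, y in clients])
print("global accuracy:", global_model.score(X_test, y_test))
```

Note the design consequence: because each prior describes only its own classes, priors from clients with disjoint labels never pull the global model in conflicting directions, which is exactly the failure mode of averaging discriminative weights.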