Self-organizing AI systems (neural cellular automata) are far more unstable and dynamic than the researchers who built them realized.
April 16, 2026
Original Paper
Stability and Geometry of Attractors in Neural Cellular Automata
arXiv · 2604.12720
The Takeaway
Neural Cellular Automata (NCAs) were thought to learn stable 'fixed-point' attractors, meaning they reach a target state and stay there. This paper shows that assumption doesn't hold: NCAs frequently exhibit chaotic, oscillatory, and periodic behaviors that only look stable from a distance. This challenge to the standard picture suggests these systems live at the edge of chaos. For researchers building self-repairing or self-organizing systems, it means you can't assume a trained model will 'settle' into a perfect state over long horizons. It adds a new layer of complexity to NCA design and underlines the need for better stability controls in decentralized AI. It's a reminder that 'emergent' systems are often weirder than they appear.
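The distinction between a true fixed point and merely apparent stability can be made concrete with a toy system (not the paper's NCAs, just an illustration): the logistic map. Its Lyapunov exponent, the average log-derivative along a trajectory, is negative when the system settles into a stable fixed point and positive when it is chaotic.

```python
import math

def logistic(x, r):
    """One step of the logistic map x -> r * x * (1 - x)."""
    return r * x * (1.0 - x)

def iterate(x0, r, steps):
    """Run the map for `steps` iterations from x0."""
    x = x0
    for _ in range(steps):
        x = logistic(x, r)
    return x

def lyapunov(x0, r, steps=2000, burn_in=200):
    """Estimate the Lyapunov exponent: the average of
    log|f'(x)| = log|r * (1 - 2x)| along the trajectory.
    Negative => stable fixed point/periodic orbit; positive => chaos."""
    x = x0
    for _ in range(burn_in):  # discard the transient
        x = logistic(x, r)
    total = 0.0
    for _ in range(steps):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = logistic(x, r)
    return total / steps

# r = 2.5: converges to the fixed point 1 - 1/r = 0.6 (negative exponent).
# r = 3.9: chaotic regime (positive exponent), despite both trajectories
# staying bounded and "looking" similar in any single snapshot.
```

The point of the toy: a bounded trajectory that visually hovers near a target is not evidence of a fixed-point attractor, which is essentially the paper's critique of how NCA stability is usually assessed.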
From the abstract
Throughout the literature on Neural Cellular Automata (NCAs), it is often taken for granted that the systems learn attractors. This is shown through evolving the system for many timesteps and noting visual similarity to the goal state. There remain many questions after such an analysis. Namely, what kind of attractors do we have? Is their behavior ordered or chaotic? Can we estimate stability over very long time horizons? What really happens in the attractor when perturbations are applied? In th
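The abstract's question about long-horizon stability has a simple illustration (again a toy linear system, not the paper's method): a slowly expanding spiral looks stationary over the short horizons typically used to evaluate NCAs, but diverges when evolved far longer.

```python
import math

def spiral_step(x, y, scale, angle=0.1):
    """One linear update: rotate by `angle` radians, then multiply by
    `scale`. The radius shrinks (scale < 1), holds (== 1), or grows (> 1)."""
    c, s = math.cos(angle), math.sin(angle)
    return scale * (c * x - s * y), scale * (s * x + c * y)

def radius_after(steps, scale, x0=1.0, y0=0.0):
    """Distance from the origin after `steps` updates."""
    x, y = x0, y0
    for _ in range(steps):
        x, y = spiral_step(x, y, scale)
    return math.hypot(x, y)

# With scale = 1.0005, 100 steps grow the radius by only ~5%, which would
# pass a visual "it converged" check; 10,000 steps grow it by ~150x.
# With scale = 0.999, the same long horizon collapses to the fixed point.
```

A per-step growth factor this close to 1 is exactly the regime where short-horizon visual inspection fails, which is why the paper argues for quantitative, long-horizon stability estimates rather than eyeballing the final frame.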