Proves that compositional generalization failure in neural networks is an architectural issue and provides a category-theoretic framework to fix it.
March 18, 2026
Original Paper
Functorial Neural Architectures from Higher Inductive Types
arXiv · 2603.16123
The Takeaway
The paper moves beyond 'more data': it uses Higher Inductive Types (HITs) to build architectures whose decoders are 'compositional by construction,' giving formal guarantees that the model respects structural relations.
From the abstract
Neural networks systematically fail at compositional generalization -- producing correct outputs for novel combinations of known parts. We show that this failure is architectural: compositional generalization is equivalent to functoriality of the decoder, and this perspective yields both guarantees and impossibility results. We compile Higher Inductive Type (HIT) specifications into neural architectures via a monoidal functor from the path groupoid of a target space to a category of parametric m[…]
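To make the central equivalence concrete, here is a minimal sketch (not from the paper; the lexicon and composition operators are illustrative assumptions) of what "functoriality of the decoder" means: decoding a composite input must equal composing the decoded parts, i.e. D(x ⊕ y) == D(x) ⊗ D(y).

```python
# Illustrative sketch only: a toy decoder that is functorial by construction.
# Latent "parts" are token lists; composition on both sides is concatenation.

def compose_latent(x, y):
    # ⊕ in the latent/syntax space: concatenation of token lists
    return x + y

# Hypothetical lexicon standing in for a learned decoder's per-part behavior.
LEXICON = {"jump": "JUMP", "walk": "WALK", "twice": "TWICE"}

def decode(tokens):
    # D: maps each part independently, so structure is preserved automatically
    return [LEXICON[t] for t in tokens]

def compose_output(a, b):
    # ⊗ in the output space: concatenation of action sequences
    return a + b

# Functoriality check: D(x ⊕ y) == D(x) ⊗ D(y), even for a combination
# ("jump twice") the decoder was never shown as a whole.
lhs = decode(compose_latent(["jump"], ["twice"]))
rhs = compose_output(decode(["jump"]), decode(["twice"]))
assert lhs == rhs == ["JUMP", "TWICE"]
```

A decoder trained end-to-end has no such constraint, which is the sense in which the failure is architectural rather than a matter of data.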