A new class of mathematical functions decomposes deep neural networks with polynomial-size structure where classical difference-of-convex representations require exponential size.
April 23, 2026
Original Paper
The Multi-Block DC Function Class: Theory, Algorithms, and Applications
arXiv · 2604.17560
The Takeaway
Mathematical optimization of deep learning has long been constrained by the sheer scale and nonconvexity of the functions involved. The Multi-Block DC class changes how these models are decomposed: rather than forcing a single difference-of-convex split over all parameters at once, it applies DC structure block by block. That keeps the decomposition polynomial in size where a classical DC decomposition would blow up exponentially, bringing networks within reach of structured nonconvex solvers that previously could not handle them. The potential payoff is more efficient training and fine-tuning: principled, structured optimization in place of brute-force search.
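For readers new to DC programming, here is a minimal worked example of what a difference-of-convex split looks like. This is a standard textbook construction, not taken from the paper: any indefinite quadratic, which is nonconvex on its own, can be written as a difference of two convex functions by shifting its curvature.

```latex
% Textbook DC decomposition (illustrative; not from the paper).
% Assumes amsmath. An indefinite quadratic f(x) = x^T A x is nonconvex,
% but for any shift t >= max(0, -lambda_min(A)) it splits as
\[
  f(x) \;=\; x^{\top} A x
  \;=\; \underbrace{x^{\top}(A + tI)\,x}_{g(x)\ \text{convex}}
  \;-\; \underbrace{t\,\lVert x\rVert^{2}}_{h(x)\ \text{convex}},
  \qquad t \;\ge\; \max\bigl(0,\,-\lambda_{\min}(A)\bigr).
\]
% g is convex because A + tI is positive semidefinite once
% t >= -lambda_min(A); h is convex for any t >= 0. So f = g - h.
```

DC algorithms exploit exactly this form, iteratively linearizing the subtracted convex part; the paper's contribution is to make such decompositions tractable at neural-network scale by organizing them over parameter blocks.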
From the abstract
We present the Multi-Block DC (BDC) class, a rich class of structured nonconvex functions that admit a DC ("difference-of-convex") decomposition across parameter blocks. This multi-block class not only subsumes the usual DC programming, but also turns out to be provably more powerful. Specifically, we demonstrate how standard models (e.g., polynomials and tensor factorization) must have DC decompositions of exponential size, while their BDC formulation is polynomial. This separation in complexity…
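To make the blockwise idea concrete, here is an illustrative sketch. This is my own example, chosen to be consistent with the abstract's separation claim rather than reproduced from the paper: a product of variables is awkward to decompose jointly, yet trivially structured one block at a time.

```latex
% Illustrative blockwise structure (my example, not from the paper).
% Assumes amsmath. The bilinear term x^T y is jointly nonconvex;
% one joint DC split is
\[
  x^{\top} y \;=\; \tfrac{1}{4}\lVert x + y\rVert^{2}
             \;-\; \tfrac{1}{4}\lVert x - y\rVert^{2},
\]
% but treated blockwise (x and y as separate parameter blocks), the
% map x -> x^T y with y fixed is linear, hence trivially DC. The same
% pattern scales to higher-degree products:
\[
  f(x_1,\dots,x_d) \;=\; \prod_{i=1}^{d} x_i
  \quad\text{is linear (hence trivially DC) in each block } x_i,
\]
% whereas, per the abstract, a *joint* DC decomposition of such
% degree-d products can require exponentially many terms in d.
```

This is the gap the abstract points to: the blockwise (BDC) view keeps each piece simple and polynomial in size, while the classical single-block DC view pays an exponential price for the same function.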