AI & ML Scaling Insight

Challenges the monotonic 'bigger is better' scaling paradigm by proving that institutional fitness peaks at an environment-dependent scale.

March 17, 2026

Original Paper

The Institutional Scaling Law: Non-Monotonic Fitness, Capability-Trust Divergence, and Symbiogenetic Scaling in Generative AI

Mark Baciak, Thomas A. Cellucci

arXiv · 2603.14126

The Takeaway

The paper provides a mathematical proof of 'Capability-Trust Divergence': scaling a model beyond an environment-dependent optimum N*(epsilon) can decrease its institutional utility due to trust, sovereignty, and cost factors. This suggests a phase transition away from frontier generalists toward orchestrated systems of smaller, domain-specialized models.
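
The paper's actual functional forms are not reproduced in this excerpt. Purely as an illustration of the shape of claim being made, a multiplicative fitness like the following would exhibit the described behavior; C, T, A, S and the exponents here are hypothetical stand-ins, not the authors' definitions:

```latex
% Illustrative sketch only: C, T, A, S (capability, trust, affordability,
% sovereignty) and the exponents are hypothetical stand-ins, not the
% paper's actual functional forms.
\[
  F_\varepsilon(N) = C(N)^{\alpha}\, T_\varepsilon(N)^{\beta}\,
                     A_\varepsilon(N)^{\gamma}\, S_\varepsilon(N)^{\delta},
  \qquad
  N^{*}(\varepsilon) = \arg\max_{N} F_\varepsilon(N).
\]
% If C grows with model scale N while T, A, or S decays fast enough in N
% for a given environment epsilon, F is non-monotonic and peaks at a
% finite N*.
```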

From the abstract

Classical scaling laws model AI performance as monotonically improving with model size. We challenge this assumption by deriving the Institutional Scaling Law, showing that institutional fitness -- jointly measuring capability, trust, affordability, and sovereignty -- is non-monotonic in model scale, with an environment-dependent optimum N*(epsilon). Our framework extends the Sustainability Index of Han et al. (2025) from hardware-level to ecosystem-level analysis, proving that capability and trust […]
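
As a purely numerical illustration of how such a fitness can peak at a finite scale, the toy script below follows the illustrative multiplicative form sketched above, assuming a power-law capability gain and exponential trust, cost, and sovereignty penalties. Every functional form and constant is invented for the sketch; none is taken from the paper.

```python
import numpy as np

# Toy institutional fitness F(N): all functional forms and constants below
# are invented for illustration and are not the paper's definitions.

def capability(n):
    # Capability rises with scale but with diminishing returns.
    return n ** 0.35

def trust(n, eps=1.0):
    # Trust/auditability assumed to erode with scale, faster in stricter
    # environments (larger eps).
    return np.exp(-eps * n / 50.0)

def affordability(n):
    # Cost penalty: larger models assumed proportionally more expensive.
    return 1.0 / (1.0 + n / 20.0)

def sovereignty(n):
    # Sovereignty assumed to drop once a model exceeds what an institution
    # can plausibly host itself.
    return np.exp(-n / 80.0)

def fitness(n, eps=1.0):
    # Multiplicative aggregate, as in the illustrative formula above.
    return capability(n) * trust(n, eps) * affordability(n) * sovereignty(n)

if __name__ == "__main__":
    scales = np.linspace(1, 200, 2000)  # model scale N (arbitrary units)
    for eps in (0.5, 1.0, 2.0):         # environment strictness
        f = fitness(scales, eps)
        n_star = scales[np.argmax(f)]
        print(f"eps={eps:.1f}: fitness peaks at N* ~ {n_star:.1f}")
```

Running this prints a finite peak N* that shrinks as eps grows, mirroring the environment-dependent optimum N*(epsilon) the abstract describes, though with made-up numbers.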