AI & ML · Breaks Assumption

Discovers that pretraining Implicit Neural Representations (INRs) on structured $1/f^\alpha$ noise works as well as initializing them from real data.

April 1, 2026

Original Paper

The Surprising Effectiveness of Noise Pretraining for Implicit Neural Representations

Kushal Vyas, Alper Kayabasi, Daniel Kim, Vishwanath Saragadam, Ashok Veeraraghavan, Guha Balakrishnan

arXiv · 2603.29034

The Takeaway

The paper shows that the benefits of data-driven initialization for INRs can be replicated by pretraining on simple statistical noise structures found in nature. This lets practitioners achieve high-performance INR training in domains where pre-existing datasets are unavailable or expensive to collect.
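To make "structured $1/f^\alpha$ noise" concrete, here is a minimal sketch of one standard way to sample it: spectrally shaping white Gaussian noise so its power spectrum falls off as $1/f^\alpha$ ($\alpha = 0$ is white noise, $\alpha \approx 1$ is pink noise). The function name `pink_noise_image` and its defaults are illustrative choices, not taken from the paper.

```python
import numpy as np

def pink_noise_image(size=256, alpha=1.0, seed=0):
    """Sample a 2D 1/f^alpha noise image by spectral shaping:
    white Gaussian noise is filtered so that its power spectrum
    decays as f^(-alpha)."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((size, size))
    spectrum = np.fft.fft2(white)

    # Radial frequency magnitude on the FFT grid; avoid division
    # by zero at the DC component.
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0

    # Scale amplitudes by f^(-alpha/2) so that power ~ f^(-alpha).
    shaped = spectrum * f ** (-alpha / 2.0)
    img = np.fft.ifft2(shaped).real

    # Normalize to zero mean, unit variance.
    return (img - img.mean()) / img.std()
```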

From the abstract

The approximation and convergence properties of implicit neural representations (INRs) are known to be highly sensitive to parameter initialization strategies. While several data-driven initialization methods demonstrate significant improvements over standard random sampling, the reasons for their success -- specifically, whether they encode classical statistical signal priors or more complex features -- remain poorly understood. In this study, we explore this phenomenon through a series of experiments…
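To make the initialization idea concrete, the sketch below shows one plausible noise-pretraining setup: a small SIREN-style INR is repeatedly fit to fresh $1/f^\alpha$ noise images (using the `pink_noise_image` sampler above), and the resulting weights are reused as a data-free initialization for downstream signals. The `Siren`, `coordinate_grid`, and `noise_pretrain` names, the architecture, and the training loop are assumptions made for illustration; the paper's actual pretraining procedure may differ.

```python
import torch
import torch.nn as nn

class Siren(nn.Module):
    """Minimal SIREN-style INR mapping (x, y) coordinates to intensity."""
    def __init__(self, hidden=256, depth=3, w0=30.0):
        super().__init__()
        dims = [2] + [hidden] * depth + [1]
        self.w0 = w0
        self.layers = nn.ModuleList(
            [nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:])]
        )

    def forward(self, coords):
        x = coords
        for layer in self.layers[:-1]:
            x = torch.sin(self.w0 * layer(x))  # sinusoidal activations
        return self.layers[-1](x)

def coordinate_grid(size):
    """Return a (size*size, 2) grid of coordinates in [-1, 1]^2."""
    lin = torch.linspace(-1.0, 1.0, size)
    yy, xx = torch.meshgrid(lin, lin, indexing="ij")
    return torch.stack([xx, yy], dim=-1).reshape(-1, 2)

def noise_pretrain(model, size=64, steps=1000, alpha=1.0, lr=1e-4):
    """Fit the INR to a fresh 1/f^alpha noise image at each step;
    the final weights serve as a data-free initialization."""
    coords = coordinate_grid(size)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for step in range(steps):
        target = torch.from_numpy(
            pink_noise_image(size, alpha=alpha, seed=step)
        ).float().reshape(-1, 1)
        loss = ((model(coords) - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model.state_dict()  # reuse as init for downstream fits
```

Under these assumptions, the returned `state_dict` would be loaded into a fresh `Siren` before fitting a real signal, standing in for the data-driven initializations the paper compares against.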