Solves the long-standing trade-off in low-rank matrix recovery by achieving both optimal sample complexity and fast convergence.
April 2, 2026
Original Paper
Scaled Gradient Descent for Ill-Conditioned Low-Rank Matrix Recovery with Optimal Sampling Complexity
arXiv · 2604.00060
The Takeaway
Previously, practitioners had to choose between algorithms that required more data (sub-optimal sample complexity) and algorithms that converged slowly on ill-conditioned problems. This paper proves that Scaled Gradient Descent (ScaledGD) achieves the theoretical floor on data requirements while retaining logarithmic iteration complexity, making large-scale matrix completion and matrix sensing significantly more efficient.
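To make the mechanism concrete, here is a minimal NumPy sketch of the ScaledGD update in the factored formulation $M \approx XY^\top$. It assumes full observation of $M$ (rather than the paper's $m$ linear measurements) and a simple perturbed spectral initialization; the key ingredient is the preconditioners $(Y^\top Y)^{-1}$ and $(X^\top X)^{-1}$, which rescale the gradient so progress is uniform across singular directions even when the singular values span several orders of magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 3

# Ill-conditioned ground truth: singular values span two orders of magnitude.
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
s = np.array([100.0, 10.0, 1.0])
M = U @ np.diag(s) @ V.T

# Perturbed spectral-style initialization of the factors (illustrative only).
X = U @ np.diag(np.sqrt(s)) + 0.01 * rng.standard_normal((n, r))
Y = V @ np.diag(np.sqrt(s)) + 0.01 * rng.standard_normal((n, r))

eta = 0.5  # ScaledGD admits a constant step size independent of conditioning
for _ in range(100):
    R = X @ Y.T - M   # residual of the loss 0.5 * ||X Y^T - M||_F^2
    gX = R @ Y        # gradient with respect to X
    gY = R.T @ X      # gradient with respect to Y
    # Preconditioned updates: both use the *old* iterates.
    X_new = X - eta * gX @ np.linalg.inv(Y.T @ Y)
    Y_new = Y - eta * gY @ np.linalg.inv(X.T @ X)
    X, Y = X_new, Y_new

rel_err = np.linalg.norm(X @ Y.T - M) / np.linalg.norm(M)
print(rel_err)  # small relative error despite the condition number of 100
```

With vanilla gradient descent the same step size would have to shrink with the condition number, and convergence in the weakest singular direction would slow down proportionally; the preconditioners remove that dependence, which is the behavior the paper's iteration-complexity bound formalizes.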
From the abstract
The low-rank matrix recovery problem seeks to reconstruct an unknown $n_1 \times n_2$ rank-$r$ matrix from $m$ linear measurements, where $m \ll n_1 n_2$. This problem has been extensively studied over the past few decades, leading to a variety of algorithms with solid theoretical guarantees. Among these, gradient descent based non-convex methods have become particularly popular due to their computational efficiency. However, these methods typically suffer from two key limitations: a sub-optimal sample complexity and slow convergence on ill-conditioned problems. …