A stabilization mechanism for adapting LLMs to time-series tasks that reduces memory footprint by up to 1,776x.
April 1, 2026
Original Paper
One-for-All: A Lightweight Stabilized and Parameter-Efficient Pre-trained LLM for Time Series Forecasting
arXiv · 2603.29756
The Takeaway
The use of Gaussian Rank-Stabilized LoRA (rsLoRA) keeps gradients stable at low adapter ranks, enabling state-of-the-art forecasting performance on edge devices where large foundation models were previously unusable.
From the abstract
We address the challenge of adapting pre-trained Large Language Models (LLMs) for multivariate time-series analysis, where their deployment is often hindered by prohibitive computational and memory demands. Our solution, One-for-All, introduces Gaussian Rank-Stabilized Low-Rank Adapters (rsLoRA) to enable parameter-efficient fine-tuning of frozen LLMs. While inspired by LoRA, rsLoRA introduces a mathematically grounded rank-stabilization mechanism that enables provable gradient stability at low ranks.
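The excerpt does not spell out the adapter's exact formulation, but the general rank-stabilized LoRA idea is to scale the low-rank update by α/√r rather than LoRA's α/r, so the adapter's contribution (and its gradients) does not shrink or blow up as the rank changes. Below is a minimal PyTorch sketch of such an adapter wrapped around a frozen linear layer, assuming that standard rsLoRA scaling and a Gaussian initialization of the A matrix; the class and parameter names are illustrative, not taken from the paper.

```python
import math
import torch
import torch.nn as nn

class RankStabilizedLoRALinear(nn.Module):
    """Frozen linear layer plus a rank-stabilized low-rank adapter (sketch).

    The low-rank update B @ A is scaled by alpha / sqrt(r) instead of
    LoRA's alpha / r, which keeps the adapter's output magnitude and
    gradient scale roughly constant across ranks.
    """

    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # pre-trained weights stay frozen
            p.requires_grad_(False)

        in_f, out_f = base.in_features, base.out_features
        # Gaussian init for A, zeros for B: the adapter starts as a no-op.
        self.lora_A = nn.Parameter(torch.randn(r, in_f) * 0.02)
        self.lora_B = nn.Parameter(torch.zeros(out_f, r))
        self.scaling = alpha / math.sqrt(r)   # rank-stabilized scaling

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W x + (alpha / sqrt(r)) * B A x
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


if __name__ == "__main__":
    layer = RankStabilizedLoRALinear(nn.Linear(512, 512), r=4, alpha=8.0)
    out = layer(torch.randn(2, 16, 512))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(out.shape, f"trainable: {trainable} / {total} params")
```

Only the two small matrices A and B are trainable, which is where the large reduction in fine-tuning memory footprint comes from; the frozen base weights are shared across tasks.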