Massive activation outliers in Transformers are an adaptive response to 'gradient sinks' during training, rather than just an inference-time quirk.
March 19, 2026
Original Paper
Attention Sinks Induce Gradient Sinks
arXiv · 2603.17771
The Takeaway
This research links attention sinks and massive activations through backpropagation dynamics rather than the forward pass alone. The paper introduces 'V-scale', an adjustment to value-path gradients that lets practitioners suppress the outliers behind quantization issues without losing the benefits of attention sinks, simplifying model deployment.
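The excerpt does not spell out how V-scale is applied, so the following is a minimal sketch of one plausible reading: the forward pass is left untouched, and only the gradient flowing back through the value path is rescaled by a factor alpha. All names here (`VScale`, `attention_with_vscale`, `alpha`) are illustrative assumptions, not the paper's code.

```python
import torch

class VScale(torch.autograd.Function):
    """Identity in the forward pass; multiplies the incoming gradient
    by `alpha` in the backward pass (a gradient-only intervention)."""

    @staticmethod
    def forward(ctx, v, alpha):
        ctx.alpha = alpha
        return v.view_as(v)  # view avoids returning the input tensor itself

    @staticmethod
    def backward(ctx, grad_out):
        # Scale the value-path gradient; alpha itself gets no gradient.
        return ctx.alpha * grad_out, None


def attention_with_vscale(q, k, v, alpha=0.1):
    """Causal self-attention whose value path has a down-scaled gradient."""
    v = VScale.apply(v, alpha)  # forward activations are unchanged
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d ** 0.5
    t = scores.shape[-1]
    causal = torch.triu(torch.ones(t, t, dtype=torch.bool, device=q.device), 1)
    attn = torch.softmax(scores.masked_fill(causal, float("-inf")), dim=-1)
    return attn @ v
```

Because the forward pass is identical to standard attention, such a change would alter only how gradients accumulate during training, which is exactly where the paper locates the outlier mechanism.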
From the abstract
Attention sinks and massive activations are recurring and closely related phenomena in Transformer models. Existing studies have largely focused on the forward pass, making it unclear whether their connection is direct or mediated by a training-time mechanism. We study this question from the perspective of backpropagation. Empirically and theoretically, we show that under a causal mask, attention sinks can induce pronounced gradient concentration, which we term gradient sinks. Furthermore, in pre-…
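To make "gradient concentration" concrete, here is a toy probe (not the paper's experiment): we force token 0 to act as an attention sink by biasing its attention scores under a causal mask, run a backward pass, and inspect per-token gradient norms at the input. The bias magnitude and tensor shapes are arbitrary choices for illustration.

```python
import torch

torch.manual_seed(0)
T, D = 16, 32
x = torch.randn(T, D, requires_grad=True)

# Random projections standing in for trained Q/K/V weights.
Wq, Wk, Wv = (torch.randn(D, D) / D ** 0.5 for _ in range(3))
q, k, v = x @ Wq, x @ Wk, x @ Wv

scores = q @ k.T / D ** 0.5
scores[:, 0] += 10.0  # every query dumps attention mass on token 0 (a sink)

causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
attn = torch.softmax(scores.masked_fill(causal, float("-inf")), dim=-1)
out = attn @ v

out.sum().backward()
# Token 0's gradient norm dwarfs the rest: the sink token soaks up
# gradient from every position that attends to it, i.e. a "gradient sink".
print(x.grad.norm(dim=-1))
```

The asymmetry is structural: under a causal mask every later position can attend to the sink token, so its value vector participates in every output, and the backward pass sums all of those contributions into one position's gradient.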