AI & ML Scaling Insight

Provides the first theoretical proof that dataset distillation efficiently encodes the low-dimensional structure of non-linear tasks.

March 17, 2026

Original Paper

Dataset Distillation Efficiently Encodes Low-Dimensional Representations from Gradient-Based Learning of Non-Linear Tasks

Yuri Kinoshita, Naoki Nishikawa, Taro Toyoizumi

arXiv · 2603.14830

The Takeaway

This moves dataset distillation from a purely empirical hack to a theoretically grounded technique, quantifying how intrinsic task dimensionality dictates the achievable compression rate for synthetic training data.

From the abstract

Dataset distillation, a training-aware data compression technique, has recently attracted increasing attention as an effective tool for mitigating costs of optimization and data storage. However, progress remains largely empirical. Mechanisms underlying the extraction of task-relevant information from the training process and the efficient encoding of such information into synthetic data points remain elusive. In this paper, we theoretically analyze practical algorithms of dataset distillation …
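To build intuition for why intrinsic dimensionality bounds the achievable compression, here is a toy linear-regression analogue (a minimal sketch under simplified assumptions, not the paper's setting or algorithm): when a task's gradients only depend on low-rank sufficient statistics, a handful of synthetic points can reproduce the gradients of a much larger real dataset exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (not the paper's construction): a linear
# regression task whose 20-dim inputs actually live on a 2-dim subspace.
d, n_real, r = 20, 500, 2
U, _ = np.linalg.qr(rng.standard_normal((d, r)))  # orthonormal subspace basis
X = rng.standard_normal((n_real, r)) @ U.T        # rank-2 real inputs
w_true = U @ rng.standard_normal(r)
y = X @ w_true

# For a linear model with squared loss, every gradient has the form
#   g(w) = A w - b,  with  A = X^T X / n  and  b = X^T y / n,
# so the dataset's entire effect on gradient-based training is (A, b).
A = X.T @ X / n_real
b = X.T @ y / n_real

# Distill: encode (A, b) into just r synthetic points built from the
# top-r eigenpairs of A (its rank). np.linalg.eigh sorts ascending.
lam, V = np.linalg.eigh(A)
lam, V = lam[-r:], V[:, -r:]
Xs = np.sqrt(r * lam)[:, None] * V.T              # r x d synthetic inputs
ys = np.sqrt(r / lam) * (V.T @ b)                 # r synthetic targets

def grad(Xm, ym, w):
    """Gradient of 0.5 * mean squared error for a linear model."""
    return Xm.T @ (Xm @ w - ym) / len(ym)

# The 2 synthetic points reproduce the 500-point gradient at ANY w,
# so gradient descent on them follows the same trajectory.
w = rng.standard_normal(d)
print(np.allclose(grad(Xs, ys, w), grad(X, y, w)))  # True
```

The compression rate here is dictated by the rank r of the task, not the dataset size n, mirroring (in a much simpler, linear setting) the kind of dependence on intrinsic dimensionality the paper quantifies for non-linear tasks.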