AI & ML New Capability

Enables Active Learning for tabular data without model retraining by iteratively optimizing the 'labeled context' of foundation models.

March 31, 2026

Original Paper

Active In-Context Learning for Tabular Foundation Models

Wilailuck Treerath, Fabrizio Pittorino

arXiv · 2603.27385

The Takeaway

It bypasses the 'cold-start' problem of traditional active learning (where uncertainty estimates are poor with few labels) by using the calibrated probabilistic predictions of in-context learning to select informative samples.

From the abstract

Active learning (AL) reduces labeling cost by querying informative samples, but in tabular settings its cold-start gains are often limited because uncertainty estimates are unreliable when models are trained on very few labels. Tabular foundation models such as TabPFN provide calibrated probabilistic predictions via in-context learning (ICL), i.e., without task-specific weight updates, enabling an AL regime in which the labeled context, rather than parameters, is iteratively optimized.
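The loop the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's method: a soft k-NN classifier stands in for TabPFN's in-context predictions (probabilities depend only on the labeled context, with no weight updates), and predictive entropy stands in for the acquisition rule. The names `icl_predict_proba` and `active_icl` are hypothetical.

```python
import numpy as np

def icl_predict_proba(ctx_X, ctx_y, X, k=5, n_classes=2):
    # Stand-in for a tabular foundation model's in-context prediction:
    # probabilities come from the labeled context alone (here, a smoothed
    # k-NN vote) -- between rounds only the context changes, never weights.
    dists = np.linalg.norm(X[:, None, :] - ctx_X[None, :, :], axis=-1)
    k = min(k, len(ctx_X))
    nearest = np.argsort(dists, axis=1)[:, :k]
    proba = np.zeros((len(X), n_classes))
    for i, nbrs in enumerate(nearest):
        counts = np.bincount(ctx_y[nbrs], minlength=n_classes)
        proba[i] = (counts + 1) / (counts.sum() + n_classes)  # Laplace smoothing
    return proba

def active_icl(X, y, n_init=4, n_queries=10, seed=0):
    # Iteratively grow the labeled context: at each round, query the pool
    # point whose predictive distribution has the highest entropy.
    rng = np.random.default_rng(seed)
    labeled = [int(i) for i in rng.choice(len(X), size=n_init, replace=False)]
    pool = [i for i in range(len(X)) if i not in labeled]
    for _ in range(n_queries):
        proba = icl_predict_proba(X[labeled], y[labeled], X[pool])
        entropy = -(proba * np.log(proba)).sum(axis=1)
        queried = pool.pop(int(np.argmax(entropy)))
        labeled.append(queried)  # oracle labels the queried point via y
    return labeled
```

The key property being mimicked: because prediction is purely in-context, each acquisition round costs one forward pass with an enlarged context, not a retraining run, which is what avoids the cold-start failure mode of fitting a model on a handful of labels.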