LassoFlexNet matches or beats leading tree-based models on tabular data while maintaining Lasso-like interpretability through per-feature embeddings and a group Lasso mechanism.
March 24, 2026
Original Paper
LassoFlexNet: Flexible Neural Architecture for Tabular Data
arXiv · 2603.20631
The Takeaway
Bridging the gap between deep learning and gradient-boosted trees on tabular data is a long-standing challenge; this architecture addresses it by incorporating inductive biases such as feature heterogeneity and axis alignment. It offers practitioners a differentiable alternative to XGBoost that is both high-performing and inherently interpretable.
From the abstract
Despite their dominance in vision and language, deep neural networks often underperform relative to tree-based models on tabular data. To bridge this gap, we incorporate five key inductive biases into deep learning: robustness to irrelevant features, axis alignment, localized irregularities, feature heterogeneity, and training stability. We propose LassoFlexNet, an architecture that evaluates the linear and nonlinear marginal contribution of each input via Per-Feature Embeddings, and spar…
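To make the group Lasso mechanism concrete, here is a minimal NumPy sketch of the core idea, under assumptions of our own: each feature gets its own embedding weight group (the shapes, names, and penalty form below are illustrative, not taken from the paper). Penalizing the L2 norm of each group encourages entire groups to shrink to zero, switching off a feature's full contribution rather than individual weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 features, each mapped by its own small
# weight matrix (a "per-feature embedding") from the scalar input
# to a 4-dimensional embedding.
n_features, emb_dim = 3, 4
weights = [rng.normal(size=(1, emb_dim)) for _ in range(n_features)]

def group_lasso_penalty(weight_groups, lam=0.1):
    """Sum of per-group L2 norms, one group per feature.

    Unlike a plain L1 penalty on individual weights, driving a whole
    group's norm to zero removes that feature's entire (linear and
    nonlinear) contribution, which is what yields Lasso-like
    feature selection at the feature level."""
    return lam * sum(np.linalg.norm(w) for w in weight_groups)

penalty = group_lasso_penalty(weights)
print(penalty)  # positive for random weights

# A zeroed-out group contributes nothing to the penalty:
print(group_lasso_penalty([np.zeros((1, emb_dim))]))
```

In a training loop this penalty would be added to the task loss; the norm is non-differentiable at zero, which is precisely what lets optimization park whole groups exactly at zero.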