Demonstrates that direct supervised alignment outperforms self-supervised pretraining for clinical outcome prediction.
March 24, 2026
Original Paper
Discriminative Representation Learning for Clinical Prediction
arXiv · 2603.20921
The Takeaway
Challenges the dominant 'pretrain-then-finetune' paradigm by showing that directly maximizing class separation in the representation space is more effective than generative pretraining for longitudinal EHR data. This simplifies the training pipeline and improves sample efficiency for clinical prediction tasks.
From the abstract
Foundation models in healthcare have largely adopted self-supervised pretraining objectives inherited from natural language processing and computer vision, emphasizing reconstruction and large-scale representation learning prior to downstream adaptation. We revisit this paradigm in outcome-centric clinical prediction settings and argue that, when high-quality supervision is available, direct outcome alignment may provide a stronger inductive bias than generative pretraining. We propose a supervi…
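The abstract is cut off before the proposed objective is described, so as a point of reference, here is a minimal sketch of one standard way to "maximize class separation in the representation space": a supervised contrastive (SupCon-style) loss over patient embeddings. The function name supervised_alignment_loss, the temperature value, and the toy batch are illustrative assumptions, not the paper's actual method.

    import torch
    import torch.nn.functional as F

    def supervised_alignment_loss(embeddings, labels, temperature=0.1):
        """SupCon-style loss: pull together representations of patients
        with the same outcome label, push apart those with different labels."""
        z = F.normalize(embeddings, dim=1)          # unit-norm patient embeddings
        sim = z @ z.T / temperature                 # pairwise cosine similarities
        n = z.size(0)
        eye = torch.eye(n, dtype=torch.bool, device=z.device)
        sim = sim.masked_fill(eye, float("-inf"))   # exclude self-similarity
        # Positives: other patients in the batch sharing the outcome label.
        pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        # Average log-probability over positives, for anchors that have any.
        counts = pos.sum(dim=1)
        has_pos = counts > 0
        pos_sum = log_prob.masked_fill(~pos, 0.0).sum(dim=1)
        return -(pos_sum[has_pos] / counts[has_pos]).mean()

    # Toy usage: 32 patients, 128-dim encoder outputs, binary outcomes.
    z = torch.randn(32, 128)
    y = torch.randint(0, 2, (32,))
    loss = supervised_alignment_loss(z, y)

A temperature around 0.1 is a common default in the contrastive-learning literature; smaller values sharpen the penalty on hard negatives within a batch.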