Challenges the gold standard of Upper Confidence Bound (UCB) exploration in diversity-aware bandit tasks.
March 24, 2026
Original Paper
When Exploration Comes for Free with Mixture-Greedy: Do we need UCB in Diversity-Aware Multi-Armed Bandits?
arXiv · 2603.21716
The Takeaway
The paper demonstrates that, for generative model selection, explicit exploration bonuses such as UCB are counterproductive. It proves that diversity-aware objectives induce implicit exploration, so simple greedy strategies outperform more complex ones while simplifying the selection process.
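To see why a diversity-aware objective can explore on its own, here is a minimal sketch (not the paper's exact algorithm or metric): we stand in a toy entropy-regularised mixture objective F(w) = Σ_k w_k·μ_k + λ·H(w), whose maximiser is a softmax of the estimated means. The greedy policy simply plays that softmax each round with no UCB bonus, yet every arm keeps positive probability, so exploration comes for free. The function name `mixture_greedy`, the Gaussian arms, and the regulariser weight `lam` are illustrative assumptions.

```python
# Toy illustration (assumed setup, not the paper's): greedy play against a
# plug-in estimate of an entropy-regularised, diversity-aware mixture objective.
import numpy as np

rng = np.random.default_rng(0)

def mixture_greedy(true_means, horizon=5000, lam=0.2):
    """Greedily maximise the estimated objective w . mu_hat + lam * H(w)."""
    k = len(true_means)
    counts = np.ones(k)                 # one initial pull per arm
    sums = rng.normal(true_means, 1.0)  # first observation from each arm
    history = []
    for _ in range(horizon):
        mu_hat = sums / counts
        # The maximiser of w . mu_hat + lam * H(w) is softmax(mu_hat / lam):
        logits = mu_hat / lam
        w = np.exp(logits - logits.max())
        w /= w.sum()
        arm = rng.choice(k, p=w)         # greedy w.r.t. the estimated objective
        reward = rng.normal(true_means[arm], 1.0)
        counts[arm] += 1
        sums[arm] += reward
        history.append(arm)
    return np.bincount(history, minlength=k) / horizon

true_means = np.array([0.9, 0.8, 0.3])
print("empirical mixture:", mixture_greedy(true_means))
# The entropy term keeps every w_k > 0, so suboptimal arms are still sampled
# occasionally -- the implicit exploration effect the paper's title refers to.
```

In a classical best-arm objective the same greedy rule would collapse onto one arm and could lock in a bad estimate; the concave diversity term is what removes the need for an explicit UCB bonus in this sketch.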
From the abstract
Efficient selection among multiple generative models is increasingly important in modern generative AI, where sampling from suboptimal models is costly. This problem can be formulated as a multi-armed bandit task. Under diversity-aware evaluation metrics, a non-degenerate mixture of generators can outperform any individual model, distinguishing this setting from classical best-arm identification. Prior approaches therefore incorporate an Upper Confidence Bound (UCB) exploration bonus into the mixture […]