LiFR-Seg achieves high-frame-rate semantic segmentation using low-frame-rate cameras by propagating features through asynchronous event streams.
March 24, 2026
Original Paper
LiFR-Seg: Anytime High-Frame-Rate Segmentation via Event-Guided Propagation
arXiv · 2603.21115
The Takeaway
LiFR-Seg addresses the 'perceptual gap' problem in dynamic scenes, allowing standard cameras to perform like high-frequency sensors. This is highly relevant for real-time robotics and autonomous systems operating under constrained hardware budgets.
From the abstract
Dense semantic segmentation in dynamic environments is fundamentally limited by the low-frame-rate (LFR) nature of standard cameras, which creates critical perceptual gaps between frames. To solve this, we introduce Anytime Interframe Semantic Segmentation: a new task for predicting segmentation at any arbitrary time using only a single past RGB frame and a stream of asynchronous event data. This task presents a core challenge: how to robustly propagate dense semantic features using a motion field …
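The core operation the abstract points at — propagating dense features from the last keyframe along an event-derived motion field — amounts to warping a feature map by a per-pixel flow. Below is a minimal sketch of such a backward warp; the function name, nearest-neighbor sampling, and toy motion field are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def warp_features(feat, flow):
    """Backward-warp a dense feature map by a per-pixel motion field.

    feat: (H, W, C) feature map from the last RGB keyframe.
    flow: (H, W, 2) motion field (dx, dy), e.g. estimated from the
          event stream, mapping each target pixel to its source pixel.
    Nearest-neighbor sampling keeps the sketch dependency-free;
    a real system would use bilinear sampling on GPU.
    """
    H, W, _ = feat.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    return feat[src_y, src_x]

# Toy example: a 4x4 one-channel "feature map" and a uniform motion
# field that makes every target pixel sample one pixel to its right,
# i.e. the scene content shifts one pixel to the left.
feat = np.arange(16, dtype=float).reshape(4, 4, 1)
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0
warped = warp_features(feat, flow)
# warped[0] is [[1], [2], [3], [3]] — border pixels clamp to the edge.
```

Calling this repeatedly with motion fields accumulated from new events is one way to obtain segmentation features at arbitrary interframe timestamps without waiting for the next RGB frame.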