Reduces long-context inference latency by 26.4x using a training-free, structure-aware prompt compression framework.
March 23, 2026
Original Paper
BEAVER: A Training-Free Hierarchical Prompt Compression Method via Structure-Aware Page Selection
arXiv · 2603.19635
The Takeaway
Unlike token-pruning methods that cause semantic fragmentation, BEAVER uses hierarchical page-level selection and sentence smoothing to preserve discourse integrity. It maintains high fidelity in multi-needle retrieval tasks where other compression methods fail, making it a highly scalable solution for RAG and long-document processing.
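The page-level selection and smoothing idea can be sketched roughly as follows. This is a minimal illustration, not the paper's actual algorithm: the word-overlap scorer is a stand-in for whatever relevance signal BEAVER uses, and smoothing is approximated here at the page level rather than the sentence level. All function and parameter names are invented for the example.

```python
import re


def tokens(text):
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))


def split_into_pages(sentences, page_size=4):
    """Group consecutive sentences into fixed-size 'pages' (structure-aware units)."""
    return [sentences[i:i + page_size] for i in range(0, len(sentences), page_size)]


def overlap_score(query, page):
    """Toy relevance score: fraction of query words appearing in the page."""
    q = tokens(query)
    return len(q & tokens(" ".join(page))) / (len(q) or 1)


def compress(sentences, query, page_size=4, keep_pages=2):
    """Select the top-scoring pages, then 'smooth' by also keeping their
    neighbors so the kept text stays coherent instead of fragmenting."""
    pages = split_into_pages(sentences, page_size)
    ranked = sorted(range(len(pages)),
                    key=lambda i: overlap_score(query, pages[i]),
                    reverse=True)
    selected = set(ranked[:keep_pages])
    smoothed = set()
    for i in selected:
        smoothed.update({i - 1, i, i + 1} & set(range(len(pages))))
    # Re-emit kept sentences in their original order to preserve discourse flow.
    return [s for i in sorted(smoothed) for s in pages[i]]
```

The key contrast with token pruning is that selection happens over whole pages and the smoothing step restores surrounding context, so no sentence is ever cut mid-thought.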
From the abstract
The exponential expansion of context windows in LLMs has unlocked capabilities for long-document understanding but introduced severe bottlenecks in inference latency and information utilization. Existing compression methods often suffer from high training costs or semantic fragmentation due to aggressive token pruning. In this paper, we propose BEAVER, a novel training-free framework that shifts compression from linear token removal to structure-aware hierarchical selection. […]