AI & ML New Capability

Introduces StatePlane, a model-agnostic memory architecture that enables long-horizon AI reasoning without expanding the context window or KV cache.

March 17, 2026

Original Paper

StatePlane: A Cognitive State Plane for Long-Horizon AI Systems Under Bounded Context

Sasank Annapureddy, John Mulcahy, Anjaneya Prasad Thamatani

arXiv · 2603.13644

The Takeaway

StatePlane treats memory as an evolving 'state plane' governed by cognitive principles (segmentation, selective encoding, and adaptive forgetting) rather than as static storage. This lets LLMs and SLMs maintain coherence across multi-session, long-running tasks that would otherwise exceed hardware context limits.
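The three principles can be illustrated with a toy sketch. Everything below — class names, the salience threshold, the per-segment capacity — is hypothetical and not from the paper; it only shows how segmentation, selective encoding, and adaptive forgetting might combine to keep retained state bounded:

```python
from dataclasses import dataclass, field
import time


@dataclass
class MemoryEntry:
    content: str
    salience: float  # how decision-relevant this entry is (0..1)
    timestamp: float = field(default_factory=time.time)


class StatePlaneSketch:
    """Toy illustration of the takeaway's three principles.

    All names and thresholds are assumptions for exposition,
    not the paper's actual API.
    """

    def __init__(self, capacity: int = 8, salience_floor: float = 0.3):
        # Segmentation: memory is partitioned by task/topic segment.
        self.segments: dict[str, list[MemoryEntry]] = {}
        self.capacity = capacity
        self.salience_floor = salience_floor

    def encode(self, topic: str, content: str, salience: float) -> None:
        # Selective encoding: low-salience observations are never stored.
        if salience < self.salience_floor:
            return
        self.segments.setdefault(topic, []).append(MemoryEntry(content, salience))
        self._forget(topic)

    def _forget(self, topic: str) -> None:
        # Adaptive forgetting: when a segment overflows, drop the
        # least salient entries rather than simply the oldest.
        seg = self.segments[topic]
        if len(seg) > self.capacity:
            seg.sort(key=lambda e: e.salience, reverse=True)
            del seg[self.capacity:]

    def recall(self, topic: str) -> list[str]:
        # Only retained, decision-relevant state re-enters the prompt,
        # so the context window stays bounded regardless of horizon length.
        return [e.content for e in self.segments.get(topic, [])]
```

For example, with `capacity=2` and `salience_floor=0.3`, a low-salience observation is filtered at encode time, and once a segment overflows, only the most decision-relevant entries survive into the next session's context.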

From the abstract

Large language models (LLMs) and small language models (SLMs) operate under strict context window and key-value (KV) cache constraints, fundamentally limiting their ability to reason coherently over long interaction horizons. Existing approaches -- extended context windows, retrieval-augmented generation, summarization, or static documentation -- treat memory as static storage and fail to preserve decision-relevant state under long-running, multi-session tasks. We introduce StatePlane, a model-agnostic …