AI & ML · Breaks Assumption

In-context memory for LLMs is fundamentally unreliable due to compaction loss and goal drift, but structured 'Knowledge Objects' provide a 252x cheaper and 100% accurate alternative.

March 19, 2026

Original Paper

Facts as First Class Objects: Knowledge Objects for Persistent LLM Memory

Oliver Zahn, Simran Chana

arXiv · 2603.17781

The Takeaway

The paper shows that even frontier models lose 60% of facts during context-window 'compaction' (summarization/management). It introduces a hash-addressed tuple system that maintains perfect recall while eliminating the high token costs and reliability issues of long-context prompts.
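To make the idea concrete, here is a minimal sketch of what a hash-addressed tuple store with O(1) retrieval could look like. This is an illustration of the general technique, not the paper's actual implementation; the class and method names are invented for the example.

```python
import hashlib

class KnowledgeStore:
    """Illustrative hash-addressed fact store: each fact is a discrete
    (subject, predicate, value) tuple, addressed by a hash of its key."""

    def __init__(self):
        self._facts = {}  # hash key -> (subject, predicate, value)

    @staticmethod
    def _key(subject: str, predicate: str) -> str:
        # Address each fact by a stable hash of its (subject, predicate) pair.
        return hashlib.sha256(f"{subject}|{predicate}".encode()).hexdigest()

    def put(self, subject: str, predicate: str, value: str) -> str:
        key = self._key(subject, predicate)
        self._facts[key] = (subject, predicate, value)
        return key

    def get(self, subject: str, predicate: str):
        # O(1) dict lookup. Returns None if the fact was never stored,
        # so recall is exact rather than approximate: nothing is ever
        # paraphrased or silently dropped by a summarization pass.
        fact = self._facts.get(self._key(subject, predicate))
        return fact[2] if fact else None

store = KnowledgeStore()
store.put("project", "deadline", "2026-04-01")
print(store.get("project", "deadline"))  # prints "2026-04-01"
```

Unlike in-context memory, retrieval cost here is independent of how many facts are stored, which is the property that avoids both the token cost of long prompts and the lossy compaction step.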

From the abstract

Large language models increasingly serve as persistent knowledge workers, with in-context memory - facts stored in the prompt - as the default strategy. We benchmark in-context memory against Knowledge Objects (KOs), discrete hash-addressed tuples with O(1) retrieval. Within the context window, Claude Sonnet 4.5 achieves 100% exact-match accuracy from 10 to 7,000 facts (97.5% of its 200K window). However, production deployment reveals three failure modes: capacity limits (prompts overflow at 8,0