To stop AI agents from forgetting across sessions, stop saving flat data and start saving narrative 'scene traces' that mimic human memory.
April 17, 2026
Original Paper
Drawing on Memory: Dual-Trace Encoding Improves Cross-Session Recall in LLM Agents
arXiv · 2604.12948
The Takeaway
Long-term recall in LLM agents is notoriously brittle and context-blind. This research applies the human 'drawing effect' (the finding that actively reconstructing context strengthens memory) to LLMs via dual-trace encoding. By storing a narrative reconstruction alongside each fact, recall improves significantly over simple data retrieval. This points toward agents that can maintain 'character' and context over weeks or months of interaction, addressing the 'forgetful assistant' problem. It suggests that memory is about storytelling, not just storage.
From the abstract
LLM agents with persistent memory store information as flat factual records, providing little context for temporal reasoning, change tracking, or cross-session aggregation. Inspired by the drawing effect [3], we introduce dual-trace memory encoding. In this method, each stored fact is paired with a concrete scene trace, a narrative reconstruction of the moment and context in which the information was learned. The agent is forced to commit to specific contextual details during encoding, creating …
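To make the idea concrete, here is a minimal sketch of what a dual-trace memory record might look like. This is an illustrative assumption, not the paper's implementation: the class names (`DualTraceRecord`, `DualTraceMemory`) are invented, and retrieval is naive keyword matching where the paper's agent would presumably use an LLM to generate the scene trace and embedding search to recall it.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DualTraceRecord:
    fact: str             # flat factual content
    scene_trace: str      # narrative reconstruction of the learning context
    learned_at: datetime  # timestamp, enabling temporal reasoning

class DualTraceMemory:
    """Hypothetical store pairing each fact with its scene trace."""

    def __init__(self) -> None:
        self.records: list[DualTraceRecord] = []

    def encode(self, fact: str, scene_trace: str) -> None:
        # Committing to concrete contextual details at encoding time
        # is the core of the dual-trace idea.
        self.records.append(
            DualTraceRecord(fact, scene_trace, datetime.now(timezone.utc))
        )

    def recall(self, query: str) -> list[DualTraceRecord]:
        # Matching against the scene trace as well as the fact is what
        # lets a contextual cue surface a fact the cue never mentions.
        q = query.lower()
        return [
            r for r in self.records
            if q in r.fact.lower() or q in r.scene_trace.lower()
        ]

mem = DualTraceMemory()
mem.encode(
    fact="User prefers meetings after 2pm.",
    scene_trace=(
        "During Tuesday's onboarding call, the user apologized for "
        "rescheduling and said mornings are blocked for deep work."
    ),
)
# A purely contextual cue ("onboarding call") retrieves the flat fact.
print(mem.recall("onboarding call")[0].fact)
```

Note how the query never mentions meeting times, yet the fact is recovered through the scene trace, which is the cross-session, context-driven recall the abstract describes.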