AI & ML Nature Is Weird

AI writing is 'temporally flat,' lacking the emotional and cognitive drift that makes human writing human over time.

April 16, 2026

Original Paper

Temporal Flattening in LLM-Generated Text: Comparing Human and LLM Writing Trajectories

arXiv · 2604.12097

The Takeaway

While we've focused before on word-level AI detection, this paper identifies a much deeper signal: 'temporal flattening.' Human perspective and style naturally evolve over the course of a long document or across years of writing, but AI output remains eerily consistent. That lack of 'drift' is enough to distinguish AI from human text with over 94% accuracy, and it points to a fundamental limitation in current AI: it cannot simulate a developing personality. For practitioners, drift is a powerful new signal for digital forensics and content authentication. It also suggests that to truly 'pass' as human, an AI would need to simulate the messy, time-dependent evolution of a human mind.
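To make the idea concrete, here is a minimal sketch of how one might quantify drift over a chronologically ordered set of an author's documents. This is not the paper's method: the TF-IDF features, cosine distance, and the `drift_profile` / `flatness_score` helpers are assumptions chosen for illustration, standing in for whatever style representation the authors actually use.

```python
# Minimal sketch (illustrative, not the paper's method): embed each document
# with simple TF-IDF features and measure how far consecutive documents move
# in that space. The intuition from the paper is that human corpora should
# show larger, evolving drift, while LLM-generated corpora stay "flat".
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances


def drift_profile(docs_in_time_order: list[str]) -> np.ndarray:
    """Cosine distance between each document and the one written before it."""
    vectors = TfidfVectorizer(min_df=1).fit_transform(docs_in_time_order)
    dists = [
        cosine_distances(vectors[i - 1], vectors[i])[0, 0]
        for i in range(1, vectors.shape[0])
    ]
    return np.array(dists)


def flatness_score(docs_in_time_order: list[str]) -> float:
    """Variance of consecutive drift; low values ~ 'temporally flat' writing."""
    return float(np.var(drift_profile(docs_in_time_order)))


if __name__ == "__main__":
    # Hypothetical example: three batches of writing by one author over years.
    docs = ["essays written in 2015 ...", "essays from 2019 ...", "essays from 2024 ..."]
    print(drift_profile(docs))
    print(flatness_score(docs))
```

In a detector built on this intuition, the drift statistics (rather than any single document's wording) would be the features fed to a classifier separating human from LLM authors.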

From the abstract

Large language models (LLMs) are increasingly used in daily applications, from content generation to code writing, where each interaction treats the model as stateless, generating responses independently without memory. Yet human writing is inherently longitudinal: authors' styles and cognitive states evolve across months and years. This raises a central question: can LLMs reproduce such temporal structure across extended time periods? We construct and publicly release a longitudinal dataset of