AI & ML Paradigm Shift

Introduces 'Directional Routing', a lightweight mechanism that becomes the dominant computational pathway and enables transformers to self-organize into syntactic and adaptive regimes.

March 17, 2026

Original Paper

Directional Routing in Transformers

Kevin Taylor

arXiv · 2603.14923

The Takeaway

The paper shows that a tiny parameter addition (3.9%) can fundamentally change model behavior, to the point that the router becomes more critical than the attention heads themselves. This challenges the 'more heads/layers is better' scaling paradigm in favor of coordinated routing.

From the abstract

We introduce directional routing, a lightweight mechanism that gives each transformer attention head learned suppression directions controlled by a shared router, at 3.9% parameter cost. We train a 433M-parameter model alongside an identical baseline in a single run, then trace the resulting circuits through mechanistic interpretability. Routing becomes the model's dominant computational pathway. Disabling it collapses factual recall to near-zero probability across all 8 test prompts and drops in
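The abstract does not give the routing equations, so the following is only a minimal sketch of one plausible reading of "learned suppression directions controlled by a shared router": each head gets a unit direction, and a shared router emits a gate that scales how much of the head output's component along that direction is subtracted. Every name, shape, the pooling scheme, and the sigmoid router here are assumptions for illustration, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

H, D = 4, 16  # heads, per-head dim (illustrative sizes, not the paper's)

# Hypothetical parameters: one learned suppression direction per head,
# plus a single shared linear router mapping pooled head outputs to gates.
directions = rng.normal(size=(H, D))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
router_w = rng.normal(size=(H * D, H)) * 0.1  # shared across all heads

def directional_route(head_outputs):
    """Suppress each head's component along its learned direction,
    gated by a shared router. head_outputs: (H, D) -> (H, D)."""
    pooled = head_outputs.reshape(-1)                     # router input
    gates = 1.0 / (1.0 + np.exp(-(pooled @ router_w)))    # (H,) in (0, 1)
    proj = np.einsum('hd,hd->h', head_outputs, directions)  # per-head scalar
    # Subtract gate * (component along direction) from each head's output.
    return head_outputs - gates[:, None] * proj[:, None] * directions

heads = rng.normal(size=(H, D))
routed = directional_route(heads)
print(routed.shape)  # (4, 16)
```

Under this reading the extra parameter cost is small — H directions of size D plus one shared router matrix — which is consistent in spirit with the paper's reported 3.9% overhead, though the paper's exact parameterization is unknown.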