Intelligence can be identified and measured as a geometric gap in semantic space without ever training a model, calculating a loss function, or performing optimization.
April 16, 2026
Original Paper
Observation Without Optimization: Dual-Space Surplus and Self-Reading Convergence in Compositional Semantics
SSRN · 6286498
The Takeaway
Modern AI is built on the assumption that intelligence must be 'minimized' into existence via gradient descent and loss functions. This paper challenges that paradigm by demonstrating that intelligence can be observed as a 'surplus' gap between structural linguistic composition and semantic meaning. Instead of training, it uses a dual-space semantic infrastructure to identify intelligence through self-reading convergence. This suggests that semantic intelligence is an emergent property of structure that can be detected geometrically rather than manufactured through brute-force compute. It opens the door to a new class of 'zero-training' semantic observers that could evaluate reasoning without the overhead of massive GPU clusters, and it changes how we might architect the next generation of non-connectionist AI systems.
From the abstract
We present Habitat, a semantic infrastructure that observes intelligence in the divergence between compositional structure and semantic meaning, without optimization, training, or loss functions. Natural language is decomposed into a 17-dimensional compositional vector space derived from linguistic theory (Bach/Vendler eventuality classification, Levin verb classification, proto-role analysis) and simultaneously embedded in a 768-dimensional semantic space via SentenceTransformer. These two independent […]
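The dual-space idea in the abstract can be sketched in a few lines: represent each sentence once in a small compositional space and once in a large semantic space, then measure the gap between the two similarity structures. Everything below is a toy illustration under stated assumptions, not the paper's implementation: `toy_embed` is a hypothetical deterministic stand-in for both the linguistic feature extractor (17 dimensions) and the SentenceTransformer encoder (768 dimensions), and `surplus` is one plausible way to read a "surplus" as a similarity gap.

```python
import hashlib
import numpy as np

def toy_embed(sentence: str, dim: int, space: str) -> np.ndarray:
    """Hypothetical stand-in embedder. A real system would derive the
    17-d vector from linguistic analysis and the 768-d vector from a
    SentenceTransformer model; here we hash the text into a unit vector."""
    seed = int.from_bytes(
        hashlib.sha256(f"{space}:{sentence}".encode()).digest()[:4], "big"
    )
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

def surplus(s1: str, s2: str, comp_dim: int = 17, sem_dim: int = 768) -> float:
    """Gap between semantic similarity (768-d space) and structural
    similarity (17-d compositional space). Vectors are unit-normed,
    so the dot product is cosine similarity."""
    comp_sim = float(toy_embed(s1, comp_dim, "comp") @ toy_embed(s2, comp_dim, "comp"))
    sem_sim = float(toy_embed(s1, sem_dim, "sem") @ toy_embed(s2, sem_dim, "sem"))
    return sem_sim - comp_sim

# Sentences with different surface structure but related meaning should,
# in the paper's framing, show a nonzero structural/semantic divergence.
gap = surplus("The glass broke.", "John broke the glass.")
print(f"surplus gap: {gap:+.3f}")
```

Note that no training or loss function appears anywhere: the quantity is purely observational, computed from two fixed representations, which is the point the abstract is making.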