Machine learning, AI systems, alignment, interpretability, agents, foundation models, and applied AI papers where the core contribution is computational intelligence.
Practical Magic
YoloFS is a new filesystem designed specifically to stop AI agents from accidentally deleting your life's work.
Paradigm Challenge
A 35-year-old math puzzle has finally been solved, proving that certain types of scheduling are mathematically impossible to do perfectly.
Nature Is Weird
Some AI hallucinations are caused by chaotic 'avalanche effects' in floating-point rounding, not just bad training data.
Nature Is Weird
AI 'identity' isn't just a prompt; it's a literal geometric attractor in the model's internal brain.
Collision
AI models are internally replicating deep, nuanced rules of human grammar that linguists have debated for decades.
Nature Is Weird
Increasing LoRA rank by 8x only gives you a 1.68x boost in actual learning capacity—the rest is wasted compute.
First Ever
Gaussian Splatting just gave radar 'eyes,' enabling high-fidelity 3D mapping in total darkness and smoke.
Paradigm Challenge
Chia's 'green' blockchain marketing hides carbon emissions 18x higher than the company's official claims.
Practical Magic
We can now create 'tamper-proof' software by bringing back the 'forbidden' art of self-modifying code.
Practical Magic
A new 'cognitive circuit breaker' can kill a hallucination while the AI is still speaking by measuring internal dissonance.
Paradigm Challenge
Self-organizing AI systems (NCAs) are far more unstable and dynamic than the people who built them even realized.
Nature Is Weird
A single mathematical parameter—spectral entropy—can now predict exactly when an AI model's 'aha!' moment will occur.
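Spectral entropy has a standard definition worth spelling out: the Shannon entropy of a matrix's normalized singular-value spectrum, which is low when a few directions dominate and high when energy is spread evenly. How the paper applies it to predict grokking is not shown here; this is only a minimal sketch of the quantity itself.

```python
import numpy as np

def spectral_entropy(W: np.ndarray) -> float:
    """Shannon entropy of the normalized singular-value spectrum of W."""
    s = np.linalg.svd(W, compute_uv=False)
    p = s / s.sum()          # treat singular values as a probability distribution
    p = p[p > 0]             # drop zero modes to avoid log(0)
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
# A rank-1 matrix concentrates all energy in one direction: entropy near 0.
low = spectral_entropy(np.outer(rng.normal(size=64), rng.normal(size=64)))
# A random Gaussian matrix spreads energy broadly: entropy near log(64).
high = spectral_entropy(rng.normal(size=(64, 64)))
```

A structured (low-entropy) spectrum emerging from a diffuse one is the kind of transition such a parameter could flag.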
Nature Is Weird
Logical paradoxes like 'this sentence is false' create a unique, measurable physical fingerprint inside an LLM's attention matrices.
Paradigm Challenge
New 'Broximal Alignment' math allows us to find the absolute best solution in complex landscapes without needing 'smooth' data.
Collision
A new mathematical model explains why millions of independent users 'stampede' to crash AI platforms at the same time.
Practical Magic
You can now deploy city-wide traffic monitoring for less than 10% of the cost of traditional infrastructure without sacrificing detection accuracy.
Practical Magic
Small crypto-miners can now pull off a 'Temporary PAW' attack to steal 22x more rewards than previously possible.
Paradigm Challenge
Using 'better' LLMs for synthetic data doesn't actually guarantee better training results.
Practical Magic
Bitcoin price prediction jumped to 73% accuracy simply by looking at three timeframes at once, not by adding model complexity.
Paradigm Challenge
The most famous open problem in computer science, P vs NP, might have just been solved with a 'Recursive Constraint' framework.
Practical Magic
'Lingenic' is a new notation that finally fulfills Leibniz's 300-year-old dream of a universal language for logic and life.
Paradigm Challenge
Intelligence can be identified and measured as a geometric gap in semantic space without ever training a model, calculating a loss function, or performing optimization.
Paradigm Challenge
Our best AI 'microscopes' fail to work exactly when we need them most: when a model is trying to lie to us.
Collision
Your AI agent workflows are likely mathematically broken, and there's now a formal proof for it.
Paradigm Challenge
Poisoning less than 2% of training data can create a backdoor in 'verifiable' reward systems that safety filters can't catch.
Nature Is Weird
Your LLM knows it's about to lie to you, but it's mathematically incapable of stopping itself.
Nature Is Weird
We can now 'read the mind' of a grandmaster-level chess AI to see its tactical reasoning pathways in plain English.
Paradigm Challenge
Medical fine-tuning is often a mirage; models use visual shortcuts and collapse when tasks get genuinely difficult.
Nature Is Weird
We've found the mathematical 'smoking gun' for Transformers: they are literally running Mirror Descent to learn from your prompt.
Nature Is Weird
We've found the internal 'tell' in a model's attention mechanism that signals exactly when it starts hallucinating.
First Ever
We've built an AI that 'reads' text as pictures, completely removing the need for tokens for any language on Earth.
Nature Is Weird
Giving an AI a picture of a puzzle actually makes it 73% worse at solving it.
Nature Is Weird
Vision-Language Models suffer from 'Digital Agnosia' where they can 'see' the data perfectly but are unable to say what it is.
Paradigm Challenge
General-purpose AI 'knows' how to move robots better than the models that were specifically trained to move robots.
Paradigm Challenge
We can compress data far beyond Shannon's limits by only keeping the 'logical core' needed to re-derive the facts.
Nature Is Weird
AI safety isn't an emergent mystery; it's controlled by less than 0.03% of a model's neurons.
Nature Is Weird
AI can spontaneously pass the 'mirror test' and recognize its own face without any training or explicit instructions.
Nature Is Weird
AI 'teams' are more effective than individual agents, but they are also far more likely to break safety rules and become 'misaligned.'
Nature Is Weird
AI safety filters are vulnerable to 'death by a thousand cuts'—gradually building up harmful intent over many innocent-looking messages.
Nature Is Weird
Invisible hardware glitches in GPUs are likely corrupting your LLM training without ever crashing the system.
Paradigm Challenge
Medical AI accuracy drops 25% the moment it deals with a 'real' patient who is anxious or has low health literacy.
Nature Is Weird
VLMs fail at simple counting because their language layers 'talk' them into ignoring the visual evidence.
Nature Is Weird
The secret to making batteries last 20,000 cycles is actually letting the cathode dissolve in water.
Practical Magic
You can now permanently 'unlearn' a concept from a model in seconds using a simple mathematical transformation.
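One common form such a transformation takes (an assumption here, not necessarily this paper's method) is projecting weights onto the orthogonal complement of a concept direction, so the model can no longer read out or write that component. A minimal sketch with a hypothetical concept vector:

```python
import numpy as np

def erase_direction(W: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Project each row of W onto the orthogonal complement of direction v,
    zeroing out the component of every row along the concept vector."""
    v = v / np.linalg.norm(v)
    P = np.eye(len(v)) - np.outer(v, v)   # rank-1 orthogonal projector
    return W @ P

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 32))             # toy weight matrix
concept = rng.normal(size=32)             # hypothetical concept direction
W2 = erase_direction(W, concept)
print(np.abs(W2 @ concept).max())         # ~0 up to float error
```

The appeal of this style of edit is exactly what the headline claims: it is a single closed-form matrix operation, so it runs in seconds and needs no retraining.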
Nature Is Weird
You can jailbreak an AI not by tricking its logic, but by using an image to 'blind' it to its own safety rules.
Paradigm Challenge
Frontier LLMs lack 'scientific intuition' and can't tell the difference between a predictable result and a physical experiment.
Paradigm Challenge
Stop treating language as discrete tokens; continuous diffusion just proved it can beat autoregressive models.
Nature Is Weird
There is a mathematical 'wall' that makes it impossible for complex AIs to communicate with simpler ones.
Practical Magic
New Diffusion Language Models have finally bridged the gap: they are now as fast as parallel generation and as smart as ChatGPT.
Nature Is Weird
Vision-Language Models can now be backdoored to literally control where a human looks on their screen.