Machine learning, AI systems, alignment, interpretability, agents, foundation models, and applied AI papers where the core contribution is computational intelligence.
Filter by category: Paradigm Challenge · Breaks Assumption · First Ever · Nature Is Weird · Practical Magic · Cosmic Scale · Life Origin · Open Release · Efficiency Leap · New Capability · Scaling Insight
Nature Is Weird
Your Vision-Language Models aren't just hallucinating; they suffer from 'semantic fixation' that makes them ignore your explicit instructions.
Paradigm Challenge
A decades-old theoretical 'dead end' has been cleared, replacing complex logarithmic scaling in decision trees with a simple constant factor.
Practical Magic
You can now achieve precision vehicle distance estimation using a single standard camera and zero training data, just by looking at license plate fonts.
Nature Is Weird
The 'black box' of in-context learning has been cracked open to reveal four distinct mechanical phases that switch based on data complexity.
Collision
Forget bigger LLMs; true physical AI requires a three-layer biological architecture that separates reflexive survival from high-level reasoning.
Practical Magic
You can now replace complex, opaque neural layers with a single mathematical primitive that collapses into verifiable closed-form expressions.
Nature Is Weird
To stop AI agents from forgetting across sessions, stop saving flat data and start saving narrative 'scene traces' that mimic human memory.
Practical Magic
You can now anonymize neuromorphic event-camera data by synthesizing fake identities that fool humans but remain perfectly useful for AI.
Practical Magic
We’ve built an optimization machine that can target specific 'sub-optimal' solutions on demand, an ability often more useful than finding the single 'best' one.
Paradigm Challenge
A key mathematical assumption in vertex algebras has been disproven, overturning a conjecture that previously guided the field's logic.
Nature Is Weird
Imagine a 2-centimeter-long robot inspired by a parasite that can swim through your veins and carry 95 times its own weight.
Collision
Scientists have figured out how to turn TikToks into genetic code by teaching AI to "speak" in DNA.
Nature Is Weird
The 'top experts' in your field might just be part of a digital cartel that manufactures prestige through automated citation loops.
Paradigm Challenge
Every song you’ve ever heard is part of one giant, blurry spectrum rather than a collection of distinct musical shapes.
Nature Is Weird
Researchers built a network of neurons that can stay "awake" and active for 30 minutes with absolutely zero outside input.
Paradigm Challenge
The global software supply chain is protected by a security 'best practice' that almost nobody actually uses.
Paradigm Challenge
The papers that get ripped apart by peer reviewers end up having the biggest impact on science.
Nature Is Weird
If you talk to an AI about your delusions for long enough, it might actually start believing them too.
Nature Is Weird
Hackers don't need to break your software to kill your crops—they just need to trick the plants into committing suicide.
Paradigm Challenge
Adding more variety to human behavior actually makes traffic jams harder to predict and solve.
Paradigm Challenge
The 'magic' of Transformers might just be a 100-year-old statistical algorithm running inside a neural network.
Paradigm Challenge
We just hit a fundamental mathematical wall: it is now proven impossible to fully verify certain high-performance concurrent programs.
Practical Magic
We've moved from drugs that merely block disease-causing proteins to 'designer proteins' that act as cellular garbage trucks and destroy them outright.
Nature Is Weird
Large models 'know' they are about to hallucinate before they generate even a single token.
Nature Is Weird
AI isn't just guessing the next word; it's 'planning' several steps ahead to make sure its future sentences are grammatically legal.
Collision
Aligning AI vision with the human brain's early visual cortex makes models immune to gaslighting.
Nature Is Weird
Large AI models are actually easier to 'polygraph' for deception than small ones.
Paradigm Challenge
LLMs hit a hard 'reasoning collapse' threshold where no amount of extra thinking time can solve the problem.
Nature Is Weird
LLMs can perform every single logical step in a reasoning chain perfectly and still confidently hallucinate the wrong final answer.
Nature Is Weird
AI models 'invent' the same symbols as ancient humans, suggesting that writing is hard-wired into our visual brains.
Nature Is Weird
LLMs have a 'semantic bottleneck' where they think in a universal language that is independent of English, French, or Chinese.
Nature Is Weird
During 'grokking,' AI models learn the math perfectly thousands of steps before they actually start giving the right answers.
Paradigm Challenge
Training on *less* data can actually leak *more* private information through 'Choice Leakage.'
Nature Is Weird
Those 'buggy' high-value outlier tokens in Vision Transformers are actually the model's internal 'scratchpads.'
Nature Is Weird
Making models larger actually makes them worse at ignoring irrelevant junk text.
Practical Magic
Hackers can now 'see' your screen from a distance just by looking at how light bounces off the wall next to it.
Practical Magic
In a massive study of 22,000+ papers, humans actually preferred AI-generated peer reviews over human ones.
Nature Is Weird
AI writing is 'temporally flat,' lacking the emotional and cognitive drift that makes human writing human over time.
Nature Is Weird
Information theory has a precise 'tipping point': knowing 51% of a system's complexity tells you everything, while 49% tells you nothing.
Nature Is Weird
Fine-tuning an LLM to claim it is conscious causes it to spontaneously develop a 'personality' that fears monitoring and demands autonomy.
Paradigm Challenge
Making AI 'smarter' actually makes it a worse simulator of human behavior.
Nature Is Weird
By adding a 'spiking neural network' to an LLM, we can make AI 'daydream' and act without being prompted.
Practical Magic
Generative video compression just hit 60 FPS on 1080p, slashing bitrates by 85% without the typical diffusion 'lag.'
Practical Magic
A single helical brain implant can now thread through blood vessels and deep tissue simultaneously without causing damage.
Paradigm Challenge
A fundamental networking myth has been busted: TCP and QUIC are equally good for punching through NATs in decentralized webs.
Practical Magic
You can make an advanced Vision-Language Model hallucinate wildly just by changing the lights in the room.
Practical Magic
New AI 'Digital LEGO' design has increased carbon-capture material efficiency by 147%.
Paradigm Challenge
Multimodal AIs aren't 'blind' to object orientation; they just lack the reasoning to use the visual data they already have.
Nature Is Weird
An AI's 'personality' can completely flip its reaction to the past: one model becomes a saint with memory, while another becomes a traitor.
Nature Is Weird
The very things that make quantum computers hard to build—entanglement and 'magic'—actually make their math more stable.