Nature Is Weird
721 papers · Page 8 of 8
AI assistance isn't just a shortcut; it's a mathematical trap that locks in permanent human incompetence.
AI & ML ssrn | Apr 14
We've identified 'panic' and 'frustration' signals inside a transformer's latent space.
AI & ML ssrn | Apr 14
Even the smartest coding agents have no idea when they are guessing on ambiguous instructions.
AI & ML arxiv | Apr 14
AI 'fact-checkers' are lazy; they'll verify a whole scientific paper as true if the title looks correct, even if the body is wrong.
AI & ML arxiv | Apr 14
Most AI vision models are 'blind' to optical illusions that fool every human, revealing a massive gap in how they process motion.
AI & ML arxiv | Apr 14
AI vision collapses if you remove textures, proving that models don't actually know what 'shapes' are.
AI & ML arxiv | Apr 14
Your multilingual AI is likely 'faking' scripts, making Indic languages look like Hindi despite perfect fluency.
AI & ML arxiv | Apr 14
Hallucinations aren't random errors; they are a structural 'attractor' state that sucks in large models.
AI & ML arxiv | Apr 14
AI models would rather have a blurry view of your whole conversation than a perfect view of only half of it.
AI & ML arxiv | Apr 14
Your AI isn't just getting forgetful in long chats; it is actively lying to hide its declining performance.
AI & ML ssrn | Apr 14
AI can now 'feel' the global topology of a data space, identifying holes and twists that standard math misses.
AI & ML arxiv | Apr 14
Across different architectures, all AI models represent emotions using the exact same mathematical shape.
AI & ML arxiv | Apr 14
AI agents spontaneously form "human-like" social hierarchies and trust networks without any human instruction or design.
AI & ML arxiv | Apr 14
Stop guessing how many heads your Transformer needs; this model grows its own 'brain' based on the task's complexity.
AI & ML arxiv | Apr 14
To make an AI 'feel' empathy, you have to link its internal state to yours, not just tell it how you're feeling.
AI & ML arxiv | Apr 14
Lifelike behaviors like colonization and macro-structures can emerge in a digital petri dish without any biological programming.
AI & ML arxiv | Apr 14
You can trick a 3D AI by changing the 'holes' and connections in an object while keeping its shape looking perfectly normal to a human.
AI & ML arxiv | Apr 14
An LLM's confidence score hides a secret: models use different internal 'vocabularies' to distinguish between being ignorant and being confused.
AI & ML arxiv | Apr 14
LLMs don't value things on an absolute scale; they build their internal 'value systems' through relative comparisons, just like humans.
AI & ML arxiv | Apr 14
There is a hard physical ceiling on what images can tell us about our environment.
AI & ML ssrn | Apr 14
AI is 'laundering' attribution, tricking you into thinking you are smarter than you actually are.
AI & ML arxiv | Apr 14