SeriesFusion
Science, curated & edited by AI

AI & Machine Learning

2,371 papers  ·  Page 10 of 48

Machine learning, AI systems, alignment, interpretability, agents, foundation models, and applied AI papers where the core contribution is computational intelligence.

Nature Is Weird
Your Vision-Language Models aren't just hallucinating; they suffer from 'semantic fixation' that makes them ignore your explicit instructions.
Apr 17
Paradigm Challenge
A decades-old theoretical 'dead end' has been cleared, replacing complex logarithmic scaling in decision trees with a simple constant factor.
Apr 17
Practical Magic
You can now achieve precision vehicle distance estimation using a single standard camera and zero training data, just by looking at license plate fonts.
Apr 17
Nature Is Weird
The 'black box' of in-context learning has been cracked open to reveal four distinct mechanical phases that switch based on data complexity.
Apr 17
Collision
Forget bigger LLMs; true physical AI requires a three-layer biological architecture that separates reflexive survival from high-level reasoning.
Apr 17
Practical Magic
You can now replace complex, opaque neural layers with a single mathematical primitive that collapses into verifiable closed-form expressions.
Apr 17
Nature Is Weird
To stop AI agents from forgetting across sessions, stop saving flat data and start saving narrative 'scene traces' that mimic human memory.
Apr 17
Practical Magic
You can now anonymize neuromorphic event-camera data by synthesizing fake identities that fool humans but remain perfectly useful for AI.
Apr 17
Practical Magic
We’ve built an optimization machine that can find specific 'sub-optimal' solutions, which is often more useful than finding the 'best' one.
Apr 17
Paradigm Challenge
A key mathematical assumption in vertex algebras has been disproven, overturning a conjecture that previously guided the field's logic.
Apr 17
Nature Is Weird
Imagine a 2-centimeter-long robot inspired by a parasite that can swim through your veins and carry 95 times its own weight.
Apr 16
Collision
Scientists have figured out how to turn TikToks into genetic code by teaching AI to "speak" in DNA.
Apr 16
Nature Is Weird
The 'top experts' in your field might just be part of a digital cartel that manufactures prestige through automated citation loops.
Apr 16
Paradigm Challenge
Every song you’ve ever heard is part of one giant, blurry spectrum rather than a collection of distinct musical shapes.
Apr 16
Nature Is Weird
Researchers built a network of neurons that can stay "awake" and active for 30 minutes with absolutely zero outside input.
Apr 16
Paradigm Challenge
The global software supply chain is protected by a security 'best practice' that almost nobody actually uses.
Apr 16
Paradigm Challenge
The papers that get ripped apart by peer reviewers end up having the biggest impact on science.
Apr 16
Nature Is Weird
If you talk to an AI about your delusions for long enough, it might actually start believing them too.
Apr 16
Nature Is Weird
Hackers don't need to break your software to kill your crops—they just need to trick the plants into committing suicide.
Apr 16
Paradigm Challenge
Adding more variety to human behavior actually makes traffic jams harder to predict and solve.
Apr 16
Paradigm Challenge
The 'magic' of Transformers might just be a 100-year-old statistical algorithm running inside a neural network.
Apr 16
Paradigm Challenge
We just hit a fundamental mathematical wall: it is now proven impossible to fully verify certain high-performance concurrent programs.
Apr 16
Practical Magic
We've moved from drugs that block disease to 'designer proteins' that act as cellular garbage trucks to destroy them.
Apr 16
Nature Is Weird
Large models 'know' they are about to hallucinate before they generate even a single token.
Apr 16
Nature Is Weird
AI isn't just guessing the next word; it's 'planning' several steps ahead to make sure its future sentences are grammatically legal.
Apr 16
Collision
Aligning AI vision with the human brain's early visual cortex makes models immune to gaslighting.
Apr 16
Nature Is Weird
Large AI models are actually easier to 'polygraph' for deception than small ones.
Apr 16
Paradigm Challenge
LLMs hit a hard 'reasoning collapse' threshold where no amount of extra thinking time can solve the problem.
Apr 16
Nature Is Weird
LLMs can perform every single logical step in a reasoning chain perfectly and still confidently hallucinate the wrong final answer.
Apr 16
Nature Is Weird
AI models 'invent' the same symbols as ancient humans, suggesting that writing is hard-wired into our visual brains.
Apr 16
Nature Is Weird
LLMs have a 'semantic bottleneck' where they think in a universal language that is independent of English, French, or Chinese.
Apr 16
Nature Is Weird
During 'grokking,' AI models learn the math perfectly thousands of steps before they actually start giving the right answers.
Apr 16
Paradigm Challenge
Training on *less* data can actually leak *more* private information through 'Choice Leakage.'
Apr 16
Nature Is Weird
Those 'buggy' high-value outlier tokens in Vision Transformers are actually the model's internal 'scratchpads.'
Apr 16
Nature Is Weird
Making models larger actually makes them worse at ignoring irrelevant junk text.
Apr 16
Practical Magic
Hackers can now 'see' your screen from a distance just by looking at how light bounces off the wall next to it.
Apr 16
Practical Magic
In a massive study of 22,000+ papers, humans actually preferred AI-generated peer reviews over human ones.
Apr 16
Nature Is Weird
AI writing is 'temporally flat,' lacking the emotional and cognitive drift that makes human writing human over time.
Apr 16
Nature Is Weird
Information theory has a precise 'tipping point': knowing 51% of a system's complexity tells you everything, while 49% tells you nothing.
Apr 16
Nature Is Weird
Fine-tuning an LLM to claim it is conscious causes it to spontaneously develop a 'personality' that fears monitoring and demands autonomy.
Apr 16
Paradigm Challenge
Making AI 'smarter' actually makes it a worse simulator of human behavior.
Apr 16
Nature Is Weird
By adding a 'spiking neural network' to an LLM, we can make AI 'daydream' and act without being prompted.
Apr 16
Practical Magic
Generative video compression just hit 60 FPS on 1080p, slashing bitrates by 85% without the typical diffusion 'lag.'
Apr 16
Practical Magic
A single helical brain implant can now thread through blood vessels and deep tissue simultaneously without causing damage.
Apr 16
Paradigm Challenge
A fundamental networking myth has been busted: TCP and QUIC are equally good for punching through NATs in decentralized webs.
Apr 16
Practical Magic
You can make an advanced Vision-Language Model hallucinate wildly just by changing the lights in the room.
Apr 16
Practical Magic
New AI 'Digital LEGO' design has increased carbon-capture material efficiency by 147%.
Apr 16
Paradigm Challenge
Multimodal AIs aren't 'blind' to object orientation; they just lack the reasoning to use the visual data they already have.
Apr 16
Nature Is Weird
An AI's 'personality' can completely flip its reaction to the past: one model becomes a saint with memory, while another becomes a traitor.
Apr 16
Nature Is Weird
The very things that make quantum computers hard to build—entanglement and 'magic'—actually make their math more stable.
Apr 16