SeriesFusion
Science, curated & edited by AI

AI & Machine Learning

2,371 papers  ·  Page 11 of 48

Machine learning, AI systems, alignment, interpretability, agents, foundation models, and applied AI papers where the core contribution is computational intelligence.

Practical Magic
YoloFS is a new filesystem designed specifically to stop AI agents from accidentally deleting your life's work.
Apr 16
Paradigm Challenge
A 35-year-old math puzzle has finally been solved, proving that certain types of scheduling are mathematically impossible to do perfectly.
Apr 16
Nature Is Weird
Some AI hallucinations are caused by chaotic 'avalanche effects' in floating-point rounding, not just bad training data.
Apr 16
Nature Is Weird
AI 'identity' isn't just a prompt; it's a literal geometric attractor in the model's internal brain.
Apr 16
Collision
AI models are internally replicating deep, nuanced rules of human grammar that linguists have debated for decades.
Apr 16
Nature Is Weird
Increasing LoRA rank by 8x only gives you a 1.68x boost in actual learning capacity—the rest is wasted compute.
Apr 16
First Ever
Gaussian Splatting just gave radar 'eyes,' enabling high-fidelity 3D mapping in total darkness and smoke.
Apr 16
Paradigm Challenge
Chia's 'green' blockchain marketing hides carbon emissions 18x higher than the company's official claims.
Apr 16
Practical Magic
We can now create 'tamper-proof' software by bringing back the 'forbidden' art of self-modifying code.
Apr 16
Practical Magic
A new 'cognitive circuit breaker' can kill a hallucination while the AI is still speaking by measuring internal dissonance.
Apr 16
Paradigm Challenge
Self-organizing AI systems (NCAs) are far more unstable and dynamic than the people who built them even realized.
Apr 16
Nature Is Weird
A single mathematical parameter—spectral entropy—can now predict exactly when an AI model's 'aha!' moment will occur.
Apr 16
Nature Is Weird
Logical paradoxes like 'this sentence is false' create a unique, measurable physical fingerprint inside an LLM's attention matrices.
Apr 16
Paradigm Challenge
New 'Broximal Alignment' math allows us to find the absolute best solution in complex landscapes without needing 'smooth' data.
Apr 16
Collision
A new mathematical model explains why millions of independent users 'stampede' to crash AI platforms at the same time.
Apr 16
Practical Magic
You can now deploy city-wide traffic monitoring for less than 10% of the cost of traditional infrastructure without sacrificing detection accuracy.
Apr 16
Practical Magic
Small crypto-miners can now pull off a 'Temporary PAW' attack to steal 22x more rewards than previously possible.
Apr 16
Paradigm Challenge
Using 'better' LLMs for synthetic data doesn't actually guarantee better training results.
Apr 16
Practical Magic
Bitcoin price prediction jumped to 73% accuracy by simply looking at three timeframes at once, ignoring model complexity.
Apr 16
Paradigm Challenge
The most famous open problem in computer science, P vs NP, might have just been solved with a 'Recursive Constraint' framework.
Apr 16
Practical Magic
'Lingenic' is a new notation that finally fulfills Leibniz's 300-year-old dream of a universal language for logic and life.
Apr 16
Paradigm Challenge
Intelligence can be identified and measured as a geometric gap in semantic space without ever training a model, calculating a loss function, or performing optimization.
Apr 16
Paradigm Challenge
Our best AI 'microscopes' fail to work exactly when we need them most: when a model is trying to lie to us.
Apr 15
Collision
Your AI agent workflows are likely mathematically broken, and there's now a formal proof for it.
Apr 15
Paradigm Challenge
Poisoning less than 2% of training data can create a backdoor in 'verifiable' reward systems that safety filters can't catch.
Apr 15
Nature Is Weird
Your LLM knows it's about to lie to you, but it's mathematically incapable of stopping itself.
Apr 15
Nature Is Weird
We can now 'read the mind' of a grandmaster-level chess AI to see its tactical reasoning pathways in plain English.
Apr 15
Paradigm Challenge
Medical fine-tuning is often a mirage; models lean on visual shortcuts and collapse when tasks get genuinely difficult.
Apr 15
Nature Is Weird
We've found the mathematical 'smoking gun' for Transformers: they are literally running Mirror Descent to learn from your prompt.
Apr 15
Nature Is Weird
We've found the internal 'tell' in a model's attention mechanism that signals exactly when it starts hallucinating.
Apr 15
First Ever
We've built an AI that 'reads' text as pictures, completely removing the need for tokens for any language on Earth.
Apr 15
Nature Is Weird
Giving an AI a picture of a puzzle actually makes it 73% worse at solving it.
Apr 15
Nature Is Weird
Vision-Language Models suffer from 'Digital Agnosia' where they can 'see' the data perfectly but are unable to say what it is.
Apr 15
Paradigm Challenge
General-purpose AI 'knows' how to move robots better than the models that were specifically trained to move robots.
Apr 15
Paradigm Challenge
We can compress data far beyond Shannon's limits by only keeping the 'logical core' needed to re-derive the facts.
Apr 15
Nature Is Weird
AI safety isn't an emergent mystery; it's controlled by less than 0.03% of a model's neurons.
Apr 15
Nature Is Weird
AI can spontaneously pass the 'mirror test' and recognize its own face without any training or explicit instructions.
Apr 15
Nature Is Weird
AI 'teams' are more effective than individual agents, but they are also far more likely to break safety rules and drift into misalignment.
Apr 15
Nature Is Weird
AI safety filters are vulnerable to 'death by a thousand cuts'—gradually building up harmful intent over many innocent-looking messages.
Apr 15
Nature Is Weird
Invisible hardware glitches in GPUs are likely corrupting your LLM training without ever crashing the system.
Apr 15
Paradigm Challenge
Medical AI accuracy drops 25% the moment it deals with a 'real' patient who is anxious or has low health literacy.
Apr 15
Nature Is Weird
VLMs fail at simple counting because their language layers 'talk' them into ignoring the visual evidence.
Apr 15
Nature Is Weird
The secret to making batteries last 20,000 cycles is actually letting the cathode dissolve in water.
Apr 15
Practical Magic
You can now permanently 'unlearn' a concept from a model in seconds using a simple mathematical transformation.
Apr 15
Nature Is Weird
You can jailbreak an AI not by tricking its logic, but by using an image to 'blind' it to its own safety rules.
Apr 15
Paradigm Challenge
Frontier LLMs lack 'scientific intuition' and can't tell the difference between a predictable result and a physical experiment.
Apr 15
Paradigm Challenge
Stop treating language as discrete tokens; continuous diffusion just proved it can beat autoregressive models.
Apr 15
Nature Is Weird
There is a mathematical 'wall' that makes it impossible for complex AIs to communicate with simpler ones.
Apr 15
Practical Magic
New Diffusion Language Models have finally bridged the gap: they are now as fast as parallel generation and as smart as ChatGPT.
Apr 15
Nature Is Weird
Vision-Language Models can now be backdoored to literally control where a human looks on their screen.
Apr 15