Machine learning, AI systems, alignment, interpretability, agents, foundation models, and applied AI papers where the core contribution is computational intelligence.
Efficiency Breakthrough
Reduces long-context inference latency by 26.4x using a training-free, structure-aware prompt compression framework.
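The paper's actual compression algorithm isn't reproduced here; as a rough, training-free illustration of the idea, the sketch below scores sentences with a toy heuristic and keeps the highest-scoring ones under a token budget. The scoring rule and every name are assumptions for illustration only.

```python
# Illustrative sketch only: a naive training-free prompt compressor that keeps
# high-scoring sentences under a token budget. The scoring heuristic and all
# names are assumptions, not the paper's actual algorithm.
import re

def compress_prompt(prompt: str, budget_tokens: int = 512) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", prompt.strip())

    # Toy "structure-aware" score: prefer sentences with digits or capitalized
    # terms, and always favor the first and last sentences.
    def score(i: int, s: str) -> float:
        rare = sum(ch.isdigit() or ch.isupper() for ch in s) / max(len(s), 1)
        boundary_bonus = 1.0 if i in (0, len(sentences) - 1) else 0.0
        return rare + boundary_bonus

    ranked = sorted(range(len(sentences)), key=lambda i: score(i, sentences[i]), reverse=True)
    kept, used = set(), 0
    for i in ranked:
        n_tokens = len(sentences[i].split())  # crude token count
        if used + n_tokens > budget_tokens:
            continue
        kept.add(i)
        used += n_tokens
    # Preserve original ordering so the compressed prompt stays coherent.
    return " ".join(sentences[i] for i in sorted(kept))

print(compress_prompt("Background text. The key constraint is 26 ms latency. Filler. Answer concisely."))
```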
New Capability
Boosts open-model agent performance on web navigation tasks from 6.4% to 43%, surpassing proprietary models like GPT-4o.
Breaks Assumption
Proves that intuitive task similarity is a poor predictor of training data value for MLLMs and offers a highly accurate training-free alternative.
Paradigm Shift
Enables zero-shot humanoid robot interaction by generating robot-centric 'dream' videos instead of relying on human-to-robot motion retargeting.
Efficiency Breakthrough
Introduces the first reinforcement learning framework to compress implicit reasoning steps in looped language models.
Paradigm Shift
Replaces fixed context compression ratios with a performance-floor constraint to ensure reliable LLM deployment.
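The mechanism can be pictured as a small search: instead of fixing a ratio, keep compressing only while a measured task score stays above a floor. The sketch below is a minimal illustration under that assumption, not the paper's procedure; evaluate_at_ratio stands in for whatever evaluation harness is used.

```python
# Minimal sketch of a performance-floor constraint: search for the most
# aggressive compression ratio whose measured task score stays above a floor.
# `evaluate_at_ratio` is a stand-in for the real eval harness.
def max_safe_compression(evaluate_at_ratio, floor: float,
                         ratios=(0.9, 0.8, 0.7, 0.5, 0.3, 0.2, 0.1)) -> float:
    best = 1.0  # 1.0 == no compression, always "safe"
    for r in sorted(ratios, reverse=True):  # from mild to aggressive
        if evaluate_at_ratio(r) >= floor:
            best = r
        else:
            break  # assume performance degrades monotonically with compression
    return best

# Usage with a fake evaluator whose score degrades as we compress harder.
fake_eval = lambda r: 0.9 * r + 0.1
print(max_safe_compression(fake_eval, floor=0.55))  # -> 0.5
```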
Efficiency Breakthrough
Achieves O(1) time complexity for dense component attribution in SwiGLU Transformers using a single forward-backward pass.
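The specific SwiGLU estimator isn't spelled out in the summary; the sketch below shows the generic single forward-backward trick it alludes to, gradient-times-activation attribution over a toy SwiGLU block. Model sizes and names are illustrative, not the paper's method.

```python
# Illustrative sketch: per-component attribution from one forward-backward pass
# using gradient x activation on a toy SwiGLU block.
import torch
import torch.nn as nn

class SwiGLU(nn.Module):
    def __init__(self, d: int, h: int):
        super().__init__()
        self.w_gate, self.w_up, self.w_down = nn.Linear(d, h), nn.Linear(d, h), nn.Linear(h, d)

    def forward(self, x):
        hidden = torch.nn.functional.silu(self.w_gate(x)) * self.w_up(x)
        self.hidden = hidden          # keep the activation we want to attribute
        hidden.retain_grad()          # so .grad is populated on the backward pass
        return self.w_down(hidden)

block = SwiGLU(d=16, h=64)
x = torch.randn(4, 16)
out = block(x).sum()                  # scalar "prediction" to explain
out.backward()                        # a single backward pass

# grad x activation: contribution of each hidden component to the output
attribution = (block.hidden.grad * block.hidden).sum(dim=0)  # shape: (64,)
print(attribution.topk(5).indices)    # most influential hidden units
```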
New Capability
First unified pipeline to reconstruct complete geometry, materials, and lighting from sparse views in under one second.
New Capability
Introduces the first inherently scalable primitive for radiance fields, allowing real-time Level-of-Detail (LOD) rendering by simply truncating Fourier coefficients.
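The radiance-field primitive itself can't be reproduced from a one-line summary, but the truncation idea is easy to see on a 1-D signal: keeping only the first K Fourier coefficients gives a coarser, cheaper reconstruction, and K becomes the level-of-detail knob. A minimal sketch, assuming nothing about the paper's parameterization:

```python
# Sketch of LOD-by-truncation on a 1-D signal: fewer Fourier coefficients give
# a coarser but cheaper reconstruction. This illustrates the general principle,
# not the paper's radiance-field primitive.
import numpy as np

signal = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.3 * np.random.randn(256)
coeffs = np.fft.rfft(signal)

def reconstruct(coeffs, keep: int):
    truncated = np.zeros_like(coeffs)
    truncated[:keep] = coeffs[:keep]      # "level of detail" = number of coefficients kept
    return np.fft.irfft(truncated, n=256)

for keep in (4, 16, 64):                  # coarse -> fine
    err = np.abs(reconstruct(coeffs, keep) - signal).mean()
    print(f"keep={keep:3d} coefficients, mean abs error={err:.3f}")
```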
Paradigm Shift
FIPO overcomes reasoning length stagnation in LLMs by using Future-KL divergence to create dense rewards, extending Chain-of-Thought lengths to over 10,000 tokens.
Efficiency Breakthrough
A training-free method to fix intra-modal misalignment in CLIP by decomposing projectors into an isotropic aligned subspace.
Efficiency Breakthrough
NASimJax provides a 100x throughput increase for autonomous penetration testing simulators by reimplementing the environment in JAX.
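The throughput gain comes from the usual JAX recipe: a pure-function environment step that can be jit-compiled and vmapped across thousands of parallel environments. The toy environment below illustrates that recipe only and is not NASimJax's API.

```python
# Toy illustration of where a JAX rewrite's throughput comes from: the step is
# a pure function, so it can be jit-compiled and vmapped over many environments.
import jax
import jax.numpy as jnp

def env_step(state, action):
    # Trivial dynamics: drift toward the action; reward penalizes distance from zero.
    new_state = 0.9 * state + 0.1 * action
    reward = -jnp.abs(new_state).sum()
    return new_state, reward

batched_step = jax.jit(jax.vmap(env_step))   # one compiled kernel, many envs

n_envs = 4096
states = jnp.zeros((n_envs, 8))
actions = jax.random.normal(jax.random.PRNGKey(0), (n_envs, 8))
states, rewards = batched_step(states, actions)
print(states.shape, rewards.shape)           # (4096, 8) (4096,)
```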
New Capability
SCRL introduces the first negative supervision mechanism for Test-Time Reinforcement Learning, preventing LLMs from reinforcing 'consensus lies'.
Efficiency Breakthrough
SAGE achieves state-of-the-art translation for low-resource languages while reducing training data requirements by 97.1% via RL-guided curation.
Efficiency Breakthrough
Memori reduces agent token costs by 20x by replacing raw conversation history with a persistent layer of semantic triples and summaries.
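A minimal sketch of the memory layer the summary describes: store extracted (subject, predicate, object) triples plus a short running summary, and inject only those into the prompt instead of the raw transcript. Class and method names are illustrative, not Memori's API.

```python
# Illustrative memory layer: semantic triples plus a running summary replace
# the raw conversation history in the prompt.
from dataclasses import dataclass, field

@dataclass
class TripleMemory:
    triples: list[tuple[str, str, str]] = field(default_factory=list)
    summary: str = ""

    def remember(self, subject: str, predicate: str, obj: str) -> None:
        self.triples.append((subject, predicate, obj))

    def update_summary(self, text: str) -> None:
        self.summary = text  # in practice produced by a cheap summarizer model

    def to_prompt(self) -> str:
        facts = "\n".join(f"- {s} {p} {o}" for s, p, o in self.triples)
        return f"Summary so far: {self.summary}\nKnown facts:\n{facts}"

memory = TripleMemory()
memory.remember("user", "prefers", "Python")
memory.remember("the open ticket", "is assigned to", "the billing team")
memory.update_summary("User is debugging a billing API integration.")
print(memory.to_prompt())  # tens of tokens instead of the full transcript
```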
Efficiency Breakthrough
2K Retrofit enables 2K-resolution inference for any 3D geometric foundation model without modifying or retraining the backbone.
New Capability
X-World is a controllable, action-conditioned multi-camera world model that simulates realistic future video observations for end-to-end driving.
Paradigm Shift
Breaking the 'capability ceiling' in LLM post-training by replacing full-history dependencies with explicit Markov states.
Efficiency Breakthrough
A k-means variant that is up to 7x faster than FAISS and Scikit-Learn on CPUs and 4x faster than cuVS on GPUs.
Efficiency Breakthrough
Reduces the computational cost of Neural Architecture Search for ensembles from O(M) to O(1).
New Capability
Enables LLMs to explore beyond their current distribution during RL by treating failed trajectories as hindsight guidance.
Paradigm Shift
Identifies 'critical times' in diffusion generation where targeted guidance pulses significantly improve image control.
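The general mechanism is easy to sketch: run a standard sampling loop but apply stronger classifier-free guidance only inside a chosen timestep window. How the paper identifies the critical window is not reproduced here; the denoiser and update rule below are stand-ins.

```python
# Sketch of "guidance pulses": guidance is strong only inside a chosen window
# of timesteps instead of uniform across the whole trajectory. The denoiser and
# update rule are placeholders, not the paper's method.
import torch

def sample(denoiser, x, timesteps, cond, pulse_window=(300, 500), pulse_scale=7.5):
    for t in timesteps:
        eps_uncond = denoiser(x, t, cond=None)
        eps_cond = denoiser(x, t, cond=cond)
        # Guidance is "pulsed": strong only inside the critical-time window.
        scale = pulse_scale if pulse_window[0] <= t <= pulse_window[1] else 1.0
        eps = eps_uncond + scale * (eps_cond - eps_uncond)
        x = x - 0.01 * eps            # placeholder update, not a real scheduler
    return x

# Toy usage with a dummy denoiser so the loop runs end to end.
dummy = lambda x, t, cond: 0.1 * x if cond is None else 0.1 * x + 0.01
out = sample(dummy, torch.randn(1, 4), timesteps=range(999, -1, -1), cond="a red cube")
print(out.shape)
```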
Breaks Assumption
Exposes fundamental flaws in using LLM-based agents to evaluate automated interpretability and model circuits.
New Capability
Replaces unstable free-form recursive LLM code with a typed functional runtime grounded in the lambda calculus.
Paradigm Shift
Derives a variational ELBO for the Joint-Embedding Predictive Architecture (JEPA), unifying it with generative modeling.
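For orientation, a "variational ELBO" usually refers to a conditional bound of the generic form below, with x a context view, y the target, and z a latent; how the paper maps these onto JEPA's encoders and predictor is its contribution and is not reproduced here.

```latex
% Generic conditional ELBO for reference only; the correspondence to JEPA's
% context encoder, target encoder, and predictor is the paper's contribution.
\log p_\theta(y \mid x)
  \;\ge\;
  \mathbb{E}_{q_\phi(z \mid x, y)}\!\left[\log p_\theta(y \mid z, x)\right]
  \;-\;
  \mathrm{KL}\!\left(q_\phi(z \mid x, y)\,\|\,p_\theta(z \mid x)\right)
```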
New Capability
Enables zero-shot, directed protein generation by applying a simple scalar bias to stochastic attention samplers.
Breaks Assumption
Demonstrates that LLM reasoning capabilities drop sharply when tasks are framed within multi-turn dialogues vs isolated benchmarks.
New Capability
A comprehensive end-to-end workflow for humanoid loco-manipulation that standardizes sim-to-real transfer.
Efficiency Breakthrough
Quantifies LLM uncertainty in a single generation pass without auxiliary models or repeated sampling.
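One generic way to read this is a confidence score computed from the same logits the model already produces while generating, for example mean token entropy. The sketch below shows that proxy only; it is not the paper's estimator.

```python
# Single-pass uncertainty proxy: average predictive entropy of the token
# distributions from the generation forward pass, so no extra samples or
# auxiliary models are needed. Illustrative only.
import torch

def mean_token_entropy(logits: torch.Tensor) -> float:
    """logits: (seq_len, vocab) collected during generation."""
    log_probs = torch.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)   # (seq_len,)
    return entropy.mean().item()

# Toy check: peaked distributions -> low uncertainty, flat -> high uncertainty.
confident = torch.zeros(10, 1000); confident[:, 0] = 20.0
uncertain = torch.zeros(10, 1000)
print(mean_token_entropy(confident), mean_token_entropy(uncertain))
```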
Breaks Assumption
Demonstrates that current 'faithfulness' metrics for Chain-of-Thought reasoning are highly subjective and vary wildly depending on the choice of classifier.
Efficiency Breakthrough
Introduces a long-horizon video agent that uses 93% fewer frames than GPT-5/standalone LMMs while achieving higher accuracy.
Efficiency Breakthrough
Provides a robust method for distilling discrete diffusion models that maintains quality and diversity even with very few sampling steps.
Breaks Assumption
Reveals that 'learned priors' in inverse problems often behave as simple lookup tables that memorize training data rather than learning distributions.
Paradigm Shift
Integrates Kolmogorov-Arnold Networks (KANs) into causal generative modeling to produce human-readable symbolic structural equations.
New Capability
An autonomous AI agent that executes end-to-end theoretical and computational physics research, including hypothesis testing and discovery.
Cosmic Scale
Low-orbit satellites just got scary good—they can pinpoint your location within an inch in basically a heartbeat.
Practical Magic
Imagine a cell tower on wheels that literally follows you around with a camera just to make sure your bars never drop.
Nature Is Weird
After 90 years of scratching their heads, mathematicians finally proved that 'Quantum Logic' isn't just a mess—it actually works.
Paradigm Challenge
Perfectly syncing clocks across the world is actually impossible because of physics, so things like Leap Seconds are basically just a polite lie.
Breaks Assumption
Large Language Models can perfectly reconstruct training data that alignment strictly prevents them from expressing in standard generation.
Efficiency Breakthrough
MineDraft achieves a 75% throughput increase in speculative decoding by overlapping the drafting and verification stages.
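The overlap can be pictured with a one-worker pipeline: a background thread drafts the next candidate block while the main thread verifies the current one. The sketch below shows that scheduling pattern in spirit only; the drafter, verifier, and acceptance logic are placeholders, not MineDraft's implementation.

```python
# Pipelined draft/verify sketch: drafting of the next block overlaps with
# verification of the current one. All functions are placeholders.
from concurrent.futures import ThreadPoolExecutor

def draft_block(prefix: str) -> str:
    return prefix + " <draft tokens>"        # stand-in for the small draft model

def verify_block(candidate: str) -> str:
    return candidate                         # stand-in for target-model verification

def generate(prompt: str, n_blocks: int = 4) -> str:
    text = prompt
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(draft_block, text)
        for _ in range(n_blocks):
            candidate = pending.result()                   # drafted during the last verify
            pending = pool.submit(draft_block, candidate)  # start the next draft early
            text = verify_block(candidate)                 # verify while drafting continues
    return text

print(generate("Once upon a time"))
```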
Paradigm Shift
A geometric fix for Rotary Positional Embeddings (RoPE) allows Transformers to generalize to long inputs out-of-the-box by preserving 'sink token' functionality.
New Capability
Engineered modularity via per-layer supervision solves the 'Hydra effect,' allowing for the surgical control of specific model behaviors.
Breaks Assumption
Naive multi-agent routing based on self-reported quality scores results in a 'provenance paradox' that performs worse than random selection.
New Capability
NANOZK enables verifiable LLM inference with 70x smaller proofs and 24ms verification time using a novel layerwise decomposition.
Scaling Insight
Extreme neural network sparsification causes a catastrophic interpretability collapse even when global accuracy remains stable.
Paradigm Shift
A synthesizable RTL implementation of Predictive Coding allows for fully distributed, non-backprop learning directly in hardware.
Paradigm Shift
Dynamic constraints using an 'online refiner' resolve the conflict between stability and performance in Reinforcement Learning Fine-Tuning (RFT).
Efficiency Breakthrough
Q-Drift corrects quantization-induced noise in diffusion models using a plug-and-play sampler adjustment that requires only 5 calibration runs.
Efficiency Breakthrough
Achieves depth-independent training memory bounded to approximately twice the inference footprint.
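The summary doesn't name the mechanism; one standard route toward this kind of bound is activation recomputation, shown below with PyTorch's built-in gradient checkpointing (truly depth-independent activation memory usually also needs reversible blocks, which aren't shown). A minimal sketch, not the paper's method:

```python
# Activation recomputation with PyTorch gradient checkpointing: only block
# boundaries are stored in the forward pass; activations inside each block are
# recomputed during backward, sharply reducing memory growth with depth.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

blocks = nn.ModuleList([nn.Sequential(nn.Linear(256, 256), nn.GELU()) for _ in range(48)])

def forward(x: torch.Tensor) -> torch.Tensor:
    for block in blocks:
        # use_reentrant=False is the currently recommended checkpointing mode
        x = checkpoint(block, x, use_reentrant=False)
    return x

x = torch.randn(32, 256, requires_grad=True)
loss = forward(x).pow(2).mean()
loss.backward()   # per-block activations are recomputed, not stored
print(x.grad.shape)
```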