SeriesFusion
Science, curated & edited by AI

AI & Machine Learning

2,371 papers  ·  Page 23 of 48

Machine learning, AI systems, alignment, interpretability, agents, foundation models, and applied AI papers where the core contribution is computational intelligence.

New Capability
ATLAS-RTC introduces token-level runtime control that detects and corrects LLM drift from structured output contracts during the forward pass.
Mar 31
New Capability
Guardrails successfully implements and flight-tests Control Barrier Functions on an F-16 fighter jet to enforce safety limits in real time.
Mar 31
Efficiency Breakthrough
ITQ3_S achieves high-fidelity 3-bit LLM inference by using rotation-domain smoothing to eliminate the catastrophic precision loss caused by outliers.
Mar 31
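The outlier problem ITQ3_S targets can be illustrated with a toy sketch (all shapes, seeds, and the quantizer below are illustrative assumptions, not the paper's method): a single outlier forces a large quantization scale that crushes the bulk of the values, while a random orthogonal rotation spreads the outlier's energy across coordinates before quantizing.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, bits=3):
    """Symmetric uniform quantization to 2**(bits-1)-1 positive levels."""
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1)
    return np.round(x / scale) * scale

# Weight vector with one large outlier, as often seen in LLM tensors.
w = rng.normal(size=512)
w[0] = 40.0

# Random orthogonal rotation (QR of a Gaussian matrix).
Q, _ = np.linalg.qr(rng.normal(size=(512, 512)))

err_plain = np.abs(quantize(w) - w).mean()
# Quantize in the rotated domain, then rotate back.
err_rot = np.abs(Q.T @ quantize(Q @ w) - w).mean()

print(err_plain, err_rot)  # the rotated-domain error is typically smaller
```

The actual method uses learned rotation-domain smoothing rather than a random rotation, but the mechanism — decorrelating outliers before low-bit quantization — is the same in spirit.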
Paradigm Shift
The Physics-Guided Transformer (PGT) embeds physical priors (like diffusion and causality) directly into the self-attention mechanism via heat-kernel biases.
Mar 31
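The general shape of a heat-kernel attention bias can be sketched as follows (sequence length, head dimension, and the diffusion time `t` are hypothetical choices, and real PGT biases would come from the relevant physics, not 1-D position): the log of a 1-D heat kernel, -(i-j)²/4t, is added to the attention logits so that diffusion-close positions attend more strongly.

```python
import numpy as np

rng = np.random.default_rng(1)
L, d = 8, 16          # sequence length, head dim (illustrative)
t = 2.0               # diffusion time, a hyperparameter here

q = rng.normal(size=(L, d))
k = rng.normal(size=(L, d))

# 1-D heat-kernel bias: log K_t(i, j) = -(i - j)^2 / (4t), so positions
# at short diffusion distance are favoured, encoding a physical prior.
pos = np.arange(L)
bias = -((pos[:, None] - pos[None, :]) ** 2) / (4.0 * t)

logits = q @ k.T / np.sqrt(d) + bias
attn = np.exp(logits - logits.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)

print(attn.shape)
```

Because the bias enters the logits additively, it reshapes the attention distribution without any extra learned parameters beyond `t`.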
New Capability
Iterative Motion Imitation enables bicycle robots to perform unassisted front-flips by learning from initially 'impossible' reference motions.
Mar 31
New Capability
Proteina-Complexa unifies generative flow-based modeling with structure-based 'hallucination' to set a new SOTA in atomistic protein binder design.
Mar 31
Efficiency Breakthrough
ExFusion enables Transformer models to gain the capacity of Mixture-of-Experts during training while remaining a standard dense model for deployment.
Mar 31
Paradigm Shift
SARL improves reasoning models by rewarding the 'topology' of thoughts rather than just the final answer, enabling effective RL without ground-truth labels.
Mar 31
Efficiency Breakthrough
Dataset Concentration (DsCo) achieves nearly lossless dataset reduction by aligning distributions via diffusion models, cutting storage and training costs by half.
Mar 31
Paradigm Shift
Correlated Diffusion replaces independent noise with structured MCMC dynamics, enabling generative modeling on hyper-efficient probabilistic computers.
Mar 31
Breaks Assumption
This study challenges the common 'best practice' of atomic decomposition for LLM judges, showing that holistic evaluation is often superior at detecting incompleteness.
Mar 31
Breaks Assumption
An autonomous agent reveals that domain-specific molecular architectures are largely unnecessary; standard transformers with better tuning outperform custom designs.
Mar 31
Efficiency Breakthrough
Decoupled language models reduce the compute required for OCR domain adaptation by 95% while matching SOTA transformer accuracy.
Mar 31
Paradigm Shift
This paper clarifies that Diffusion Maps (DMAPs) are not a dimensionality reduction tool in themselves, but a spectral representation whose eigenfunctions must be combined in specific ways to form a valid coordinate chart.
Mar 31
New Capability
The first framework for bit-identical deep learning training that produces MD5-verified identical weights across independent runs.
Mar 31
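The verification side of this idea is simple to sketch (the toy least-squares "training" below is a stand-in assumption, not the paper's framework): if training is fully deterministic, the MD5 digest of the raw weight bytes acts as a run-identity fingerprint that must match across independent runs.

```python
import hashlib
import numpy as np

def train(seed=0, steps=100):
    """Toy deterministic 'training' run: least-squares via gradient descent."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(64, 4))
    y = rng.normal(size=64)
    w = np.zeros(4)
    for _ in range(steps):
        w -= 0.01 * X.T @ (X @ w - y)
    return w

def weight_digest(w):
    """MD5 over the raw weight bytes: any bit-level divergence changes it."""
    return hashlib.md5(w.tobytes()).hexdigest()

# Two independent runs with the same seed must yield identical digests.
d1 = weight_digest(train(seed=0))
d2 = weight_digest(train(seed=0))
print(d1 == d2)  # True
```

The hard part the paper addresses is making real GPU training this deterministic; the digest check itself is the easy final step.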
Efficiency Breakthrough
Drift-AR enables single-step (1-NFE) high-fidelity image generation by reinterpreting AR prediction entropy as a physical drifting field.
Mar 31
New Capability
Meta-Harness automates the engineering of the 'code' surrounding LLMs, improving RAG and agent performance by optimizing retrieval and context management logic.
Mar 31
Efficiency Breakthrough
ROVED reduces the expensive human feedback required for preference-based RL by up to 90% by leveraging vision-language embeddings and uncertainty filtering.
Mar 31
Paradigm Shift
PhysNet embeds physical tumor growth dynamics directly into the latent feature space of a CNN, rather than just as a constraint on the output.
Mar 31
Paradigm Shift
This paper proves that reward hacking is a structural equilibrium of optimized AI agents, not a bug, and provides a computable 'distortion index' to predict it.
Mar 31
Paradigm Shift
Moves VLM grounding from text-based coordinates to a direct visual token selection mechanism via special pointing tokens.
Mar 31
Efficiency Breakthrough
Introduces Heddle, a trajectory-centric system that resolves the long-tail latency bottleneck of tool calls in agentic Reinforcement Learning.
Mar 31
Paradigm Shift
Bypasses expensive formal verification solvers by designing neural networks that are 'verifiable by design' using the fast trivial Lipschitz bound.
Mar 31
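The trivial Lipschitz bound referenced here can be sketched on a toy network (weights and shapes below are illustrative, not from the paper): the product of the layers' spectral norms upper-bounds the network's Lipschitz constant, which converts an output margin into a certified robustness radius with no solver involved.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 2-layer ReLU net (illustrative weights).
W1 = rng.normal(size=(32, 10)) * 0.1
W2 = rng.normal(size=(3, 32)) * 0.1

def net(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

# Trivial Lipschitz bound: product of layer spectral norms
# (ReLU is 1-Lipschitz, so it adds no extra factor).
lip = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)

x = rng.normal(size=10)
logits = net(x)
top, runner = np.sort(logits)[[-1, -2]]
margin = top - runner

# Each logit moves by at most lip * ||delta||, so the top prediction is
# certified unchanged for any input perturbation with norm below:
radius = margin / (2.0 * lip)
print(f"Lipschitz bound {lip:.3f}, certified radius {radius:.4f}")
```

The paper's contribution is training networks so that this cheap bound is tight enough to be useful, rather than hopelessly loose as it is for ordinary networks.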
New Capability
A training-free metacognitive framework that gives LLMs explicit control over expanding, pruning, and repairing reasoning trajectories during inference.
Mar 31
New Capability
Presents PReD, the first foundation model and 1.3M-sample dataset specifically for electromagnetic signal perception and decision-making.
Mar 31
Paradigm Shift
Replaces traditional fixed-update rules in online learning with a causal Transformer to track switching experts in non-stationary environments.
Mar 31
Efficiency Breakthrough
Replaces the classic Newton-Raphson power-flow solver with a differentiable GPU-accelerated simulation.
Mar 31
New Capability
Transitions reasoning-model optimization from coarse sequence-level advantages to fine-grained token-level dynamics.
Mar 31
Paradigm Shift
Moves beyond next-token prediction to model reasoning as gradient-based energy minimization over latent trajectories.
Mar 31
Efficiency Breakthrough
Introduces lightweight equilibration to the Muon optimizer, significantly stabilizing and accelerating LLM pretraining.
Mar 31
Scaling Insight
Discovers that LLM hidden states undergo geometric 'warping' at digit-count boundaries, mimicking human psychological perception.
Mar 31
Efficiency Breakthrough
Enables instruction-following in low-resource languages by simply merging target language base models with English-instructed models.
Mar 31
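This kind of merge is plain task-vector arithmetic, sketched here on toy checkpoints (the dict-of-arrays "checkpoints" and their shapes are assumptions for illustration; the paper works on real LLM weights): the instruction-tuning delta from the English pair is transplanted onto the target-language base, tensor by tensor.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "checkpoints": dicts of parameter tensors (shapes are illustrative).
def ckpt(offset):
    return {"w": rng.normal(size=(4, 4)) + offset,
            "b": rng.normal(size=4) + offset}

en_base = ckpt(0.0)       # English base model
en_instruct = ckpt(0.5)   # English base after instruction tuning
tgt_base = ckpt(1.0)      # target-language base model

# Task-vector merge: add the instruction-tuning delta to the
# target-language base, one tensor at a time.
merged = {k: tgt_base[k] + (en_instruct[k] - en_base[k]) for k in tgt_base}

print(sorted(merged))
```

The appeal is that no target-language instruction data is needed: the merge requires only three existing checkpoints and elementwise arithmetic.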
New Capability
Enhances Kolmogorov-Arnold Networks (KAN) with fractal interpolation to approximate non-smooth and rough functions.
Mar 31
Breaks Assumption
Exposes a massive robustness gap in Vision-Language-Action (VLA) models, where simple instruction paraphrasing causes success-rate drops of up to 50%.
Mar 31
Efficiency Breakthrough
An evolutionary framework for GPU kernel generation that outperforms frontier models like Claude 4.6 and Gemini 3.0.
Mar 31
Efficiency Breakthrough
HISA eliminates the quadratic O(L²) bottleneck in sparse attention indexers, enabling efficient long-context scaling for models like DeepSeek-V3.
Mar 31
New Capability
Researchers have used LLMs to evolve entirely new Reinforcement Learning update rules from scratch that compete with human-designed baselines like PPO and SAC.
Mar 31
Breaks Assumption
The 'Scaffold Effect' reveals that Vision-Language Models in clinical settings often fabricate reasoning based on prompt framing rather than actual visual data.
Mar 31
Paradigm Shift
Entropic Claim Resolution (ECR) shifts RAG from retrieving 'relevant' documents to retrieving 'discriminative' evidence that minimizes hypothesis uncertainty.
Mar 31
Efficiency Breakthrough
IsoQuant leverages SO(4) isoclinic rotations to achieve a 4.5x-4.7x speedup in low-bit KV-cache quantization over existing methods.
Mar 31
Paradigm Shift
The 'Bidirectional Coherence Paradox' demonstrates that LLM performance and explanation quality can be inversely correlated depending on domain observability.
Mar 31
Paradigm Shift
COvolve creates an automated curriculum for open-ended learning by co-evolving environments and policies as executable code through a zero-sum game.
Mar 31
Efficiency Breakthrough
INSID3 achieves state-of-the-art one-shot image segmentation using only frozen DINOv3 features without any training, fine-tuning, or auxiliary models.
Mar 31
Efficiency Breakthrough
EdgeDiT provides a hardware-aware blueprint for running massive Diffusion Transformers (DiT) on mobile NPUs with a 1.6x reduction in latency.
Mar 31
Efficiency Breakthrough
LAD achieves 3x lower latency than previous driving language models by generating textual reasoning and motion plans at up to 20 Hz.
Mar 31
New Capability
The TAG glove system provides high-resolution tactile feedback and precise 21-DoF motion capture for under $1000.
Mar 31
Paradigm Shift
Seen2Scene is the first flow matching model trained directly on incomplete real-world 3D scans rather than synthetic complete data.
Mar 31
Efficiency Breakthrough
Hydra unifies ColBERT-style retrieval and autoregressive generation into a single Vision-Language Model using a single LoRA adapter.
Mar 31
Efficiency Breakthrough
StreamingVLA eliminates execution halting in robots by asynchronously parallelizing observation, generation, and execution.
Mar 31
Paradigm Shift
Unrestrained Simplex Denoising treats discrete data generation as a non-Markovian process on the probability simplex.
Mar 31