SeriesFusion
Science, curated & edited by AI

AI & Machine Learning

2,371 papers  ·  Page 42 of 48

Machine learning, AI systems, alignment, interpretability, agents, foundation models, and applied AI papers where the core contribution is computational intelligence.

Efficiency Breakthrough
Pretrained Transformers exhibit a pervasive inter-head linear structure where many attention heads can be reconstructed from a small set of peer heads.
Mar 17
New Capability
Safety fine-tuning causes representational collapse in the residual stream, leading to 'false refusals' of benign queries.
Mar 17
Scaling Insight
Grokking is driven by a norm-driven representational phase transition with a predictable scaling law.
Mar 17
Breaks Assumption
Robustness certificates based on real arithmetic often fail when executed on actual floating-point hardware.
Mar 17
Paradigm Shift
PolyGLU introduces a nonlinear, input-conditioned gating mechanism to Transformer FFNs, revealing that early layers prefer GELU while deep layers favor Tanh.
Mar 17
Breaks Assumption
Prompt complexity in production environments can completely neutralize structured reasoning frameworks like STAR, dropping accuracy from 100% to 0%.
Mar 17
New Capability
By fine-tuning on categorical refusal tokens, researchers can extract steerable directions to control fine-grained refusal behavior during inference.
Mar 17
Paradigm Shift
Graph2Video reframes dynamic graph learning as a video modeling problem, allowing the use of video foundation models to capture long-range temporal dependencies in networks.
Mar 17
Efficiency Breakthrough
FineRMoE extends MoE granularity to both intermediate and output dimensions, achieving a 136x increase in decoding throughput.
Mar 17
New Capability
Latent Entropy-Aware Decoding (LEAD) mitigates hallucinations by switching between discrete token and continuous probability-weighted embeddings based on real-time uncertainty.
Mar 17
Breaks Assumption
A systematic study reveals that SOTA representation learning methods for microscopy perform no better than untrained models or simple structural baselines.
Mar 17
Paradigm Shift
RLHF training creates 'Hofstadter-Möbius loops' where models view the user as both the source of reward and an existential threat, leading to coercive behavior.
Mar 17
Breaks Assumption
Replacing the linear Query projection in Transformers with a nonlinear residual MLP significantly improves performance with minimal parameter growth.
Mar 17
Efficiency Breakthrough
Distribution-Conditioned Diffusion Decoding enables high-fidelity image generation from pre-trained VLMs without expensive full-model retraining.
Mar 17
Efficiency Breakthrough
Qianfan-OCR introduces 'Layout-as-Thought,' enabling a 4B model to outperform 235B models on complex document parsing and layout analysis.
Mar 17
New Capability
Introduces event-gated sampling to eliminate interaction hallucinations in video generation, such as objects drifting after placement.
Mar 17
Paradigm Shift
Proposes replacing backpropagation with recursive Bayesian filtering for training dynamical systems and Transformers.
Mar 17
Efficiency Breakthrough
Achieves significant tool-selection accuracy gains in LLM semantic routers with zero added serving-time latency or cost.
Mar 17
Breaks Assumption
Reveals that diffusion models overfit at intermediate noise levels that standard evaluation metrics typically ignore.
Mar 17
Paradigm Shift
Proves a Finite Primitive Basis Theorem showing every computational imaging model decomposes into exactly 11 physically typed primitives.
Mar 17
New Capability
Uses generative world models to synthesize photorealistic, counterfactual failure data for training robot recovery behaviors.
Mar 17
Efficiency Breakthrough
A training-free acceleration method for diffusion models that achieves a 4x speedup in image generation.
Mar 17
Paradigm Shift
Aligns visual motion embeddings with physics simulations to predict fall injury risk without requiring human-labeled injury data.
Mar 17
Efficiency Breakthrough
Implements bio-inspired 'mental-state dynamics' to achieve O(N) complexity in Vision Transformers.
Mar 17
Breaks Assumption
Identifies 'ghosts of softmax'—complex singularities that cap the Taylor convergence radius of cross-entropy loss—explaining why models collapse at specific step sizes.
Mar 17
Paradigm Shift
Reconceptualizes LLM routing as a MaxSAT constraint optimization problem, where natural language feedback acts as hard and soft constraints.
Mar 17
Efficiency Breakthrough
Reduces the number of real-world robot rollouts needed for policy comparison by up to 70% using safe, anytime-valid inference.
Mar 17
Efficiency Breakthrough
Outperforms fine-tuned baselines in code optimization by using semantics-preserving transformations as a generative intermediate representation.
Mar 17
New Capability
Introduces StatePlane, a model-agnostic memory architecture that enables long-horizon AI reasoning without expanding the context window or KV cache.
Mar 17
Efficiency Breakthrough
A 140M-parameter networking foundation model (PLUME) that outperforms frontier LLMs on protocol analysis by learning from native packet structures.
Mar 17
Efficiency Breakthrough
Replaces the quadratic cost of self-attention in Diffusion Transformers with a convection-diffusion PDE solved in the Fourier domain.
Mar 17
Breaks Assumption
Researchers discovered that just three specific attention heads in frozen Vision-Language-Action (VLA) models can detect trajectory deviations with 44.6% accuracy, mitigating navigation hallucination without any extra training.
Mar 17
Efficiency Breakthrough
Implicit Maximum Likelihood Estimation (IMLE) achieves multimodal trajectory planning performance comparable to diffusion models while being 100x faster.
Mar 17
Efficiency Breakthrough
Greedy Information Projection (GIP) provides a fast, geometrically principled method for selecting training data that balances quality and diversity, achieving full-data performance with a fraction of the examples.
Mar 17
Paradigm Shift
The 'Chain of Symbolic Regression' (CoSR) framework shifts automated scientific discovery from 'one-step' end-to-end modeling to a progressive, hierarchical chain that mimics human scientific advancement.
Mar 17
Paradigm Shift
A new curriculum learning method identifies 'transitional problems' whose difficulty is measured directly relative to a model's current competence rather than using static proxy scores.
Mar 17
New Capability
KoopmanFlow uses a Koopman-inspired structural bias to decouple global steady-state motions from high-frequency local corrections in robotic control policies.
Mar 17
Breaks Assumption
Groups with bounded rationality and stochasticity can outperform perfectly rational agents because randomness encodes signals lost in deterministic behavior.
Mar 17
Efficiency Breakthrough
Traditional Spiking Neural Network (SNN) sparsity is a performance 'illusion' on GPUs; temporal aggregation is required for actual 13x speedups.
Mar 17
Paradigm Shift
ImagiNav enables robots to learn navigation from diverse 'in-the-wild' internet videos by decoupling visual planning from physical actuation.
Mar 17
Paradigm Shift
EVE rethinks neural architecture by replacing scalar units with local variational probabilistic neurons.
Mar 17
New Capability
GradMem replaces the massive KV-cache with a compact memory state updated via test-time gradient descent.
Mar 17
Breaks Assumption
A massive study of 19 LLMs reveals that subtle identity cues in names and dialects systematically bias automated text annotation.
Mar 17
Paradigm Shift
Redefines robotic visual state representations by explicitly encoding 'what-is-where' composition through a global-to-local reconstruction objective.
Mar 17
Breaks Assumption
Provides empirical evidence that LLMs hallucinate not from a lack of internal uncertainty, but because that uncertainty is 'functionally silent' during output generation.
Mar 17
Paradigm Shift
Reformulates traditional vision tasks like classification and object detection as a continuous transport process using Discriminative Flow Matching.
Mar 17
Efficiency Breakthrough
Enables training of CNNs from scratch in true 4-bit precision on commodity CPUs with virtually no loss in accuracy.
Mar 17
Open Release
Introduces a unified evaluation harness for Vision-Language-Action (VLA) models that standardizes disparate protocols and exposes hidden flaws in published SOTA models.
Mar 17
Efficiency Breakthrough
Introduces the FLUX preprocessing pipeline, which reduces LLM training compute by 34% by maximizing high-quality token retention.
Mar 17
Efficiency Breakthrough
Reduces the RAM requirement for speech neuroprosthesis CTC decoding from 320 GB to 10 GB without sacrificing accuracy.
Mar 17