PARADIGM_SHIFT

329 papers · Page 4 of 4

Switches the training objective from hard Next-Token Prediction to predicting 'concepts' (sets of semantically related tokens).

AI & ML arxiv | Apr 1
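The idea above (replacing a one-hot next-token target with a soft target spread over a set of related tokens) can be sketched as a loss function. This is a minimal illustration of soft-target cross-entropy, not the paper's actual objective; the function name and uniform weighting are assumptions.

```python
import numpy as np

def soft_concept_loss(logits, concept_ids, vocab_size):
    """Cross-entropy against a uniform 'concept' target spread over a set
    of semantically related token ids, instead of a one-hot next-token
    target. Illustrative sketch; names and weighting are not the paper's."""
    target = np.zeros(vocab_size)
    target[concept_ids] = 1.0 / len(concept_ids)   # soft target over the concept set
    # numerically stable log-softmax
    m = logits.max()
    log_probs = logits - np.log(np.exp(logits - m).sum()) - m
    return -(target * log_probs).sum()
```

With uniform logits over a vocabulary of 4 and a 2-token concept set, the loss reduces to log 4, matching ordinary cross-entropy against the smeared target.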

Proves that LLM agent capability (pass@1) and reliability (consistency) diverge systematically, with frontier models often having the highest 'meltdown' rates.

AI & ML arxiv | Apr 1

Learns stable, interpretable Koopman generators for nonlinear PDEs from trajectory data alone without any physics supervision.

AI & ML arxiv | Apr 1

Shows that VLMs can overcome deep-seated perceptual biases and optical illusions by using image manipulation tools rather than more training data.

AI & ML arxiv | Apr 1

A novel neural primitive based on metriplectic dynamics that outperforms Transformers in data efficiency and generalization.

AI & ML arxiv | Apr 1

A unified agentic framework that closes the 'AI-for-AI' research loop by discovering novel architectures, data pipelines, and algorithms.

AI & ML arxiv | Apr 1

Decouples high-level intent planning from low-level motor control in Vision-Language-Action (VLA) models to prevent the degradation of pre-trained VLM representations.

AI & ML arxiv | Apr 1

Demonstrates that independent aggregation (Hybrid Confirmation Tree) consistently outperforms the standard 'AI-as-advisor' paradigm across diverse high-stakes domains.

AI & ML arxiv | Apr 1

Shows that deep learning models for medical imaging (MRI) can be trained using synthetic quaternion Julia fractals instead of sensitive human clinical data.

AI & ML arxiv | Apr 1

Provides a formal framework for optimizing models whose decisions actively change the distribution of the data they encounter.

AI & ML arxiv | Apr 1
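The setting above is usually called performative prediction: the deployed model shifts the very distribution it is trained on. A standard toy illustration (not the paper's framework; the linear mean-shift and all names are assumptions) is repeated risk minimization, which converges to a performatively stable point:

```python
import numpy as np

def repeated_risk_minimization(mu0=1.0, eps=0.3, iters=50):
    """Toy performative loop: deploying parameter theta shifts the label
    distribution's mean to mu0 + eps*theta. Repeatedly refitting on the
    induced data converges to the stable point mu0 / (1 - eps).
    Illustrative sketch only."""
    theta = 0.0
    rng = np.random.default_rng(0)
    for _ in range(iters):
        y = rng.normal(mu0 + eps * theta, 0.1, size=10_000)  # induced data
        theta = y.mean()  # refit: the least-squares optimum is the sample mean
    return theta
```

Because each refit contracts toward the fixed point (contraction factor eps < 1), the iterates settle near mu0 / (1 - eps) rather than at the naive optimum mu0.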

Introduces a rigorous algorithm to determine if two different neural networks share the same underlying 'algorithmic interpretation' without needing to manually define the circuits.

AI & ML arxiv | Apr 1

Replaces heuristic ReAct-style agent loops with a mathematical framework based on control theory to prevent LLM agents from over-deliberating or using excessive tools.

AI & ML arxiv | Apr 1

First foundation model to unify text, image, audio, and video using native masked diffusion instead of autoregressive serialization.

AI & ML arxiv | Apr 2

Uses LLM-guided program evolution to discover a new data-shuffling rule for SGD that provably and empirically outperforms standard Random Reshuffling.

AI & ML arxiv | Apr 2

A comprehensive analysis of AI safety vulnerabilities including automated circuit discovery, latent adversarial training, and power-law scaling of jailbreak success.

AI & ML arxiv | Apr 2

Identifies a fundamental quality-exploration dilemma in Diffusion Language Models where remasking improves single-sample quality but kills reasoning diversity.

AI & ML arxiv | Apr 2

Introduces training-free and model-free trajectory planning by computing diffusion score functions directly from data libraries via kernel-weighted estimation.

AI & ML arxiv | Apr 2
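A kernel-weighted score estimate of the kind described above can be computed in closed form: the score of a Gaussian-smoothed empirical distribution over a data library is a softmax-weighted average of directions toward the library points. A minimal sketch, assuming an isotropic Gaussian kernel; names and details are not the paper's:

```python
import numpy as np

def kernel_score(x, library, sigma=0.5):
    """Score of the Gaussian-smoothed empirical distribution of a data
    library, computed directly from data with no trained model:
        grad log p_sigma(x) = sum_i w_i (x_i - x) / sigma^2,
    where w_i is a softmax over -||x - x_i||^2 / (2 sigma^2).
    Illustrative sketch, not the paper's estimator."""
    d2 = ((library - x) ** 2).sum(axis=1)
    logw = -d2 / (2 * sigma ** 2)
    w = np.exp(logw - logw.max())      # numerically stable softmax weights
    w /= w.sum()
    return (w[:, None] * (library - x)).sum(axis=0) / sigma ** 2
```

For a single library point the weights collapse to 1 and the score points straight at that point, scaled by 1/sigma^2.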

Proposes a decision-centric architecture that separates signal estimation from control policy to make LLM system decisions explicit and inspectable.

AI & ML arxiv | Apr 2

Truth Anchoring (TAC) provides a post-hoc calibration method to align LLM uncertainty metrics with actual factual correctness.

AI & ML arxiv | Apr 2

Identifies 'diversity collapse' in the popular GRPO reinforcement learning method and introduces MUPO to maintain broad reasoning paths.

AI & ML arxiv | Apr 2

Replaces manual rubric-tuning for synthetic data with an automated gradient-guided optimization framework based on influence estimation.

AI & ML arxiv | Apr 2

Introduces HiLL, a framework that jointly trains a 'hinter' and 'reasoner' to prevent advantage collapse in reinforcement learning for hard tasks.

AI & ML arxiv | Apr 2

LangMARL introduces agent-level credit assignment and policy gradient evolution directly in the natural language space for multi-agent coordination.

AI & ML arxiv | Apr 2

Stochastic Attention achieves a global receptive field in O(log n) layers by using randomized routing inspired by the fruit fly connectome.

AI & ML arxiv | Apr 2
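The O(log n) claim above can be illustrated by simulating random routing: if each token attends to k uniformly random tokens per layer, one token's receptive field grows roughly k-fold per layer and covers all n tokens after about log_k(n) layers. This is a reachability toy, not the paper's construction; all names and parameters are assumptions.

```python
import numpy as np

def layers_to_global_mixing(n=256, k=4, seed=0):
    """Counts layers of randomized routing (each token attends to k random
    tokens per layer) until token 0's information reaches all n tokens.
    Illustrative simulation only."""
    rng = np.random.default_rng(seed)
    reach = np.zeros(n, dtype=bool)
    reach[0] = True
    layers = 0
    while not reach.all():
        edges = rng.integers(0, n, size=(n, k))   # this layer's random links
        # a token receives the information if any token it attends to has it
        reach = reach | np.any(reach[edges], axis=1)
        layers += 1
    return layers
```

For n = 256 and k = 4 the simulation typically finishes in well under a dozen layers, consistent with logarithmic growth of the receptive field.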

Routing-Free MoE replaces centralized routing with individual expert-level activation, eliminating the need for Softmax and Top-K load balancing.

AI & ML arxiv | Apr 2
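Expert-level activation as described above can be sketched by giving each expert its own independent sigmoid gate, with no shared Softmax or Top-K selection. A minimal sketch, assuming linear experts and per-expert gate vectors; all names are illustrative, not the paper's:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def routing_free_moe(x, experts, gates):
    """Each expert decides its own activation via an independent sigmoid
    gate on the input, instead of a centralized Softmax + Top-K router.
    `experts` are weight matrices, `gates` per-expert gate vectors.
    Illustrative sketch only."""
    out = np.zeros(experts[0].shape[1])
    for W, g in zip(experts, gates):
        a = sigmoid(x @ g)    # independent expert-level activation in [0, 1]
        out += a * (x @ W)    # gated expert contribution
    return out
```

Because each gate is local, experts can be added or dropped without recomputing a global routing distribution or a load-balancing loss.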

Policy Improvement Reinforcement Learning (PIRL) shifts the training objective from reward maximization to explicit maximization of policy progress across iterations.

AI & ML arxiv | Apr 2

Proposes dense point trajectories as universal 'visual tokens' for behavior that generalize across different species and non-rigid objects.

AI & ML arxiv | Apr 2

Achieves 'zero forgetting' in continual learning by stacking frozen domain-specific MoE-LoRA adapters with a meta-router.

AI & ML arxiv | Apr 2
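The zero-forgetting argument above rests on never updating old adapters: each domain gets a frozen low-rank adapter, and a meta-router selects among them at inference. A minimal sketch, assuming a nearest-prototype router and linear LoRA-style updates; the class and method names are hypothetical, not the paper's:

```python
import numpy as np

class StackedFrozenAdapters:
    """One frozen LoRA-style adapter (A @ B) per domain plus a meta-router
    that picks the adapter whose stored domain prototype is nearest to the
    input. Old adapters are never touched, so earlier domains cannot be
    overwritten. Illustrative sketch only."""
    def __init__(self, base_W):
        self.base_W = base_W        # frozen backbone weight
        self.adapters = []          # (prototype, A, B) triples, frozen

    def add_domain(self, prototype, A, B):
        self.adapters.append((prototype, A, B))

    def forward(self, x):
        # meta-router: the nearest domain prototype selects the adapter
        proto, A, B = min(self.adapters,
                          key=lambda t: np.linalg.norm(x - t[0]))
        return x @ (self.base_W + A @ B)   # base plus low-rank update
```

Adding a new domain only appends a triple; inputs near old prototypes still route to the untouched old adapters, which is the mechanism behind the 'zero forgetting' claim.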

Replaces standard relative Softmax attention with 'Multiscreening' to allow absolute query-key relevance, yielding 3.2x faster inference at 100K context.

AI & ML arxiv | Apr 2