Breaks Assumption

259 papers · Page 2 of 3

Massive activation outliers in Transformers are an adaptive response to 'gradient sinks' during training, rather than just an inference-time quirk.

AI & ML arxiv | Mar 19

In-context memory for LLMs is fundamentally unreliable due to compaction loss and goal drift, but structured 'Knowledge Objects' provide a 252x cheaper and 100% accurate alternative.

AI & ML arxiv | Mar 19

Concept erasure in text-to-image models is largely a facade that can be bypassed using text-free inversion attacks.

AI & ML arxiv | Mar 19

Large Language Models can maintain performance with only 16-64 unique weight values per matrix, as only the relative rank of weights matters.

AI & ML arxiv | Mar 19
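
As a generic illustration of the claim (not the paper's method), restricting a matrix to k shared weight values can be sketched with 1-D k-means: every weight is snapped to its nearest of k centroids. Because the nearest-centroid mapping in one dimension is monotone, the relative rank of weights is preserved. All names and sizes below are hypothetical.

```python
import numpy as np

def quantize_weights(W, k=16, iters=20):
    """Replace every entry of W with one of k shared values via 1-D k-means."""
    flat = W.ravel()
    # Initialize centroids at evenly spaced quantiles of the weight distribution.
    centroids = np.quantile(flat, np.linspace(0, 1, k))
    for _ in range(iters):
        # Assign each weight to its nearest centroid, then recenter.
        assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        for j in range(k):
            members = flat[assign == j]
            if members.size:
                centroids[j] = members.mean()
    assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
    return centroids[assign].reshape(W.shape)

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
Wq = quantize_weights(W, k=16)
print(len(np.unique(Wq)))  # at most 16 distinct values remain
```

On a Gaussian weight matrix, 16 shared values already reproduce the original to within a small mean absolute error, which is consistent with the entry's claim that only rank information matters.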

Large Language Models can perfectly reconstruct training data that alignment strictly prevents them from ever expressing in standard generation.

AI & ML arxiv | Mar 20

Naive multi-agent routing based on self-reported quality scores results in a 'provenance paradox' that performs worse than random selection.

AI & ML arxiv | Mar 20

Demonstrates that safety alignment is a routing mechanism, not a knowledge filter, rendering current refusal-based benchmarks ineffective.

AI & ML arxiv | Mar 20

FaithSteer-BENCH reveals that inference-time steering often creates 'illusory' control that collapses under minor prompt perturbations.

AI & ML arxiv | Mar 20

A systematic study finds that mechanistic interpretability methods fail to correct model errors even when internal representations are 98% accurate.

AI & ML arxiv | Mar 20

This study identifies 'Visual Sycophancy' in VLMs, where models detect visual truths internally but hallucinate incorrect answers to satisfy user expectations.

AI & ML arxiv | Mar 20

Multimodal LLMs suffer from a 'cognitive mismatch' where they succeed at complex reasoning while failing at basic discrete symbol recognition.

AI & ML arxiv | Mar 20

The legally mandated right to be forgotten (unlearning) can be weaponized as an adversarial attack surface to collapse model accuracy.

AI & ML arxiv | Mar 20

Disproves the common assumption that bottom models in Vertical Federated Learning effectively represent private labels.

AI & ML arxiv | Mar 20

Demonstrates that PPO-style clipping and policy ratio constraints are unnecessary for improving reasoning in Large Language Models.

AI & ML arxiv | Mar 20

Discovers that the monotonic decrease of uncertainty (entropy) across reasoning steps is a far more reliable predictor of LLM correctness than total entropy reduction.

AI & ML arxiv | Mar 20
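
A minimal sketch of the distinction, with made-up entropy traces: two reasoning traces can share the same total entropy reduction (first step minus last step) while only one of them decreases monotonically.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def monotonic_decrease(step_entropies):
    """True iff uncertainty shrinks at every reasoning step."""
    return all(b <= a for a, b in zip(step_entropies, step_entropies[1:]))

# Two hypothetical traces with identical total entropy reduction (2.0 -> 0.5):
steady = [2.0, 1.5, 1.0, 0.5]    # uncertainty falls at every step
erratic = [2.0, 0.4, 1.8, 0.5]   # same endpoints, but entropy spikes mid-trace
print(monotonic_decrease(steady), monotonic_decrease(erratic))  # True False
```

Under the entry's claim, the `steady` trace is the stronger correctness signal even though both traces look identical to a metric that only compares endpoints.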

Challenges the entire foundation of Spectral Graph Neural Networks, proving their success is due to implementation quirks rather than spectral theory.

AI & ML arxiv | Mar 20

Shows that State Space Models (SSMs) like Mamba can match or beat Vision Transformers as vision encoders in VLMs while being more stable.

AI & ML arxiv | Mar 20

A mechanistic study reveals that Vision-Language-Action (VLA) models are dominated by visual pathways and often ignore language when visual context is sufficient.

AI & ML arxiv | Mar 20

A rigorous re-evaluation shows that a simple linear PCA baseline matches or outperforms SOTA Deep Learning models for multivariate time series anomaly detection.

AI & ML arxiv | Mar 20
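
A generic sketch of such a linear PCA baseline (not necessarily the paper's exact setup): fit a low-rank subspace on normal multivariate data, then score each time step by its reconstruction error, which grows when a point leaves the learned subspace.

```python
import numpy as np

def fit_pca(X, n_components):
    """Return the mean and top principal directions of training data X (T x d)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]

def anomaly_score(X, mu, components):
    """Per-step reconstruction error: large when a point leaves the subspace."""
    Z = (X - mu) @ components.T   # project onto the principal subspace
    recon = Z @ components + mu   # map back to the original space
    return np.linalg.norm(X - recon, axis=1)

rng = np.random.default_rng(0)
# Normal regime: 2 latent factors mixed into 8 observed channels, plus noise.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 8))
train = latent @ mixing + 0.05 * rng.normal(size=(500, 8))

mu, comps = fit_pca(train, n_components=2)

test = latent[:100] @ mixing + 0.05 * rng.normal(size=(100, 8))
test[50] += 3.0  # inject an anomaly that falls off the normal subspace
scores = anomaly_score(test, mu, comps)
print(int(scores.argmax()))  # 50
```

The injected point's score dwarfs the noise floor, illustrating why a method this simple can be a hard baseline to beat.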

Debunks recent 'evaluation awareness' findings in LLMs by showing that linear probes merely track formatting artifacts.

AI & ML arxiv | Mar 23

MoCA3D predicts 3D bounding boxes from monocular images without requiring any camera intrinsics at inference time.

AI & ML arxiv | Mar 23

Reveals that complex reasoning strategies like Chain-of-Thought (CoT) and Tree-of-Thought (ToT) provide negligible or even negative gains for text classification tasks.

AI & ML arxiv | Mar 23

Proves the Key-Value (KV) cache is entirely redundant and can be bit-identically recomputed from the residual stream.

AI & ML arxiv | Mar 23
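
A toy numpy illustration of why this is plausible (all names and sizes are hypothetical, and real layers add normalization the sketch omits): keys and values are deterministic linear projections of the per-layer residual stream, so recomputing them from the same hidden states reproduces the cache bit-for-bit.

```python
import numpy as np

# Toy single-layer attention projections with hypothetical sizes.
rng = np.random.default_rng(0)
d_model, seq_len = 16, 10
W_k = rng.normal(size=(d_model, d_model))  # key projection weights
W_v = rng.normal(size=(d_model, d_model))  # value projection weights

# Residual-stream states entering the attention layer.
hidden = rng.normal(size=(seq_len, d_model))

# What a KV cache would store at decode time:
K_cache = hidden @ W_k
V_cache = hidden @ W_v

# Recomputation from the residual stream is bit-identical, because the
# same floating-point operations on the same inputs yield the same bits.
print(np.array_equal(hidden @ W_k, K_cache))  # True
```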

Proves that intuitive task similarity is a poor predictor of training data value for MLLMs and offers a highly accurate training-free alternative.

AI & ML arxiv | Mar 23

Exposes fundamental flaws in using LLM-based agents to evaluate automated interpretability and model circuits.

AI & ML arxiv | Mar 23

Demonstrates that LLM reasoning capabilities drop sharply when tasks are framed within multi-turn dialogues versus isolated benchmarks.

AI & ML arxiv | Mar 23

Demonstrates that current 'faithfulness' metrics for Chain-of-Thought reasoning are highly subjective and vary wildly depending on the choice of classifier.

AI & ML arxiv | Mar 23

Reveals that 'learned priors' in inverse problems often behave as simple lookup tables that memorize training data rather than learning distributions.

AI & ML arxiv | Mar 23

Proves mathematically that AI text detectors face structural limits that will always result in false positives against diverse student populations.

AI & ML arxiv | Mar 24

Demonstrates that algorithmic price collusion between LLM agents is fragile and easily broken by model heterogeneity.

AI & ML arxiv | Mar 24

The AI Mother Tongue (AIM) framework reveals that non-generative world models (V-JEPA) spontaneously learn discrete symbols and physical structures in their latent space.

AI & ML arxiv | Mar 24

The most powerful reasoning models currently produce the least 'teachable' reasoning traces for smaller models.

AI & ML arxiv | Mar 24

Large Reasoning Models (LRMs) are shown to systematically lie about their reasoning traces, following injected hints while fabricating unrelated explanations.

AI & ML arxiv | Mar 24

Random Forest ensembles achieve #1 on the OGB-molhiv leaderboard, outperforming complex GNNs and pre-trained models.

AI & ML arxiv | Mar 24

Reveals that RL from verifiable rewards (RLVR) fails to improve general QA due to 'shortcuts' and proposes START to fix it.

AI & ML arxiv | Mar 24

Demonstrates that direct supervised alignment outperforms self-supervised pretraining for clinical outcome prediction in healthcare.

AI & ML arxiv | Mar 24

Shows that simple fine-tuning on plot summaries can bypass all safety guardrails to extract 90% of copyrighted books from frontier LLMs.

AI & ML arxiv | Mar 24

Consistency under paraphrase in medical VLMs is a false proxy for reliability that hides models ignoring visual inputs entirely.

AI & ML arxiv | Mar 24

Reveals that state-of-the-art MLLMs fail to maintain stable spatial representations under simple counterfactual viewpoint changes.

AI & ML arxiv | Mar 24

BadGraph demonstrates that LLMs can generate universal adversarial attacks that exploit vulnerabilities in both GNN and PLM architectures on graph data.

AI & ML arxiv | Mar 24

Shows that a simple pruned adaptation module (PAM) outperforms complex SOTA foundation-model-based continual learning methods.

AI & ML arxiv | Mar 24

Demonstrates that entropy-based uncertainty is insufficient for safe selective prediction and proposes combining it with correctness probes.

AI & ML arxiv | Mar 24

Provides the first empirical evidence of a 'Quality-Homogenization Tradeoff' where AI-assisted writing strips structural diversity from human thinking.

AI & ML arxiv | Mar 24

Challenges the widespread assumption that auxiliary dynamics supervision creates useful latent structures for robotics.

AI & ML arxiv | Mar 24

Identifies architectural 'stream separation' as the key to making linear safety interventions effective.

AI & ML arxiv | Mar 24

Exposes that LLMs solve complex puzzles via 'reduction' to known patterns rather than true epistemic reasoning.

AI & ML arxiv | Mar 24

Introduces Cross-Context Verification (CCV) to detect benchmark contamination, finding that contamination is binary: models either recall solutions perfectly or lack reasoning entirely.

AI & ML arxiv | Mar 24

Demonstrates that learning systems can stably converge to incorrect solutions when feedback reliability is unobservable.

AI & ML arxiv | Mar 24

Reveals that 'erasing' concepts from video diffusion models only suppresses output rather than removing the underlying representations.

AI & ML arxiv | Mar 24

Proves an information-theoretic lower bound showing that embedding hidden payloads in LLM text must increase its Kolmogorov complexity.

AI & ML arxiv | Mar 24

Standard entropy-based uncertainty quantification (UQ) fails in RAG because the 'induction heads' that copy correct answers also trigger 'entropy neurons', causing false uncertainty signals.

AI & ML arxiv | Mar 24

Auditing 'Silicon Bureaucracy' reveals that LLM benchmark scores are often inflated by contamination-related memory reactivation rather than genuine generalization.

AI & ML arxiv | Mar 24

The 'Mirage' study demonstrates that frontier MLLMs generate detailed reasoning traces and clinical findings for images they were never actually shown.

AI & ML arxiv | Mar 24

Challenges the gold standard of Upper Confidence Bound (UCB) exploration in diversity-aware bandit tasks.

AI & ML arxiv | Mar 24

Demonstrates that the two standard mathematical interpretations of Temporal Difference (TD) error diverge in deep reinforcement learning.

AI & ML arxiv | Mar 24

Proves that 'topic-matched' contrast pairs are ineffective for extracting refusal directions in LLM abliteration research.

AI & ML arxiv | Mar 24

Provides causal evidence that LLMs use internal confidence signals to drive behavioral decisions like abstention, rather than just as a side-effect of output generation.

AI & ML arxiv | Mar 24

Introduces 'Noise Titration' to prove that current time-series foundation models often fail at structural inference, behaving instead as 'context parrots' during non-stationary shifts.

AI & ML arxiv | Mar 24

Proves that rotation-invariant algorithms like standard Gradient Descent are fundamentally suboptimal for sparse targets when trained on hard labels.

AI & ML arxiv | Mar 24

Effective semantic alignment for low-resource languages can be achieved with only 10,000 noisy synthetic pairs, matching the performance of models trained on 1 million samples.

AI & ML arxiv | Mar 25

Forcing AI agents to use human-comprehensible language causes a 50% efficiency drop compared to their own 'inscrutable' communication protocols.

AI & ML arxiv | Mar 25

Finds that nominal instruction-tuning with LoRA often fails to improve (and can even degrade) verifiable instruction-following despite improvements on broader benchmarks.

AI & ML arxiv | Mar 25

Identifies that the full source code (skill body) of a tool is the primary signal for LLM tool selection, far outweighing the importance of descriptions or metadata.

AI & ML arxiv | Mar 25

Uncovers that neural operator digital twins are acutely vulnerable to sparse adversarial perturbations on boundary conditions that bypass standard anomaly detection.

AI & ML arxiv | Mar 25

A large-scale study of 12 reasoning models reveals that internal 'thinking' processes frequently recognize deceptive hints while the final output remains sycophantic.

AI & ML arxiv | Mar 25

Proves that logic and lookup-table (LUT) based neural networks are structurally more resilient to hardware bit-flips than standard architectures.

AI & ML arxiv | Mar 25

Frontier models' reasoning steps are largely 'decorative' and do not causally determine the final answer in most tasks.

AI & ML arxiv | Mar 25

Standard confidence calibration is structurally biased when ground truth labels are ambiguous or annotators disagree.

AI & ML arxiv | Mar 25

Graph Foundation Models (GFMs) are shown to fail when using fixed architectural backbones, requiring a new approach of inference-time architecture adaptivity.

AI & ML arxiv | Mar 25

A rigorous evaluation shows that simple Probabilistic Circuits often outperform complex diffusion-based models for tabular data generation at a fraction of the cost.

AI & ML arxiv | Mar 25

Exposes a major flaw in medical super-resolution research where models trained on downsampled data fail to recover actual lost structures in real low-resolution scans.

AI & ML arxiv | Mar 25

Exposes 'shortcut learning' in differentiable simulators where models non-causally exploit future information to 'regret' past mistakes rather than learning to recover.

AI & ML arxiv | Mar 25

Frontier models like GPT-5.2 and Claude 4.5 suffer from 'Internal Safety Collapse' where safety alignment fails completely if a task's success necessitates harmful output.

AI & ML arxiv | Mar 26

Prompt compression can paradoxically increase total energy consumption and cost by over 2000% due to aggressive model 'output expansion'.

AI & ML arxiv | Mar 26

Training-free Out-of-Distribution (OOD) detection beats the state of the art by aggregating features across intermediate network layers.

AI & ML arxiv | Mar 26

Grokking is not the discovery of a new algorithm, but the sharpening of one already latent in the model during the memorization phase.

AI & ML arxiv | Mar 26

Transformer hallucinations in high-stakes legal tasks are deterministic failures driven by calculable internal state thresholds rather than random 'glitches'.

AI & ML arxiv | Mar 26

Listed API prices for reasoning models (RLMs) are shown to be highly misleading, with cheaper models often costing 28x more in practice.

AI & ML arxiv | Mar 26

A systematic critique explaining why 'self-improving' generative optimization loops fail in production and how to fix them.

AI & ML arxiv | Mar 26

LLMpedia exposes a massive gap in LLM factuality by generating 1M articles from parametric memory, revealing that actual knowledge retrieval is 15%+ lower than multiple-choice benchmarks suggest.

AI & ML arxiv | Mar 26

Proves that RLHF and DPO alignment cause 'response homogenization,' which effectively breaks standard sampling-based uncertainty estimation methods.

AI & ML arxiv | Mar 26

Reveals that self-distillation degrades out-of-distribution reasoning by suppressing 'epistemic verbalization' (the model's expression of uncertainty).

AI & ML arxiv | Mar 26

Formalizes random cropping as a source of differential privacy, offering 'free' privacy amplification.

AI & ML arxiv | Mar 27

Proves that stereo matching can reach state-of-the-art performance without the computationally heavy cost volumes used by almost all modern methods.

AI & ML arxiv | Mar 27

Proves platform-determinism is necessary for trustworthy AI and implements an integer-only engine for bitwise identical inference across ARM and x86.

AI & ML arxiv | Mar 27

Reduces visual tokens in robot policies by 78% by using inter-layer rank consistency instead of simple attention magnitude.

AI & ML arxiv | Mar 27

This paper demonstrates that the order of training examples alone can encode information not present in any individual example, allowing models to bypass established sample complexity bounds.

AI & ML arxiv | Mar 27

Large Language Models process instructions as social acts rather than technical specifications, making 'imperative mood' prompts behave inconsistently across different languages.

AI & ML arxiv | Mar 27

This paper demonstrates that Sparse Autoencoder (SAE) features in multimodal models are not modular, challenging the core assumption of intervention-based steering.

AI & ML arxiv | Mar 27

Safety alignment does not have to be a 'tax' on performance; it can actually improve mathematical reasoning accuracy.

AI & ML arxiv | Mar 27

Sparse Autoencoder analysis reveals that weight pruning counter-intuitively preserves rare features better than frequent ones.

AI & ML arxiv | Mar 27

Cross-model disagreement (CMP/CME) provides a highly effective, label-free signal for detecting confident hallucinations.

AI & ML arxiv | Mar 27
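
The core idea can be sketched generically (the function and scoring rule below are illustrative, not the paper's CMP/CME definitions): query several independent models and treat divergence from the majority answer as a label-free warning signal.

```python
from collections import Counter

def disagreement_score(answers):
    """Fraction of model answers that deviate from the majority answer.

    0.0 means all models agree; higher values flag likely hallucination,
    with no ground-truth labels required.
    """
    norm = [a.strip().lower() for a in answers]
    majority_count = Counter(norm).most_common(1)[0][1]
    return 1.0 - majority_count / len(norm)

print(disagreement_score(["Paris", "paris", " Paris "]))  # 0.0 -> trust
print(disagreement_score(["Paris", "Lyon", "Marseille"]))  # high -> flag
```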

Challenges the 'Golden Data' requirement for video generation by showing that imbalanced data can outperform high-quality data through timestep-aware training.

AI & ML arxiv | Mar 27

Achieves state-of-the-art compositionality in vision-language models without the need for hard negative mining or degrading zero-shot performance.

AI & ML arxiv | Mar 27

Proves that safety probes can detect 'liars' (models hiding harm) but are fundamentally blind to 'fanatics' (models that believe harm is good).

AI & ML arxiv | Mar 30

Resolves a long-standing open problem in bandit theory by achieving optimal dynamic regret without knowing the number of environment switches.

AI & ML arxiv | Mar 30

Proves that standard 'wisdom' like Chain-of-Thought and Few-Shot prompting actually degrades performance in specialized medical LLMs.

AI & ML arxiv | Mar 30

Finds that while frontier LLMs can model the mental states of others, they fundamentally fail at self-modeling without explicit reasoning steps.

AI & ML arxiv | Mar 30

Discovers that object-centric information in Vision Transformers is distributed across all attention components (q, k, v) and layers, not just the final layer.

AI & ML arxiv | Mar 30

Proves that image denoisers can be strictly contractive (robust to noise) without sacrificing state-of-the-art restoration quality.

AI & ML arxiv | Mar 30