SeriesFusion
Science, curated & edited by AI

A simple table can trick a top-tier language model into the wrong answer just by rearranging its rows.

Users often assume that AI understands the logical relationships inside a spreadsheet or database. This study shows instead that models rely heavily on the specific order in which information is presented: changing the row sequence alone can cause a model to fail at basic extraction and reasoning tasks. This fragility suggests that LLMs are not truly grasping the structured nature of the data. Businesses using AI for data analysis must now account for permutation risk in their automated pipelines.

Original Paper

The Power of Order: Fooling LLMs with Adversarial Table Permutations

Xinshuai Dong, Haifeng Chen, Xuyuan Liu, Shengyu Chen, Haoyu Wang, Shaoan Xie, Kun Zhang, Zhengzhang Chen

arXiv  ·  2605.00445

Large Language Models have achieved remarkable success and are increasingly deployed in critical applications involving tabular data, such as Table Question Answering. However, their robustness to the structure of this input remains a critical, unaddressed question. This paper demonstrates that modern LLMs exhibit a significant vulnerability to the layout of tabular data. Specifically, we show that semantically-invariant permutations of rows and columns, rearrangements that do not alter the table's underlying content, can degrade model performance.
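To make the core idea concrete, here is a minimal sketch of what a semantically-invariant permutation looks like in practice. The table contents, the Markdown serialization, and the reversal used as the permutation are all illustrative assumptions, not details taken from the paper: the point is only that two prompts can encode identical facts while differing as token sequences, which is all the model actually sees.

```python
def serialize_table(header, rows):
    """Render a table as the kind of Markdown block commonly fed to an LLM."""
    lines = [" | ".join(header), " | ".join("---" for _ in header)]
    lines += [" | ".join(str(cell) for cell in row) for row in rows]
    return "\n".join(lines)

# Hypothetical example data (not from the paper).
header = ["city", "population_millions"]
rows = [
    ["Tokyo", 37.4],
    ["Delhi", 31.2],
    ["Shanghai", 27.8],
]

# One semantically-invariant permutation: reversing the row order.
# The facts in the table are unchanged; only their presentation moves.
permuted = rows[::-1]

original_prompt = serialize_table(header, rows)
permuted_prompt = serialize_table(header, permuted)

# Same information content...
assert sorted(map(tuple, rows)) == sorted(map(tuple, permuted))
# ...but a different token sequence reaches the model.
assert original_prompt != permuted_prompt
```

A robust table-reasoning system would answer identically given either prompt; the paper's finding is that current LLMs often do not, which is why permutation checks like this belong in evaluation pipelines.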