Providing five in-context examples of a physics problem can make a language model stop applying the scientific formulas it already knows.
Few-shot examples are generally expected to boost performance by clarifying the task. This study shows that examples can instead trigger a knowledge displacement effect: the model stops drawing on its deep pre-trained understanding of scientific laws and falls back on simple pattern matching over the prompt, which often lowers accuracy on complex technical tasks. The implication is that in highly specialized domains, less context can produce better results, and users should be cautious about adding examples when the model already possesses the necessary domain knowledge.
In-Context Examples Suppress Scientific Knowledge Recall in LLMs
arXiv · 2604.27540
Scientific reasoning rarely stops at what is directly observable; it often requires uncovering hidden structure from data. From estimating reaction constants in chemistry to inferring demand elasticities in economics, this latent structure recovery is what distinguishes scientific reasoning from curve fitting. Large language models (LLMs) can often recall and apply relevant scientific formulas, but we show that this ability is surprisingly easy to suppress: adding in-context examples can shift the model away from its pre-trained knowledge of scientific laws and toward shallow pattern matching over the prompt, often at the cost of accuracy.
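To make the "latent structure recovery" framing concrete, here is a minimal sketch (our illustration, not taken from the paper) of the chemistry case it mentions: recovering the latent Arrhenius parameters of a reaction from observed rate constants through the known physical law, rather than curve-fitting the raw measurements. The data and parameter values are synthetic and purely illustrative.

```python
# Minimal sketch: recovering latent Arrhenius parameters (A, Ea) from
# observed rate constants. Synthetic, illustrative data only.
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

# Synthetic measurements: rate constants k at several temperatures T,
# generated from an assumed "true" A and Ea.
T = np.array([300.0, 320.0, 340.0, 360.0])  # kelvin
true_A, true_Ea = 1.0e7, 5.0e4              # 1/s, J/mol (assumed ground truth)
k = true_A * np.exp(-true_Ea / (R * T))

# The Arrhenius law k = A * exp(-Ea / (R T)) becomes linear after a log
# transform: ln k = ln A - (Ea / R) * (1 / T). A least-squares line in
# (1/T, ln k) space therefore recovers the latent parameters.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_est = -slope * R          # slope = -Ea / R
A_est = np.exp(intercept)    # intercept = ln A

print(f"estimated Ea = {Ea_est:.3e} J/mol, A = {A_est:.3e} 1/s")
```

The recovered quantities are never observed directly; they are inferred through a known law, which is exactly the formula-driven reasoning the paper argues few-shot examples can displace in favor of surface pattern matching.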