Inference-time 'steering' of code LLMs gives precise control over which programming languages and libraries a model uses, without prompt engineering or fine-tuning.
March 26, 2026
Original Paper
Steering Code LLMs with Activation Directions for Language and Library Control
arXiv · 2603.23629
The Takeaway
By identifying linear directions in activation space that correspond to specific ecosystems (e.g., PyTorch vs. TensorFlow), the authors demonstrate that you can force a model to use a specific library even when the prompt asks for the opposite. This lets developers enforce coding standards or migrate between libraries at inference time.
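The mechanics of such an intervention are simple to sketch: given the hidden states at a chosen layer (e.g., captured via a forward hook), steering adds a fixed direction, scaled by a strength coefficient, to every token position. The function name, toy dimensions, and the scale `alpha` below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def steer_hidden_states(hidden, direction, alpha=8.0):
    """Add a scaled steering direction to every token's hidden state.

    hidden:    (seq_len, d_model) activations at one layer
    direction: (d_model,) steering vector (e.g., toward "use PyTorch")
    alpha:     steering strength; negating it steers toward the other
               ecosystem of the pair
    """
    unit = direction / np.linalg.norm(direction)  # normalize once
    return hidden + alpha * unit                  # broadcast over tokens

# Toy demo: 4 tokens, 16-dim hidden states, a random direction.
rng = np.random.default_rng(0)
hidden = rng.normal(size=(4, 16))
direction = rng.normal(size=16)
steered = steer_hidden_states(hidden, direction, alpha=8.0)
```

In a real model this function would run inside a forward hook on the chosen transformer layer at every decoding step, so the shift is applied throughout generation.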
From the abstract
Code LLMs often default to particular programming languages and libraries under neutral prompts. We investigate whether these preferences are encoded as approximately linear directions in activation space that can be manipulated at inference time. Using a difference-in-means method, we estimate layer-wise steering vectors for five language/library pairs and add them to model hidden states during generation. Across three open-weight code LLMs, these interventions substantially increase generation […]