AI & ML New Capability

Enables privacy-preserving cross-model inference by using homomorphic encryption and linear alignment to map representations between independently trained LLMs.

March 20, 2026

Original Paper

Secure Linear Alignment of Large Language Models

Matt Gorbett, Suman Jana

arXiv · 2603.18908

The Takeaway

This framework allows two parties to perform collaborative inference without sharing raw data or model weights. It exploits the phenomenon of representational convergence to align disparate models through a secure, encrypted affine transformation with sub-second latency.
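The alignment step can be pictured as fitting an affine map between the two models' hidden states on a set of shared probe inputs. The sketch below is a hypothetical illustration (names and dimensions are assumptions, not the paper's code) using plain least squares on synthetic, exactly-affine data:

```python
import numpy as np

# Hypothetical sketch of the linear-alignment step (not the paper's code):
# learn an affine map (W, b) so that model A's representations, once mapped,
# approximate model B's representations on the same probe inputs.

rng = np.random.default_rng(0)

d_a, d_b, n = 16, 12, 200          # toy hidden sizes and probe-set size
H_a = rng.normal(size=(n, d_a))    # model A's hidden states for n probe inputs

# Ground-truth affine relation used to generate toy data (unknown in practice).
W_true = rng.normal(size=(d_a, d_b))
b_true = rng.normal(size=(d_b,))
H_b = H_a @ W_true + b_true        # model B's hidden states for the same inputs

# Fit the affine map by least squares: append a bias column to H_a.
X = np.hstack([H_a, np.ones((n, 1))])
coef, *_ = np.linalg.lstsq(X, H_b, rcond=None)
W_hat, b_hat = coef[:-1], coef[-1]

# Mapped representations should closely match model B's.
err = np.linalg.norm(H_a @ W_hat + b_hat - H_b) / np.linalg.norm(H_b)
print(f"relative alignment error: {err:.2e}")
```

On real models the two representation spaces are only approximately related, so the residual would be nonzero; here the toy data is exactly affine and the fit recovers it to floating-point precision.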

From the abstract

Language models increasingly appear to learn similar representations, despite differences in training objectives, architectures, and data modalities. This emerging compatibility between independently trained models introduces new opportunities for cross-model alignment to downstream objectives. Moreover, it unlocks new potential application domains, such as settings where security, privacy, or competitive constraints prohibit direct data or model sharing. In this work, we propose a privacy-preserving […]
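One reason an affine alignment pairs well with homomorphic encryption is that it needs only additions and plaintext multiplications, exactly the operations additive and leveled HE schemes support. The toy wrapper below is NOT real cryptography (the "ciphertext" is just a NumPy array); it only illustrates the shape of the computation a server could run on encrypted representations:

```python
from dataclasses import dataclass
import numpy as np

# Toy stand-in for an HE ciphertext (NOT real encryption): it illustrates
# why an affine map y = x @ W + b is evaluable on encrypted data, since it
# uses only ciphertext-plaintext multiplication and addition.

@dataclass
class Enc:
    data: np.ndarray  # in a real scheme this would be an opaque ciphertext

    def matmul_plain(self, W: np.ndarray) -> "Enc":
        return Enc(self.data @ W)          # ciphertext x plaintext matrix

    def add_plain(self, b: np.ndarray) -> "Enc":
        return Enc(self.data + b)          # ciphertext + plaintext vector

def decrypt(c: Enc) -> np.ndarray:
    return c.data

x = np.array([1.0, 2.0, 3.0])               # client's hidden representation
W = np.eye(3) * 2.0                          # server's learned alignment map
b = np.array([0.5, 0.5, 0.5])

enc_x = Enc(x)                               # client "encrypts" and sends x
enc_y = enc_x.matmul_plain(W).add_plain(b)   # server applies the affine map
y = decrypt(enc_y)                           # client decrypts the result
print(y)                                     # [2.5 4.5 6.5]
```

Because the server only ever touches ciphertexts, the client's raw representation is never exposed, while the server's weights never leave its side, matching the no-data-sharing, no-weight-sharing setting the abstract describes.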