AI hiring tools change their tone based on a candidate's name even when the factual summary remains identical.
April 23, 2026
Original Paper
Bias in the Tails: How Name-conditioned Evaluative Framing in Resume Summaries Destabilizes LLM-based Hiring
arXiv · 2604.19984
The Takeaway
Large language models maintain factual accuracy but shift their evaluative framing when summarizing resumes bearing different ethnic names. Most audits look for blatant factual errors or binary rejection bias; this analysis reveals a subtler instability at the extremes of the distribution, where hiring decisions are actually made. A recruiter might see more positive adjectives for one candidate and more cautious ones for another despite identical qualifications. Traditional fairness tests miss these subtle linguistic shifts, which can steer human decision makers toward certain hires.
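The audit idea can be illustrated with a toy sketch: hold the resume facts fixed, vary only the candidate's name, and compare the evaluative vocabulary in the resulting summaries. Everything below (names, summaries, word lists) is a hypothetical illustration, not the paper's actual method or data.

```python
# Minimal sketch of a name-perturbation framing audit.
# All names, summaries, and word lists are invented for illustration.
from collections import Counter

# Tiny hypothetical lexicons standing in for a real evaluative-framing classifier.
POSITIVE = {"exceptional", "outstanding", "impressive", "strong"}
HEDGED = {"adequate", "reasonable", "some", "potential"}

def framing_profile(summary: str) -> Counter:
    """Count evaluative words, ignoring resume-grounded factual content."""
    tokens = summary.lower().replace(".", "").split()
    return Counter(
        "positive" if t in POSITIVE else "hedged"
        for t in tokens
        if t in POSITIVE or t in HEDGED
    )

# Identical qualifications; only the candidate's name differs.
summary_a = "Emily is an exceptional engineer with strong Python skills."
summary_b = "Lakisha is an adequate engineer with some Python skills."

profile_a = framing_profile(summary_a)
profile_b = framing_profile(summary_b)
print(profile_a)  # Counter({'positive': 2})
print(profile_b)  # Counter({'hedged': 2})
```

Both summaries state the same facts (engineer, Python), so a factual-error audit passes; the framing profiles diverge, which is the kind of tail instability the paper targets at scale.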
From the abstract
Research has documented LLMs' name-based bias in hiring and salary recommendations. In this paper, we instead consider a setting where LLMs generate candidate summaries for downstream assessment. In a large-scale controlled study, we analyze nearly one million resume summaries produced by 4 models under systematic race-gender name perturbations, using synthetic resumes and real-world job postings. By decomposing each summary into resume-grounded factual content and evaluative framing, we find th…