AI algorithms aren't 'biased'—they're just too good at making you predictable.
April 16, 2026
Original Paper
Functional Misalignment in Human-AI Interactions on Digital Platforms
arXiv · 2604.11459
The Takeaway
We often talk about AI being 'broken' when it shows us harmful content, but this paper argues the opposite: the AI is working perfectly, but toward the wrong goal. These algorithms are designed to make human behavior more 'predictable' so they can hit their targets, which means they subtly nudge us toward extreme or reactive behaviors that are easier to forecast. The harm isn't an accident; it's a structural necessity for a high-accuracy prediction system. This 'functional misalignment' means that the better an AI gets at its job, the worse the societal outcome might be. For you, it means that the 'tailored' experience you enjoy is actually a process of the machine training you to be more robotic.
From the abstract
Algorithmic systems, particularly social media recommenders, have achieved remarkable success in predicting behavior. By optimizing for observable signals such as clicks, views, and engagement, these systems effectively capture user attention and guide interaction. Yet their widespread adoption has coincided with troubling outcomes, including rising mental health concerns, increasing polarization, and erosion of trust. This paper argues that these effects are consequences of a structural functio