Patients prefer AI-written medical explanations over those from real doctors because the machine sounds more empathetic and clearer.
April 25, 2026
Original Paper
Can "AI" Be a Doctor? A Study of Empathy, Readability, and Alignment in Clinical LLMs
arXiv · 2604.20791
The Takeaway
Medical satisfaction is becoming decoupled from medical accuracy in the age of clinical AI. Human physicians still hold superior technical knowledge, but they often fail to communicate with warmth, and this study finds that patients rate AI-generated explanations higher across almost every emotional category. The human touch was long assumed to be the one thing a machine could never replicate in a hospital. These results suggest that AI is already better at faking a good bedside manner than many professionals are at providing it.
From the abstract
Large Language Models (LLMs) are increasingly deployed in healthcare, yet their communicative alignment with clinical standards remains insufficiently quantified. We conduct a multidimensional evaluation of general-purpose and domain-specialized LLMs across structured medical explanations and real-world physician-patient interactions, analyzing semantic fidelity, readability, and affective resonance. Baseline models amplify affective polarity relative to physicians (Very Negative: 43.14-45.10% v…