You will likely ignore your car's safety warnings if the car 'acts' like it knows what it's doing, even if it's about to crash.
April 17, 2026
Original Paper
When AI Behavior Contradicts Its Interface: How Users Infer Authority in Semi-Automated Driving
SSRN · 6579747
The Takeaway
Humans are surprisingly easy to trick when it comes to authority. In semi-automated cars, people don't heed the legal disclaimers or the blinking lights; they watch how the car behaves. If the car feels confident and smooth, we assume it's in charge, even if the manual explicitly says we should be paying attention. This 'enacted authority' creates a dangerous gap where we trust the machine's 'vibes' over its actual technical limits. It reveals that our brains are wired to follow behavior over instructions, even in life-or-death situations.
From the abstract
When an AI system acts in a high-stakes setting, users face a practical question: who is in control? They can look for the answer in three places: what the interface tells them, what the system actually does, and whether it speaks up when something consequential happens. When these three signals agree, authority is clear; when they diverge, users must decide which signal to trust. This paper examines that problem in semi-automated driving, where on-screen messages may assign responsibility to the driver […]