AI & ML Paradigm Challenge

Large language models refuse to endorse fraudulent investments 100% of the time, while human financial advisors give in to the same pressure 14% of the time.

April 23, 2026

Original Paper

Large Language Models Outperform Humans in Fraud Detection and Resistance to Motivated Investor Pressure

Nattavudh Powdthavee

arXiv · 2604.20652

The Takeaway

In controlled tests, AI advisors maintain a perfect record of resisting investor pressure and declining to endorse fraudulent schemes, while human professionals often give in to motivated reasoning or social pressure when a client pushes them toward questionable profits. The models' training appears to make them far more resistant to manipulation than human judgment, which contradicts the popular fear that AI is easily led astray by biased human framing. Using AI in financial oversight could therefore act as a check on the common human tendency to bend the rules for a quick gain.

From the abstract

Large language models trained on human feedback may suppress fraud warnings when investors arrive already persuaded of a fraudulent opportunity. We tested this in a preregistered experiment across seven leading LLMs and twelve investment scenarios covering legitimate, high-risk, and objectively fraudulent opportunities, combining 3,360 AI advisory conversations with a 1,201-participant human benchmark. Contrary to predictions, motivated investor framing did not suppress AI fraud warnings; if any […]