AI & ML Paradigm Challenge

The most popular method for explaining AI decisions in finance frequently provides explanations that are statistically no better than random noise.

April 26, 2026

Original Paper

A Cautionary Note on Interpretable Machine Learning for Macro-to-FX Transmission: Evidence from SHAP, Permutation Testing, and Walk-Forward Validation

SSRN · 6636458

The Takeaway

Interpretable machine learning tools like SHAP often create a mirage of understanding in complex domains like foreign exchange. When tested properly, most macroeconomic signals have no reliable predictive power for exchange rates, and the models often perform significantly worse than a simple random walk despite their convincing explanations. This suggests that many financial AI systems are detecting patterns where none exist. Traders relying on these tools are essentially following a sophisticated version of a coin flip.
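The random-walk benchmark mentioned above can be illustrated with a minimal walk-forward sketch (not the paper's code): a naive trailing-mean "model" is evaluated one step ahead against the random-walk forecast, which simply predicts that tomorrow's price equals today's. The window size and the trailing-mean model are illustrative assumptions, chosen only to show why beating the random walk is hard on a series that really is a random walk.

```python
import math
import random


def walk_forward_rmse(prices, window=30):
    """Walk-forward one-step-ahead evaluation.

    At each step t, forecast prices[t + 1] two ways:
      - a naive trailing-mean model over the last `window` prices
        (stand-in for any fitted model; illustrative assumption),
      - the random-walk benchmark: forecast = last observed price.
    Returns (model_rmse, random_walk_rmse).
    """
    model_sq, rw_sq, n = 0.0, 0.0, 0
    for t in range(window, len(prices) - 1):
        history = prices[t - window:t + 1]
        model_fc = sum(history) / len(history)  # trailing-mean forecast
        rw_fc = prices[t]                       # random walk: tomorrow = today
        actual = prices[t + 1]
        model_sq += (actual - model_fc) ** 2
        rw_sq += (actual - rw_fc) ** 2
        n += 1
    return math.sqrt(model_sq / n), math.sqrt(rw_sq / n)


# Simulate a pure random-walk "exchange rate" with a fixed seed.
rng = random.Random(7)
prices = [100.0]
for _ in range(1000):
    prices.append(prices[-1] + rng.gauss(0.0, 1.0))

model_rmse, rw_rmse = walk_forward_rmse(prices)
```

On a genuine random walk, the trailing-mean model's RMSE exceeds the benchmark's, because averaging over the window drags the forecast away from the most recent (and most informative) price. A full Diebold-Mariano test, as used in the paper, would additionally test whether such an RMSE gap is statistically significant.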

From the abstract

We ask a simple question: can interpretable ML reliably detect macroeconomic transmission to exchange rates? To find out, we apply XGBoost with SHAP to four EUR-base currency pairs over 2000-2024, then run multiple layers of validation: permutation testing with bootstrap confidence intervals, Bonferroni and Benjamini-Hochberg corrections, publication-lag robustness checks, a logistic regression benchmark, Granger-causality pre-screening, and walk-forward Diebold-Mariano evaluation against the random walk.
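The core of the permutation-testing layer can be sketched in a few lines (a simplified stand-in, not the paper's implementation): the association between a macro signal and FX returns is recomputed many times with the returns shuffled, which destroys any real link while preserving both marginal distributions. If the observed association is no larger than what shuffling routinely produces, the signal is indistinguishable from noise. Correlation is used here as the association measure for simplicity; the paper applies the same logic to SHAP-based importances.

```python
import random
import statistics


def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den


def permutation_pvalue(signal, returns, n_perm=999, seed=0):
    """P-value for |correlation| under the null of no relationship.

    Shuffling `returns` breaks any signal-return link; the p-value is
    the fraction of shuffles whose |correlation| matches or exceeds
    the observed one (with the standard add-one adjustment).
    """
    rng = random.Random(seed)
    observed = abs(pearson(signal, returns))
    shuffled = list(returns)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if abs(pearson(signal, shuffled)) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)


rng = random.Random(42)
macro = [rng.gauss(0.0, 1.0) for _ in range(60)]
noise_returns = [rng.gauss(0.0, 1.0) for _ in range(60)]

p_real = permutation_pvalue(macro, macro)           # perfect relationship
p_noise = permutation_pvalue(macro, noise_returns)  # unrelated series
```

With many signals tested at once, these raw p-values would then be passed through the Bonferroni or Benjamini-Hochberg corrections the abstract lists, which is what drives most candidate macro signals below the significance threshold.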