SeriesFusion
Science, curated & edited by AI

Software designed to make bank loans fairer is actually just moving risk from one group to another without helping the people it was supposed to protect.

Algorithmic debiasing is often presented as a way to fix systemic inequality in credit scoring. This research argues that these tools often amount to a mathematical shell game that reallocates risk rather than removing barriers. The statistics may look more balanced on paper, but the actual circumstances of marginalized applicants do not improve. These fairness adjustments often ignore the underlying economic realities of the people they are meant to assist. Banks and regulators need to stop relying on purely statistical fixes for what is fundamentally a social and economic problem.
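The "shell game" dynamic can be sketched with a toy simulation (an illustration only, not the paper's HMDA-based model): equalizing approval rates across two groups via a group-specific score cutoff makes the parity metric look better, while the underlying risk of the newly approved applicants is merely shifted onto the lender's book rather than reduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: two applicant groups with different
# underlying default-risk distributions (assumed for illustration).
risk_a = rng.beta(2, 8, 5000)   # group A: lower average default risk
risk_b = rng.beta(3, 6, 5000)   # group B: higher average default risk

# A single score cutoff approves unequal shares of each group.
cutoff = 0.30
approve_a = risk_a < cutoff
approve_b = risk_b < cutoff

# A simple "debiasing" adjustment: pick a group-specific cutoff for B
# so that its approval rate matches group A's.
target_rate = approve_a.mean()
cutoff_b = np.quantile(risk_b, target_rate)
approve_b_adj = risk_b < cutoff_b

print(f"approval rates: A={approve_a.mean():.2f}, "
      f"B before={approve_b.mean():.2f}, B after={approve_b_adj.mean():.2f}")

# The parity metric improves, but the marginal group-B applicants now
# approved carry risk between the old and new cutoffs: the risk has been
# reallocated, not removed.
newly_approved = approve_b_adj & ~approve_b
print(f"mean risk of newly approved B applicants: "
      f"{risk_b[newly_approved].mean():.2f}")
```

In this sketch the approval-rate gap closes, yet nothing about any applicant's economic situation has changed; the adjustment only relabels who bears the risk, which is the paper's core critique of purely statistical fixes.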

Original Paper

Does Algorithmic Debiasing Really Improve Fair Lending Compliance?

Richard Pace

SSRN  ·  6702218

Algorithmic debiasing ("AD"), a popular machine-learning approach applied to credit scoring models to search for fairer, Less Discriminatory Alternative ("LDA") model versions, is promoted as a means to improve a lender's fair lending performance without sacrificing model predictive accuracy. Using a simple synthetic credit scoring model estimated on publicly available Home Mortgage Disclosure Act ("HMDA") data, I evaluate these claims by performing an in-depth investigation of how such fairness i…