Christophe Hurlin, Christophe Pérignon and Sébastien Saurin
Additional contact information
Christophe Hurlin: University of Orleans
Christophe Pérignon: HEC Paris
Sébastien Saurin: University of Orleans
Abstract: In credit markets, screening algorithms aim to discriminate between good-type and bad-type borrowers. However, when doing so, they can also discriminate between individuals sharing a protected attribute (e.g., gender, age, or racial origin) and the rest of the population. This can be unintentional and can originate from the training dataset or from the model itself. We show how to formally test the algorithmic fairness of scoring models and how to identify the variables responsible for any lack of fairness. We then use these variables to optimize the fairness-performance trade-off. Our framework provides guidance on how algorithmic fairness can be monitored by lenders, controlled by their regulators, and improved for the benefit of protected groups, all while maintaining a high level of forecasting accuracy.
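For readers unfamiliar with fairness testing, the sketch below illustrates one common group-fairness check: a two-proportion z-test on approval rates across a protected attribute (a statistical-parity notion). This is an illustrative assumption chosen for concreteness, not the specific test developed in the paper; the inputs y_hat and protected are hypothetical.

    import numpy as np
    from scipy.stats import norm

    def statistical_parity_test(y_hat, protected):
        """Two-proportion z-test comparing approval rates between the
        protected group (protected == 1) and the rest of the population.

        y_hat     : binary credit decisions (1 = credit granted)
        protected : binary protected-attribute indicator
        Returns the approval-rate gap and a two-sided p-value.
        """
        y_hat = np.asarray(y_hat)
        protected = np.asarray(protected)

        n1 = (protected == 1).sum()
        n0 = (protected == 0).sum()
        p1 = y_hat[protected == 1].mean()  # approval rate, protected group
        p0 = y_hat[protected == 0].mean()  # approval rate, everyone else

        # Pooled standard error under H0: equal approval rates
        p_pool = y_hat.mean()
        se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n0))
        z = (p1 - p0) / se
        p_value = 2 * norm.sf(abs(z))
        return p1 - p0, p_value

    # Hypothetical usage with simulated decisions
    rng = np.random.default_rng(0)
    protected = rng.integers(0, 2, size=5000)
    y_hat = rng.binomial(1, np.where(protected == 1, 0.55, 0.60))
    gap, pval = statistical_parity_test(y_hat, protected)
    print(f"approval-rate gap: {gap:.3f}, p-value: {pval:.4f}")

A significant gap in approval rates is only one possible fairness criterion; the paper's framework goes further by tracing any detected unfairness back to specific input variables.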
Keywords: Fairness; Credit scoring models; Discrimination; Machine Learning; Artificial Intelligence
69 pages, February 18, 2021
Full text files
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3785882 (HTML, full text)
RePEc:ebg:heccah:1411