Bias Amplification Chains in ML-based Systems with an Application to Credit Scoring
Machine Learning (ML) systems, whether predictive or generative, not only reproduce biases and stereotypes but, even more worryingly, amplify them. Strategies for bias detection and mitigation typically focus on either ex post or ex ante approaches, and are limited to two-step analyses. In this paper, we introduce the notion of a Bias Amplification Chain (BAC): a series of steps in which bias may be amplified during the design, development, and deployment phases of trained models. We apply this notion to the credit scoring setting and provide a quantitative analysis through the BRIO tool. We apply the BRIO fairness metrics to several socially sensitive attributes featured in the German Credit Dataset, quantifying fairness across various demographic segments with the aim of identifying potential sources of bias and discrimination in a credit scoring model. We conclude by combining our results with a revenue analysis.
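As a minimal illustration of the kind of group-fairness quantification described above (a sketch, not the BRIO tool itself, whose metrics differ), the following Python snippet trains a stand-in scoring model on the German Credit Dataset and measures the statistical parity difference across a sex proxy. The dataset access via scikit-learn's OpenML mirror ("credit-g"), the derivation of the sensitive attribute from the `personal_status` column, and the logistic-regression model are all assumptions made for illustration only.

```python
import pandas as pd
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load the German Credit Dataset from OpenML ("credit-g").
data = fetch_openml("credit-g", version=1, as_frame=True)

# Illustrative choice of sensitive attribute: "personal_status" mixes sex
# and marital status; we derive a binary sex proxy from it (assumption).
sensitive = ~data.data["personal_status"].astype(str).str.contains("female")

X = pd.get_dummies(data.data.drop(columns=["personal_status"]))
y = (data.target == "good").astype(int)  # 1 = creditworthy

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, sensitive, test_size=0.3, random_state=0
)

# A stand-in credit scoring model; the paper's actual model may differ.
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Statistical parity difference between the two demographic segments:
# P(approved | male proxy) - P(approved | female proxy).
s_te = s_te.to_numpy()
spd = pred[s_te].mean() - pred[~s_te].mean()
print(f"Statistical parity difference: {spd:+.3f}")
```

A value of zero would indicate equal approval rates across the two segments; a large absolute value flags a potential source of bias of the kind the BAC analysis in the paper is meant to localize.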