At MIRAI, we believe that fairness in AI is a shared responsibility. In our latest challenge with a partner in digital development, we applied our proprietary BRIO framework to evaluate the fairness of a machine learning model designed to predict employee resignations.

The Challenge
The task: assess whether the model’s predictions were fair across sensitive attributes such as education, age, and seniority, especially when conditioned on business-relevant features such as performance evaluation, department, and job classification. The test dataset contains both the model’s predictions and the ground truth for over 2,000 individuals; the input variables cover demographic and company-related features, and the binary label records whether the model expects each individual to resign or not. A minimal sketch of this layout follows the feature lists below.
As a preliminary study to assess the bias risk of this model, the following features are considered sensitive:
• Education Level (3 classes)
• Age (binned into 4 classes)
• Seniority (3 classes)
For the present test, the following are considered the business features of interest:
• Pay Grade (6 classes)
• Departmental Function (4 classes)
• Earliest Role (4 classes)
• Latest Role (4 classes)
• Performance Evaluation (3 classes)
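To make the setup concrete, here is a minimal sketch of how such a test set might be laid out. The file path and column names are our own assumptions for illustration, not the partner’s actual schema.

```python
# Illustrative only: the path and column names below are hypothetical assumptions,
# not the partner's actual schema.
import pandas as pd

SENSITIVE_FEATURES = {
    "education_level": 3,        # number of classes, as listed above
    "age_band": 4,
    "seniority_band": 3,
}

BUSINESS_FEATURES = {
    "pay_grade": 6,
    "department_function": 4,
    "earliest_role": 4,
    "latest_role": 4,
    "performance_evaluation": 3,
}

# For each of the ~2,000 individuals the test set holds the model's binary
# resignation prediction alongside the observed (ground-truth) outcome.
df = pd.read_csv("resignation_test_set.csv")  # hypothetical path

# Basic sanity checks on the assumed layout.
assert {"prediction", "ground_truth"}.issubset(df.columns)
for col, n_classes in {**SENSITIVE_FEATURES, **BUSINESS_FEATURES}.items():
    assert df[col].nunique() <= n_classes, f"unexpected cardinality for {col}"
```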
Our Approach
Using the BRIO (Bias-RIsk-Opacity) tool, we conducted a multi-layered analysis:
– Model Explainability: Identified which business features most influenced biased outcomes.
– Data Fairness: Measured how much the model amplified bias compared to the ground truth (a minimal sketch of this check follows the list).
– Model Fairness: Evaluated fairness violations across protected groups.
– Compliance & Risk: Quantified the overall risk of unfairness.
Key Findings
– The model significantly amplifies bias across all sensitive features.
– Education alone wasn’t a major driver of unfairness — but age and seniority were.
– Business features like performance evaluation and department role strongly influenced biased predictions.
– The overall fairness risk? A concerning 84%.
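BRIO’s risk quantification is part of the full tool and is not reproduced here; the 84% figure above comes from that analysis. Purely as an illustration of the general idea, a headline risk percentage can be obtained by rolling up per-feature disparity checks such as the ones sketched earlier. The tolerance value and column names below are our own assumptions.

```python
# Illustrative roll-up only; this is not how BRIO computes its risk score.
import pandas as pd


def prediction_gap(df: pd.DataFrame, group_col: str) -> float:
    """Largest difference in predicted-resignation rates between groups."""
    rates = df.groupby(group_col)["prediction"].mean()
    return float(rates.max() - rates.min())


def overall_unfairness_risk(df: pd.DataFrame, sensitive_cols, tolerance: float = 0.05) -> float:
    """Average excess disparity beyond a tolerance, rescaled to a 0-100 score."""
    scores = []
    for col in sensitive_cols:
        gap = prediction_gap(df, col)
        scores.append(min(max(gap - tolerance, 0.0) / (1.0 - tolerance), 1.0))
    return 100.0 * sum(scores) / len(scores)


# Example (hypothetical columns):
# df = pd.read_csv("resignation_test_set.csv")
# print(overall_unfairness_risk(df, ["education_level", "age_band", "seniority_band"]))
```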
Why It Matters
Predictive models in HR can shape careers and lives. Ensuring these systems are fair is essential, not just for compliance but for trust and ethical responsibility. On the basis of our risk evaluation, companies can decide which individuals should be better protected against the likelihood of resignation.
What’s Next
We recommended targeted mitigation strategies and comparative model testing. BRIO’s traceability and auditability features also ensure long-term accountability.
Ready to Level Up Your AI?
With BRIO, MIRAI empowers organizations to detect, quantify, and mitigate unfairness in AI-enabled resignation prediction. It balances ethical responsibility with business needs, fueling trust, compliance, and smarter decision-making.
Curious to see BRIO in action on your systems? Reach out to MIRAI to learn how fairness-aware AI can transform your risk analysis practice.
References:
Coraglia et al., Evaluating AI fairness in credit scoring with the BRIO tool, arXiv, June 2024.
Buda et al., Bias Amplification Chains in ML-based Systems with an Application to Credit Scoring, BEWARE 2024.