Data speak but sometimes lie: A game-theoretic approach to data bias and algorithmic fairness
February 23, 2026
In this work, we develop a novel information-theoretic and logic-based approach to data bias in Machine Learning predictions and show its relevance in the specific context of fairness evaluation. We frame predictions made on biased data as Ulam games, which formalise key aspects of data-driven inference and from which a variation of the rational non-monotonic consequence relation can be defined. We use this framework to model how differing levels of noise across input features affect Machine Learning predictions. To the best of our knowledge, this is the first game-theoretic formalisation of ML unfairness.
authors:
Chiara Manganini, Esther Anna Corsi, and Giuseppe Primiero
keywords:
Machine learning, Bias, Data quality, Data-driven inference, Non-monotonic logic
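
To give a concrete feel for the Ulam-game framing, here is a minimal sketch of the classical Rényi–Ulam game (twenty questions with a bounded number of lies), on which the abstract's framework is built. This is an illustration of the underlying game, not the paper's formal construction; the function names (`ulam_update`, `play`) and the simple halving strategy are choices made for this sketch.

```python
def ulam_update(states, query, answer, max_lies):
    """One round of a Renyi-Ulam game: revise the questioner's state.

    states: dict mapping each still-possible secret to the number of lies
            that must have occurred for it to be the true secret.
    query:  the set asked about ("is the secret in this set?").
    answer: the (possibly lying) yes/no reply.
    A candidate whose membership contradicts the answer is charged one lie;
    any candidate charged more than max_lies is eliminated.
    """
    new_states = {}
    for cand, lies in states.items():
        consistent = (cand in query) == answer
        charged = lies if consistent else lies + 1
        if charged <= max_lies:
            new_states[cand] = charged
    return new_states


def play(secret, candidates, max_lies, lie_rounds):
    """Run a game with a naive halving strategy; the responder lies
    exactly in the rounds listed in lie_rounds (at most max_lies lies)."""
    states = {c: 0 for c in candidates}
    round_no = 0
    while len(states) > 1:
        remaining = sorted(states)
        query = set(remaining[: len(remaining) // 2])
        truthful = secret in query
        answer = (not truthful) if round_no in lie_rounds else truthful
        states = ulam_update(states, query, answer, max_lies)
        round_no += 1
    return next(iter(states))  # the unique surviving candidate
```

For example, `play(5, range(8), 1, {1})` recovers the secret 5 even though the responder lies in round 1: the lie only delays elimination, because each candidate can absorb at most `max_lies` contradictions before being discarded. Noisy features in the abstract's setting play the role of the lying responder.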