The widespread emergence of bias phenomena is among the most adverse impacts of new data-intensive sciences and technologies. The causes of such undesirable behaviours must be traced back to the data themselves, as well as to certain design choices of machine learning algorithms. Modelling bias from a logical point of view requires extending the vast family of defeasible logics and logics for uncertain reasoning with logics that capture a few fundamental properties of biased predictions. However, a logically grounded approach to machine learning fairness is still at an early stage in the literature. In this paper, we discuss current approaches to the topic, formulate general logical desiderata for logics to reason with and about bias, and present a novel approach.