Over the last decade, Machine Learning methods have become increasingly
popular for building decision algorithms. Originally designed for
recommendation systems on the Internet, they are now widely used
in a large number of highly sensitive areas such as medicine, human
resources (hiring policies), banking and insurance (lending),
policing, and justice (criminal sentencing). The decisions made by
what is commonly referred to as AI have a growing impact on people's
lives. The whole machinery of these techniques relies on the fact that a
decision rule can be learned from a set of labeled examples,
called the learning sample, and then applied to a
population which is assumed to follow the same underlying
distribution. Hence the decision is highly influenced by the choice of
the learning sample. But this sample may present some bias or
discrimination that the algorithm may learn and
then propagate to the whole population through automatic decisions and,
even worse, provide a mathematical legitimacy for this unfair
treatment. Classification algorithms are a particular locus of
fairness concerns, since classifiers map individuals to outcomes.
Hence, achieving fair treatment in machine learning has become a
growing field of interest. To this end, several definitions of fairness
have been considered. In this paper we focus on the notion of
disparate impact for protected variables. Indeed, some variables,
such as sex, age, or ethnic origin, are potential sources of unfair
treatment, since they convey information that should not be
exploited by the algorithm. Such variables are called protected
variables in the literature. An algorithm is said to be fair with
respect to these attributes when its outcome does not allow
inference on the information they convey. Of course, the naive
solution of ignoring these attributes when learning the classifier
does not ensure fairness, since the protected variables may be closely
correlated with other features that allow the classifier to reconstruct
them.
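To fix ideas, a common way to quantify disparate impact, which may differ in details from the formalization adopted later in this paper, compares the positive-outcome rates of a binary classifier $g$ across the groups defined by a binary protected variable $S \in \{0,1\}$, where $S=0$ denotes, say, the group potentially exposed to discrimination:
\[
\mathrm{DI}(g) \;=\; \frac{\mathbb{P}\big(g(X)=1 \mid S=0\big)}{\mathbb{P}\big(g(X)=1 \mid S=1\big)}.
\]
The classifier is then said to have disparate impact at level $\tau \in (0,1]$ when $\mathrm{DI}(g) \le \tau$; the value $\tau = 0.8$ corresponds to the well-known "four-fifths rule".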