In recent years, the use of algorithms from the Machine Learning literature has skyrocketed. Always seeking better performance at the price of ever more data, these algorithms have revolutionized multiple fields. However, an increasing number of issues has arisen from the use of such models. In this thesis, we are interested in two of them: the lack of explainability of these algorithms and their lack of fairness, which can potentially lead to discrimination. We show how these two subjects are deeply linked by presenting a unifying probabilistic framework, which allows us to transpose techniques from one of these fields to the other.