Author: Jakub Wiśniewski, Przemysław Biecek

Publisher: The R Journal

Publication Year: 2022

Summary: This article observes that, as sophisticated machine learning methods become ubiquitous, it has become the norm to sort models into "explainable" and "unexplainable" categories. Researchers at MI2 DataLab have developed an open-source R package, fairmodels, that provides a convenient and flexible workflow for detecting, visualizing, and mitigating bias in machine learning models, regardless of the underlying modeling approach. The paper introduces the package, lays out the conceptual background on different measures of fairness, walks through examples of detecting and exploring bias in a model, and describes the specific functions used to diagnose and mitigate it. The tool appears to be growing in popularity, and it challenges the notion that fairness cannot be effectively pursued with black-box models.
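To make the workflow concrete, below is a minimal sketch of the detection step, assuming the German credit dataset bundled with fairmodels and the DALEX explainer interface the paper builds on; the logistic regression model is illustrative, and exact arguments may vary across package versions.

```r
# Sketch of the fairmodels audit workflow: wrap any model in a DALEX
# explainer, then check it against standard fairness metrics.
library(fairmodels)
library(DALEX)

# German credit data shipped with fairmodels; Risk is the binary target.
data("german")

# Any classifier works -- a logistic regression keeps the example simple.
glm_model <- glm(Risk ~ ., data = german, family = binomial(link = "logit"))

# The explainer abstracts away the model type, which is what lets
# fairmodels treat "black box" and "explainable" models uniformly.
explainer <- explain(glm_model,
                     data = german[, -1],
                     y    = as.numeric(german$Risk) - 1)

# Audit fairness with Sex as the protected attribute.
fobject <- fairness_check(explainer,
                          protected  = german$Sex,
                          privileged = "male")

print(fobject)  # pass/fail summary across the fairness metrics
plot(fobject)   # metric ratios relative to the privileged group
```

The resulting fairness object can be plotted to visualize which metrics exceed acceptable thresholds; the package also provides mitigation functions (e.g., pre-processing the data or post-processing predictions) that plug into the same workflow.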