Author: Andrew Burt
Publisher: Harvard Business Review
Publication Year: 2020
Summary: This article explains why ensuring that an AI algorithm does not unintentionally discriminate against particular groups is a complex undertaking. In practice, it is often extremely difficult to truly remove all proxies for protected classes, and determining what constitutes unintentional discrimination at a statistical level is far from straightforward. So what should companies do to avoid deploying discriminatory algorithms? Burt suggests they start by looking to a host of legal and statistical precedents for measuring and ensuring algorithmic fairness.