What Do We Do About the Biases in AI?

Authors: James Manyika, Jake Silberg, Brittany Presten

Publisher: Harvard Business Review

Publication Year: 2019

Summary: This article discusses how artificial intelligence (AI) can help identify and reduce the impact of human biases, but also how it can make the problem worse by baking in and deploying biases at scale in sensitive areas. In Broward County, Florida, an algorithm mislabeled African-American defendants as “high risk.” Bias can creep into algorithms in several ways; Amazon, for example, stopped using a hiring algorithm after finding it favored applicants based on words like “executed” or “captured,” which were more common on men’s resumes.

The authors lay out several imperatives for business and organizational leaders. First, ensure that the AI systems they use improve on human decision-making. Second, accelerate the progress already made in addressing bias in AI. One of the most complex steps here is also the most obvious: understanding and measuring “fairness.” Researchers have made progress on a wide variety of techniques for ensuring AI systems meet fairness criteria, so leaders need to stay up-to-date on AI research and establish responsible processes that can mitigate bias. This could take the form of running algorithms alongside human decision makers, or using “explainability techniques” to pinpoint what led a model to reach a decision and to understand why outcomes may differ. Third, consider how humans and machines can work together to mitigate bias; transparency about an algorithm’s confidence in its recommendations can help humans decide how much weight to give them. Finally, invest more in diversifying the AI field itself and in fact-based conversations about potential human biases: a more diverse AI community would be better equipped to anticipate, review, and spot bias, and to engage the communities affected. CEOs must be acutely aware of the risks of human biases in AI and work to reduce them.
Society has become less tolerant of such missteps.
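The article's point about measuring "fairness" can be made concrete with a small sketch. The function and data below are invented for illustration (the article does not prescribe a specific metric); this shows one widely used criterion, demographic parity, which compares the rate at which a binary classifier flags members of two groups:

```python
# Hypothetical sketch: measuring demographic parity for a binary
# "high risk" classifier. All names and data here are illustrative
# assumptions, not taken from the article.

def selection_rate(predictions):
    """Fraction of cases the model labels positive (e.g., 'high risk')."""
    return sum(predictions) / len(predictions)

def demographic_parity_ratio(preds_group_a, preds_group_b):
    """Ratio of the smaller selection rate to the larger one.

    A value of 1.0 means both groups are flagged at the same rate;
    a common rule of thumb flags ratios below 0.8 as a disparity.
    """
    rate_a = selection_rate(preds_group_a)
    rate_b = selection_rate(preds_group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Invented predictions (1 = flagged "high risk") for two groups.
group_a = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]  # selection rate 0.7
group_b = [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]  # selection rate 0.3

print(f"Demographic parity ratio: {demographic_parity_ratio(group_a, group_b):.2f}")
```

Demographic parity is only one of many competing fairness definitions; which one is appropriate depends on the decision being made, which is part of why the article calls this step complex.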
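The recommendation that transparency about an algorithm's confidence can help humans weight its output might look like the following. This is a minimal sketch under assumed names and an assumed threshold, not a method from the article: confident model recommendations are accepted automatically, while uncertain ones are escalated to a human reviewer.

```python
# Hypothetical sketch: routing low-confidence model recommendations to a
# human reviewer. The threshold and case data are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff for automatic acceptance

def route_decision(label, confidence):
    """Return who decides: the model if confident, else a human reviewer."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("model", label)
    return ("human_review", label)

# Invented (recommendation, model confidence) pairs.
cases = [("approve", 0.95), ("deny", 0.60), ("approve", 0.88)]
for label, conf in cases:
    decider, outcome = route_decision(label, conf)
    print(f"{outcome}: decided by {decider}")
```

Exposing the confidence score, rather than only the final label, is what lets the human decision maker know which recommendations deserve extra scrutiny.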