Artificial Intelligence Has a Problem With Gender and Racial Bias. Here’s How to Solve It

Author: Joy Buolamwini

Publisher: Time Magazine

Publication Year: 2019

Summary: In the following article, notable data ethics advocate Joy Buolamwini shares her journey combating gender and racial bias in artificial intelligence (AI). After encountering biased facial analysis software that could not detect her dark-skinned face, the author was motivated to seek out similar examples of discriminatory AI across the industry. After recognizing the depth of the problem, Buolamwini launched the Safe Face Pledge, an initiative “designed to prohibit lethal use and lawless police use of facial analysis technology.” The article clearly explains the prevalence of racial and gender bias in AI by discussing facial recognition algorithms sold by tech giants such as IBM, Microsoft, and Amazon that classify light-skinned men far more accurately than dark-skinned women. The author calls for increased diversity within these firms to achieve a higher degree of representation and inclusivity in the systems they build.