Author: Matthew Hutson
Publisher: Science
Publication Year: 2017
Summary: This article discusses how artificial intelligence (AI) reflects and extends our existing culture. A common misconception holds that because AI is automated and programmed, it is immune to bias, since it makes decisions without the influence of human emotion. In reality, these systems are trained on large amounts of human-generated data, so they inevitably absorb the patterns of human behavior embedded in that data. The article describes an automated tool that measures how strongly human names are associated with other words. The algorithm linked common white names with more pleasant sets of words and common black names with more unpleasant sets of words, showing that bias had been baked into the system in a detrimental and discriminatory way. Computers do exactly what we tell them to do, so we have to be extremely careful with how we train and build the programs and models they run, and data scientists need to test their models to ensure that unintentional bias is detected and accounted for.
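The kind of association test described above can be illustrated with a minimal sketch. This is not the study's actual code: the vectors below are made-up three-dimensional toy values, whereas the research behind the article measured associations in real word embeddings trained on web text, using cosine similarity between word vectors.

```python
import math

# Hypothetical toy "embeddings" for illustration only.
# Real systems use high-dimensional vectors learned from large text corpora.
embeddings = {
    "flower":     [0.9, 0.1, 0.2],
    "insect":     [0.1, 0.9, 0.3],
    "pleasant":   [0.8, 0.2, 0.1],
    "unpleasant": [0.2, 0.8, 0.2],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(word, attr_a, attr_b):
    """Similarity to attribute A minus similarity to attribute B.
    A positive score means the word leans toward A."""
    return (cosine(embeddings[word], embeddings[attr_a])
            - cosine(embeddings[word], embeddings[attr_b]))

print(association("flower", "pleasant", "unpleasant"))  # positive: leans pleasant
print(association("insect", "pleasant", "unpleasant"))  # negative: leans unpleasant
```

If names (rather than "flower" or "insect") score consistently higher against pleasant or unpleasant attribute words, the embedding has absorbed a discriminatory association from its training data, which is exactly the pattern the article reports.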