Can We Protect AI from Our Biases?

Author: Robin Hauser

Publisher: TED

Publication Year: N/A

Summary: In this talk, Robin Hauser discusses unconscious bias in artificial intelligence (AI) algorithms. While producing her documentary about unconscious bias, she became interested in whether it is possible to create AI without bias. As she discovered, creating unbiased algorithms is difficult for several reasons. First, the humans who create AI algorithms are themselves biased; even when those biases are unconscious, they carry through into the algorithm. Second, algorithms are designed to learn from the input they are given, which can have negative effects: some algorithms become biased through the input they receive after deployment. For example, the chatbot Tay quickly learned from the input it was fed on the internet and had to be taken down. Similarly, IBM's Watson team let Watson read the Urban Dictionary, which prompted Watson to swear. Hauser also explains how word embeddings in translation services can encode gender bias, particularly around professions. Lastly, she touches on the criminal justice system, which uses historical data to score defendants on the risk they pose to society. Hauser's message on these issues is captured by a phrase common in data science: "Garbage in, garbage out." If the datasets fed into an AI algorithm are not representative of reality, the algorithm will be biased as well.
