Author: Jaime Brandon
Publisher: SpringerLink
Publication Year: 2020
Summary: This article examines the widely acknowledged problem that many algorithms are biased in one way or another. In this paper, the researchers deliberately chose not to prevent bias from forming in their model in order to learn from that process. The article argues that transparency about bias instigates change: by choosing not to debias the model, analysts become more aware of the problems that exist. Constructed models often reflect the real-world biases of their engineers and users, and we cannot address the problems in our data analyses if we continually block and debias the models and algorithms we create. The article cites Microsoft's Twitter bot and virtual assistants as real-world examples of biased AI gone wrong. Overall, the main takeaway is that data analysts can learn a great deal from biased models and algorithms.