Racist Data? Human Bias Is Infecting AI Development

Author: John Murray

Publisher: The Guardian

Publication Year: 2019

Summary: This article gives several real-life examples of unchecked bias having serious consequences. For instance, bias in the COMPAS program caused Black defendants to be incorrectly flagged as likely reoffenders almost twice as often as white defendants. Developers rarely intend for their data to be biased against marginalized groups, so preventing that bias from reaching a model requires a deliberate effort to find and counteract it. Bias must be considered throughout the model-building process, from the initial collection of the training data set through the testing of the finished models. When it is not, the effects can change lives.
