AI Is Sending People to Jail — and Getting It Wrong

Author: Karen Hao

Publisher: MIT Technology Review

Publication Year: 2019

Summary: This article is a must-read for anyone who wants to understand how unethical, or simply incorrect, use of machine learning can severely harm marginalized individuals. The author describes several ways law enforcement agencies use predictive modeling to make policing more efficient, including controversial face recognition systems. The most problematic of these tools, however, are the risk assessment algorithms that estimate a defendant's likelihood of reoffending: depending on the resulting risk score, an individual may be kept in jail or given more lenient treatment. The core issue is that these models are trained on historical crime data, which reflects decades of biased policing against specific racial groups. The main takeaway is that, without accounting for the socioeconomic factors behind the data, one can build statistical models that appear accurate yet are deeply biased.
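
To make that takeaway concrete, below is a minimal, hypothetical sketch, not the tool discussed in the article and not real data, of how a risk model trained on biased arrest records can end up scoring one group as higher risk even when true behavior is identical across groups. All feature names, rates, and thresholds are illustrative assumptions.

```python
# Hypothetical illustration (not the tool from the article, and not real data):
# a risk model trained on biased arrest records can look "accurate" while
# systematically scoring the over-policed group as higher risk.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Two groups; group 1 is policed more heavily (an assumed, simulated bias).
group = rng.integers(0, 2, size=n)
# Prior police contacts act as a proxy feature inflated by over-policing.
prior_contacts = rng.poisson(lam=1 + 2 * group)

# True reoffending is independent of group in this simulation.
true_reoffend = rng.random(n) < 0.30

# The training label is re-ARREST, which also depends on policing intensity,
# not only on behavior.
detection_rate = np.where(group == 1, 0.9, 0.5)
rearrested = true_reoffend & (rng.random(n) < detection_rate)

# Train a simple risk model on the biased labels.
X = prior_contacts.reshape(-1, 1)
risk = LogisticRegression().fit(X, rearrested).predict_proba(X)[:, 1]

# Compare outcomes for people who did NOT truly reoffend.
high_risk = risk >= np.quantile(risk, 0.75)  # flag the top quarter as "high risk"
for g in (0, 1):
    innocent = (group == g) & ~true_reoffend
    print(
        f"group {g}: mean risk score {risk[innocent].mean():.2f}, "
        f"wrongly flagged high-risk {high_risk[innocent].mean():.1%}"
    )
```

In this simulation the model never sees group membership directly, yet the over-policed group receives higher average risk scores and is wrongly flagged as "high risk" far more often among people who never truly reoffended, which mirrors the article's point that a model can fit its training labels well while reproducing the bias baked into them.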