AI Trained on 4Chan Becomes ‘Hate Speech Machine’

Author: Matthew Gault

Publisher: Vice

Publication Year: 2022

Summary: This article serves as an extreme example of unethical data practices and offers two important takeaways. First, artificial intelligence (AI) professionals must take care to ensure that the data their models are trained on contain as little bias as possible. Kilcher's model was trained on extremely biased data, so it inevitably produced extremely biased results. Second, Kilcher could have benefited from having others on the project to weigh the ethics and potential harms. He saw nothing wrong with his use of the data because he rationalized that nobody was being directly harmed; having people he respected review his work before deployment could have prevented the harm it caused.