Author: Richard Van Noorden

Publisher: Nature

Publication Year: 2020

Summary: In this article, science journalist Richard Van Noorden examines how facial-recognition technology is being used for malicious purposes. The Chinese government has abused facial recognition to profile Uyghurs through mass surveillance camera networks in order to send them to "re-education camps." One such algorithm had been developed at an American university and published in a scholarly journal, prompting many to ask why such algorithms were being openly celebrated in academia. A growing number of scientists are now urging data scientists to refuse to work on unethical projects that could harm communities, and many are asking why facial-recognition technology should continue to be developed at all when doing so ethically is so difficult. Training facial-recognition models properly requires images of many people's faces, which are often used without their consent; pictures are scraped from sites such as Facebook and Instagram to train the models. To address these problems, such models should be built in public to ensure that developers are meeting ethical expectations, and a model shown to be causing harm in the world should be taken down. Simply raising awareness within the AI community would also help reduce the problems caused by the technology.