Author: Eloise Goldsmith
Publisher: Who What Why
Publication Year: 2021
Summary: This article discusses how Clearview AI has amassed a database of up to 10 billion faces, along with other biometric identifiers. U.S. Immigration and Customs Enforcement (ICE) plans to continue partnering with the company, even though ICE has been known to use surveillance tools such as facial recognition unethically. Clearview obtained billions of faces by scraping them from public websites. The problem with Clearview’s actions is that collecting and analyzing people’s faces without their consent invades their privacy. A further problem arises because facial recognition struggles to correctly identify people of color: if an algorithm misidentifies someone, that person can become the target of wrongful actions such as arrest. Clearview has also not been transparent about the algorithm it has created and has not shown the public how accurate its model actually is. It is important to remember that, as data scientists, we should be as transparent as possible with the people we work with. We should not just build a model and use it for a company’s purposes; we should be able to talk openly about the models we create and be as transparent as we can about our algorithms. This does not mean giving away all the information, since that could let other companies steal the model and use it for their own gain, but we should understand our model as well as possible and know both its strengths and its weaknesses (if any).
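The transparency the summary calls for could take the form of disaggregated error rates: reporting how often the model misidentifies people, broken down by demographic group. The sketch below is a hypothetical illustration (the groups and numbers are made up, not Clearview's data) of computing per-group error rates from labeled predictions.

```python
# Hypothetical illustration: measuring how a face-recognition model's
# accuracy differs across demographic groups. The data is synthetic;
# the point is that per-group error rates are the kind of information
# a transparent vendor could publish.

def per_group_error_rates(records):
    """records: list of (group, correct) pairs, where `correct` is
    True if the model identified the person correctly."""
    totals, errors = {}, {}
    for group, correct in records:
        totals[group] = totals.get(group, 0) + 1
        if not correct:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Synthetic predictions: (group label, was the match correct?)
sample = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

print(per_group_error_rates(sample))
# group_a errs on 1 of 4 samples, group_b on 3 of 4 — a gap like this
# is exactly what an audit should surface before deployment.
```

A published audit in this spirit would let the public see whether error rates are roughly equal across groups, rather than taking the vendor's overall accuracy claim at face value.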