Author: Sam Levin
Publisher: The Guardian
Publication Year: 2017
Summary: The following article discusses how an artificial intelligence, developed by Stanford University researchers, was used to guess people’s sexualities from photos of their faces. The algorithm was 81% accurate with men and 74% accurate with women. Because many LGBTQ people face discrimination and harassment, the algorithm has a dangerous potential to violate people’s privacy and endanger their safety. The algorithm identified various physical features that distinguished gay men and women from straight men and women. Because it performed better than human judges did, it could theoretically be used to “out” members of the LGBTQ community. Though the research was based on 35,000 publicly available images, many people who identify as LGBTQ prefer to keep their sexualities private. The study also did not include any people of color, nor did it consider transgender or bisexual people. The research was intended to support the theory that sexuality has a genetic basis and that people are born with their orientation. This might be a supportive notion for the LGBTQ community, but if such technology were used without people’s consent, it could have terrible repercussions. Even when a project has good intentions, ethical considerations must still be taken into account to prevent harm to the communities reflected in the data.