A Beauty Contest Was Judged by AI and the Robots Didn’t Like Dark Skin

Author: Sam Levin

Publisher: The Guardian

Publication Year: 2016

Summary: This article discusses the first beauty contest judged by artificial intelligence (AI), held in 2016. The ostensibly objective judging criteria included factors such as facial symmetry and wrinkles. Beauty.AI, created by Youth Laboratories and supported by Microsoft, received 6,000 submissions from over 100 countries in an effort to identify the best examples of “human beauty.” The results, however, were troubling: of the 44 winners, nearly all were white; the few non-white winners were Asian, and only one had dark skin, even though a large share of the submissions came from Africa and India. The outcome sparked debate about the ways in which algorithms can perpetuate bias and produce offensive results. Alex Zhavoronkov of Beauty.AI attributed the problem to a lack of diversity in the training data: although unintentional, the underrepresentation of minority groups in the dataset led the algorithm to learn patterns that favored the groups it had seen most often. Zhavoronkov aims to correct the algorithm and weed out discriminatory results before the next round of the contest.