Results of the first AI-judged beauty contest deemed racist
The first international beauty contest decided by an algorithm, Beauty.AI, has sparked controversy after the results revealed racial bias on the part of the robots. Roughly 6,000 people from more than 100 countries submitted photos, hoping to be judged on objective factors such as facial symmetry and wrinkles. Out of 44 winners, nearly all were white, a handful were Asian, and one had dark skin.
What is Artificial Intelligence?
AI is intelligence exhibited by machines and is applied when a machine mimics cognitive functions associated with human minds, such as learning and problem solving. An ideal "intelligent" machine is a rational agent that perceives its environment and takes actions that maximize its chances of success.
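In code terms, such an agent repeatedly perceives its environment and picks whichever action it estimates will work best. The Python sketch below is purely illustrative; the thermostat percept, action set, and utility function are hypothetical examples, not taken from any real system.

```python
# Minimal sketch of a rational agent: perceive, then choose the action
# that maximizes an estimate of success. All names here are hypothetical.
from typing import Callable, Sequence

def choose_action(percept: float, actions: Sequence[str],
                  utility: Callable[[float, str], float]) -> str:
    """Return the action with the highest estimated utility for this percept."""
    return max(actions, key=lambda a: utility(percept, a))

# Toy example: a thermostat agent whose percept is the room temperature
# and whose "success" is ending up close to 21 degrees C.
def comfort(temp: float, action: str) -> float:
    effect = {"heat": +1.0, "cool": -1.0, "idle": 0.0}[action]
    return -abs((temp + effect) - 21.0)  # smaller distance to 21 C is better

print(choose_action(18.0, ["heat", "cool", "idle"], comfort))  # -> heat
```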
The Beauty.AI contest
Beauty.AI is a set of algorithms created by a "deep learning" group called Youth Laboratories, which is supported by Microsoft. It judged more than 6,000 selfies of individuals from all over the world between the ages of 18 and 69. The five robot judges used artificial intelligence to analyze specific traits to determine which faces most closely resembled the idea of "human beauty."
Contest judges
Beauty.AI used five algorithms to act as judges: RYNKL scored people on their youthfulness, PIMPL analyzed pimples and pigmentation, Symmetry Master evaluated facial symmetry, AntiAgeist estimated the difference between chronological and perceived age, and MADIS compared entrants with stored models.
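Beauty.AI has not published how the five scores were combined, but conceptually the setup is an ensemble of independent scorers. Below is a minimal sketch, assuming each judge maps a photo to a score in [0, 1] and that the scores are simply averaged; both assumptions are illustrative, not documented behavior.

```python
# Illustrative only: the judge names mirror the article, but each scorer
# here is a stub, and the averaging rule is an assumption.
from typing import Callable, Dict

Judge = Callable[[bytes], float]  # photo bytes -> score in [0, 1]

def overall_score(photo: bytes, judges: Dict[str, Judge]) -> float:
    """Average the individual judges' scores into one ranking value."""
    return sum(judge(photo) for judge in judges.values()) / len(judges)

judges: Dict[str, Judge] = {
    "RYNKL": lambda p: 0.8,            # youthfulness (stub score)
    "PIMPL": lambda p: 0.9,            # pimples and pigmentation (stub)
    "Symmetry Master": lambda p: 0.7,  # facial symmetry (stub)
    "AntiAgeist": lambda p: 0.6,       # chronological vs. perceived age (stub)
    "MADIS": lambda p: 0.5,            # similarity to stored models (stub)
}

print(overall_score(b"<photo bytes>", judges))  # -> 0.7
```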
How can AI produce racist results?
According to Alex Zhavoronkov, Beauty.AI's chief science officer, the main problem was that the data used to establish standards of attractiveness did not include enough minorities, leading to biased results. 75% of the entrants were European, 7% were Indian, and only 1% were African. This has sparked renewed debates about the ways in which algorithms can perpetuate biases, yielding unintended and often offensive results.
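The mechanism is easy to demonstrate. In the toy sketch below, a model learns a single "standard of beauty" (a centroid) from training data with roughly the contest's 75/7/1 demographic split; the one-dimensional "face features" and the scoring rule are invented for illustration, but the effect matches the article's explanation: the learned standard sits near the majority group, so majority faces score highest.

```python
# Toy demonstration of dataset bias: the 75%/7%/1% split follows the
# article; the 1-D features and nearest-centroid scorer are hypothetical.
import random
random.seed(0)

# Each group's faces cluster around a different (made-up) feature value.
train = ([random.gauss(0.0, 1.0) for _ in range(7500)]    # European, ~75%
         + [random.gauss(3.0, 1.0) for _ in range(700)]   # Indian, ~7%
         + [random.gauss(6.0, 1.0) for _ in range(100)])  # African, ~1%

centroid = sum(train) / len(train)  # the learned "standard of beauty"

def score(face: float) -> float:
    """Faces closer to the majority-dominated centroid score higher."""
    return 1.0 / (1.0 + abs(face - centroid))

# The centroid lands near 0 (the majority cluster), so typical European
# faces outscore typical Indian and African faces by a wide margin.
print(round(score(0.0), 2), round(score(3.0), 2), round(score(6.0), 2))
```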
Similar AI glitches in the past
Last year, Google's photo app was found to have labeled black people as gorillas. In March 2016, Microsoft released its chatbot Tay, which soon began using racist language and promoting neo-Nazi views on Twitter. Last month, just after Facebook replaced the human editors who had been curating its "trending" news stories with an algorithm, the feature immediately began promoting fake and vulgar stories in the news feed.