His findings were released as part of a competition Twitter designed to uncover AI issues within its saliency algorithm. The Algorithmic Bias Bounty Challenge, launched last month, offered cash prizes in exchange for assessing and rebuilding the code, which has been criticized for skewed image cropping and racial bias.
Nabbing first place and $3,500, Kulynych, who studies at the Swiss Federal Institute of Technology, confirmed these claims and uncovered further issues with the saliency algorithm. “The target model is biased towards deeming more ‘salient’ the depictions of people that appear slim, young, of light or warm skin color and smooth skin texture, and with stereotypically feminine facial traits,” the summary of his findings said.
“These internal biases inherently translate into harms of under-representation when the algorithm is applied in the wild, cropping out those who do not meet the algorithm’s preferences of body weight, age, or skin color,” he added. “This bias could result in exclusion of minoritized populations and perpetuation of stereotypical beauty standards in thousands of images.”
Twitter celebrated Kulynych’s submission to the competition, writing that his code “shows how algorithmic models amplify real-world biases and societal expectations of beauty.”
Four additional awards went to individuals and start-ups for their contributions.