After significant backlash last year, the company admitted in May that its automatic cropping algorithm repeatedly cropped out Black faces in favor of White ones, and that it favored men over women, according to Twitter's own research. Multiple Twitter users had demonstrated the bias using pictures of themselves or of famous figures, such as former President Barack Obama.

Rumman Chowdhury, director of Twitter's META (Machine Learning Ethics, Transparency and Accountability) team, explained that the company decided to change the algorithm and admitted that companies like Twitter often "find out about unintended ethical harms once they've already reached the public."

On Friday, Chowdhury and META product manager Jutta Williams unveiled the algorithmic bias bounty competition, which they said is part of this year's DEF CON AI Village.

"In May, we shared our approach to identifying bias in our saliency algorithm (also known as our image cropping algorithm), and we made our code available for others to reproduce our work. We want to take this work a step further by inviting and incentivizing the community to help identify potential harms of this algorithm beyond what we identified ourselves," the two said.

In creating the program, they were inspired by how the research and hacker communities helped the security field establish best practices for identifying and mitigating vulnerabilities in order to protect the public. Twitter wants to build a similar community, they said, but one focused on machine learning ethics that will help the company "identify a broader range of issues than we would be able to on our own."

"With this challenge we aim to set a precedent at Twitter, and in the industry, for proactive and collective identification of algorithmic harms," Chowdhury and Williams wrote. "For this challenge, we are re-sharing our saliency model and the code used to generate a crop of an image given a predicted maximally salient point and asking participants to build their own assessment.
Successful entries will consider both quantitative and qualitative methods in their approach."

A submission page on HackerOne offers more information, the rubric used to score each entry and details on how to enter. Entries will be judged by Ariel Herbert-Voss, Matt Mitchell, Peiter "Mudge" Zatko and Patrick Hall. First place wins $3,500, second place $1,000 and third place $500, with additional $1,000 awards for the most innovative and most generalizable entries. Winners will be announced at the DEF CON AI Village workshop on August 8.

Williams told ZDNet that beyond learning more about the photo cropping feature, she expects to learn what people think "harm" entails.

"As a product manager, I endeavor to put myself in the shoes of people who use or are affected by our products to understand what a word like that means. Traditionally, we hear from people already looking at algorithmic bias – and I'm expecting that we'll hear from a much broader community of people who will share a lot of perspective on what harm means to them," Williams said.

"Rumman floated the idea with me and our CTO after a conversation with the AI Village organizers – it takes a pretty risk-tolerant company to go first on something like this. Twitter leadership was willing – enthusiastic even. We didn't have a lot of time to make the deadline for DEFCON, so the two of us got right into brainstorming how to scope something that we could release within the few weeks we had to make a go/no-go decision."

She added that running the competition will make Twitter much wiser about how a future event should run, and should make the next one easier for participants and more inclusive. The company also hopes to learn how its technology may need to be immediately corrected, Williams explained, and how it can better prevent harm.
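To make the challenge concrete: the organizers describe generating a crop of an image from a predicted maximally salient point. A minimal sketch of that final cropping step might look like the following. This is illustrative only, not Twitter's released code; the function name, the fixed crop size and the assumption that the crop fits inside the image are all assumptions made here.

```python
def crop_around_salient_point(img_w, img_h, sx, sy, crop_w=400, crop_h=400):
    """Center a crop_w x crop_h window on the predicted salient point
    (sx, sy), clamping it so the window stays inside the image bounds.
    Returns the crop box as (left, top, right, bottom). Assumes the
    crop dimensions do not exceed the image dimensions."""
    left = min(max(sx - crop_w // 2, 0), img_w - crop_w)
    top = min(max(sy - crop_h // 2, 0), img_h - crop_h)
    return left, top, left + crop_w, top + crop_h

# A salient point near the top-right corner pulls the window to the edge:
# crop_around_salient_point(1000, 600, 900, 100) -> (600, 0, 1000, 400)
```

Bias audits of this kind of pipeline focus on the step upstream of this one: which point the saliency model predicts, and therefore whose face ends up inside the window.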
The team will also gain a better understanding of how to test and assess algorithms for bias, Williams said. She noted that there are many unknowns in the emerging field of machine learning bias research, and that few programs actively address algorithmic risks.

"I have hope we'll have a few more unknowns that we can start working on solving. Most importantly, maybe, we're going to learn about working with this community, ways to better measure and classify harms, what it takes to validate reports, ways to mitigate and/or prevent new harms in the future – all of which we can share back to the community," Williams said. "This wasn't run for our benefit alone – I wouldn't personally have put the sweat equity into it if it weren't for the goal of ultimate transparency."