Ethical AI Team Says Bias Bounties Can Expose Algorithmic Flaws Faster

Bias in AI systems is proving to be a major stumbling block in efforts to integrate the technology more broadly into our society. A new initiative that rewards researchers for finding biases in AI systems could help solve the problem.

The effort is modeled on the bug bounties software makers pay to cybersecurity experts who alert them to potential security flaws in their products. The idea is not new; “bias bounties” were first proposed by AI researcher and entrepreneur JB Rubinovitz in 2018, and various organizations have run such challenges before.

But the new effort aims to create a permanent forum for bias bounty competitions that is independent of any particular organization. Made up of volunteers from a variety of companies, including Twitter, the so-called “Bias Buccaneers” plan to hold regular competitions, or “mutinies,” and launched the first such challenge earlier this month.

“Bug bounties are a standard cybersecurity practice that has yet to find its way into the algorithmic bias community,” the organizers say on their site. “While early one-off events demonstrated enthusiasm for bounties, Bias Buccaneers is the first nonprofit to create ongoing mutinies, collaborate with tech companies, and pave the way for transparent, repeatable evaluations of AI systems.”

This first contest aims to tackle bias in image-detection algorithms, but rather than asking people to target specific AI systems, the competition invites researchers to build tools capable of detecting biased datasets. The idea is to create a machine learning model that can accurately label each image in a dataset with its skin tone, perceived gender, and age group. The contest ends November 30 and carries a first prize of $6,000, a second prize of $4,000, and a third prize of $2,000.
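To make the task concrete, here is a minimal sketch of what such a multi-attribute labeling model might look like in PyTorch. The backbone, head sizes, and category counts (ten skin-tone bins, three perceived-gender labels, four age groups) are illustrative assumptions, not the contest’s actual specification.

```python
# Minimal sketch of a multi-attribute image labeler (illustrative only).
# The category counts below are placeholder assumptions, not the contest spec.
import torch
import torch.nn as nn
from torchvision import models

class AttributeLabeler(nn.Module):
    def __init__(self, n_tones=10, n_genders=3, n_ages=4):
        super().__init__()
        backbone = models.resnet18(weights=None)  # shared feature extractor
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()               # strip the original classifier
        self.backbone = backbone
        # One classification head per attribute
        self.tone_head = nn.Linear(feat_dim, n_tones)
        self.gender_head = nn.Linear(feat_dim, n_genders)
        self.age_head = nn.Linear(feat_dim, n_ages)

    def forward(self, x):
        feats = self.backbone(x)
        return (self.tone_head(feats),
                self.gender_head(feats),
                self.age_head(feats))

model = AttributeLabeler()
images = torch.randn(8, 3, 224, 224)  # a dummy batch of 224x224 RGB images
tone_logits, gender_logits, age_logits = model(images)
print(tone_logits.shape)  # torch.Size([8, 10])
```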

The challenge lies in the fact that the source of algorithmic bias is often not so much the algorithm itself as the nature of the data it is trained on. Automated tools that can quickly assess how balanced a collection of images is with respect to attributes that are frequent sources of discrimination could help AI researchers steer clear of clearly biased data sources.
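As a rough illustration of what such a balance check could involve, the sketch below scores the distribution of labels for a single attribute using normalized entropy. Both the metric and the hypothetical skin-tone labels are assumptions for the example, not tools specified by the challenge.

```python
# A minimal sketch of a dataset-balance check, assuming per-image labels have
# already been produced by a model like the one above. Normalized entropy is
# one common imbalance measure, not a contest requirement.
from collections import Counter
import math

def balance_score(labels):
    """Return normalized entropy in [0, 1]; 1.0 means perfectly balanced."""
    counts = Counter(labels)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    entropy = -sum(p * math.log(p) for p in probs)
    max_entropy = math.log(len(counts)) if len(counts) > 1 else 1.0
    return entropy / max_entropy

# Hypothetical skin-tone labels for a small, skewed dataset
tone_labels = ["tone_1"] * 70 + ["tone_5"] * 20 + ["tone_9"] * 10
print(f"Skin-tone balance: {balance_score(tone_labels):.2f}")  # ~0.73
```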

But organizers say it’s just the first step in an effort to create a toolkit for assessing bias in datasets, algorithms, and applications, and ultimately to create standards for how to deal with algorithmic bias, fairness, and explainability.

This is not the only such effort. One of the leaders of the new initiative is Twitter’s Rumman Chowdhury, who helped organize the first AI bias bounty competition last year, targeting an algorithm the platform uses to crop images, which users complained favored white, masculine faces over Black, feminine faces.

The contest gave hackers access to the company’s model and challenged them to find flaws in it. Participants found a wide range of problems, including a preference for stereotypically beautiful faces, an aversion to people with white hair (a marker of age), and a preference for memes with English rather than Arabic script.

Stanford University also recently concluded a competition that challenged teams to come up with tools designed to help people audit commercially deployed or open-source AI systems for discrimination. And current and future European laws could require companies to regularly audit their data and algorithms.

But integrating AI bug bounties and algorithmic auditing and making them effective will be easier said than done. Inevitably, companies that build their business on their algorithms will resist any effort to discredit them.

Building on lessons learned from auditing regimes in other areas, such as finance and environmental and health regulation, researchers recently described some of the crucial ingredients of effective accountability. One of the most important criteria they identified was the meaningful involvement of independent third parties.

The researchers pointed out that current voluntary AI audits often involve conflicts of interest, such as the target organization paying for the audit, helping to define its scope, or having the opportunity to review results before they are made public. This concern was echoed in a recent report by the Algorithmic Justice League, which noted the outsized role of target organizations in current cybersecurity bug bounty programs.

Finding a way to fund and support truly independent auditors and bug hunters will be a tall order, especially as they come up against some of the best-resourced companies in the world. Fortunately, though, there seems to be a growing sense within the industry that tackling this problem will be key to maintaining users’ trust in their services.

Image credit: Jakob Rosen / Unsplash
