The White House–supported hacking exercise designed to expose weaknesses in generative AI systems will take place this summer at the Defcon security conference. Thousands of participants, including hackers and policy experts, will be asked to explore how generative models from companies including Google, Nvidia, and Stability AI align with the Biden administration’s AI Bill of Rights announced in 2022 and a National Institute of Standards and Technology risk management framework released earlier this year.
Points will be awarded under a capture-the-flag format to encourage participants to test for a wide range of bugs or unsavory behavior from the AI systems. The event will be carried out in consultation with Microsoft, nonprofit SeedAI, the AI Vulnerability Database, and Humane Intelligence, a nonprofit created by data and social scientist Rumman Chowdhury. She previously led a group at Twitter working on ethics and machine learning, and hosted a bias bounty that uncovered bias in the social network’s automatic photo cropping.
The AI Now Institute, a nonprofit that has advised lawmakers and federal agencies on AI regulation, argued in a report released last month that because systems like ChatGPT can be fine-tuned for a range of uses, they deserve more regulatory scrutiny than previous forms of AI.
Sarah Myers West, managing director of the AI Now Institute and a coauthor of that report, says the renewed interest in AI by federal regulators is welcome. But she says it remains to be seen how meaningful their actions will be. “We just can’t afford to confuse the right noises for enforceable regulation right now,” West says.
She is also wary of how tech companies seeking profits with AI appear to be closely involved with the White House’s new attention to the technology. “We would be remiss to take an approach that leaves it to them to lead the conversation on what constitutes trustworthy and responsible innovation,” she says. “It’s for regulators and the broader public to define what responsible development of technology looks like.”
At a briefing yesterday, a White House official said that companies developing AI should be partners in ensuring the technology is used responsibly, adding that businesses also have a responsibility to make sure products are safe before they’re deployed in public.
Beyond companies developing AI for profit, federal agencies have some work to do on their own use of AI. A December 2022 study from Stanford University found that virtually no federal agencies had responded to a Trump-era executive order requiring them to provide AI plans to the public, and that only around half had shared an inventory of how they use AI. The White House Office of Management and Budget will release new guidelines for federal agency use of AI in the coming months.