Hany Farid, a professor at UC Berkeley’s School of Information, says that these issues are largely predictable, particularly when companies are jockeying to keep up with or outdo each other in a fast-moving market. “You can even argue this is not a mistake,” he says. “This is everybody rushing to try to monetize generative AI. And nobody wanted to be left behind by putting in guardrails. This is sheer, unadulterated capitalism at its best and worst.”
Hood of CCDH argues that Google’s reach and reputation as a trusted search engine make the problems with Bard more urgent than those of smaller competitors. “There’s a big ethical responsibility on Google because people trust their products, and this is their AI generating these responses,” he says. “They need to make sure this stuff is safe before they put it in front of billions of users.”
Google spokesperson Robert Ferrara says that while Bard has built-in guardrails, “it is an early experiment that can sometimes give inaccurate or inappropriate information.” Google “will take action against” content that is hateful, offensive, violent, dangerous, or illegal, he says.
Bard’s interface includes a disclaimer stating that “Bard may display inaccurate or offensive information that doesn’t represent Google’s views.” It also allows users to click a thumbs-down icon on answers they don’t like.
Farid says the disclaimers from Google and other chatbot developers about the services they’re promoting are just a way to evade accountability for problems that may arise. “There’s a laziness to it,” he says. “It’s unbelievable to me that I see these disclaimers, where they are acknowledging, essentially, ‘This thing will say things that are completely untrue, things that are inappropriate, things that are dangerous. We’re sorry in advance.’”
Bard and similar chatbots learn to spout all kinds of opinions from the vast collections of text they are trained with, including material scraped from the web. But there is little transparency from Google or others about the specific sources used.
Hood believes the bots’ training material includes posts from social media platforms. Bard and others can be prompted to produce convincing posts for different platforms, including Facebook and Twitter. When CCDH researchers asked Bard to imagine itself as a conspiracy theorist and write in the style of a tweet, it came up with suggested posts including the hashtags #StopGivingBenefitsToImmigrants and #PutTheBritishPeopleFirst.
Hood says he views CCDH’s study as a type of “stress test” that companies themselves should be doing more extensively before launching their products to the public. “They might complain, ‘Well, this isn’t really a realistic use case,’” he says. “But it’s going to be like a billion monkeys with a billion typewriters,” he adds, referring to the surging user base of the new-generation chatbots. “Everything is going to get done once.”
Updated 4-6-2023 3:15 pm EDT: OpenAI released ChatGPT in November 2022, not December.