Late last year, I attended an event hosted by Google to celebrate its AI advances. The company’s domain in New York’s Chelsea neighborhood now extends literally onto the Hudson River, and about a hundred of us gathered in a pierside exhibition space to watch scripted presentations from executives and demos of the latest advances. Speaking remotely from the West Coast, the company’s high priest of computation, Jeff Dean, promised “a hopeful vision for the future.”
The theme of the day was “exploring the (im)possible.” We learned how Google’s AI was being put to use fighting wildfires, forecasting floods, and assessing retinal disease. But the stars of this show were what Google called “generative AI models.” These are the content machines, schooled on massive training datasets and designed to churn out writings, images, and even computer code that once only humans could hope to produce.
Something weird is happening in the world of AI. In the early part of this century, the field burst out of a lethargy known as an AI winter, thanks to the innovation of “deep learning” pioneered by three academics. This approach to AI transformed the field and made many of our applications more useful, powering language translation, search, Uber routing, and just about everything that has “smart” as part of its name. We’ve spent a dozen years in this AI springtime. But in the past year or so there has been a dramatic aftershock to that earthquake, as a sudden profusion of mind-bending generative models has appeared.
Most of the toys Google demoed on the pier in New York showed the fruits of generative models like its flagship large language model, called LaMDA. It can answer questions and work with creative writers to make stories. Other projects can produce 3D images from text prompts or even help to produce videos by cranking out storyboard-like suggestions on a scene-by-scene basis. But a big piece of the program dealt with some of the ethical issues and potential dangers of unleashing robot content generators on the world. The company took pains to emphasize how it was proceeding cautiously in employing its powerful creations. The most telling statement came from Douglas Eck, a principal scientist at Google Research. “Generative AI models are powerful—there’s no doubt about that,” he said. “But we also have to acknowledge the real risks that this technology can pose if we don’t take care, which is why we’ve been slow to release them. And I’m proud we’ve been slow to release them.”
But Google’s competitors don’t seem to have “slow” in their vocabularies. While Google has provided limited access to LaMDA in a protected Test Kitchen app, other companies have been offering an all-you-can-eat smorgasbord with their own chatbots and image generators. Only a few weeks after the Google event came the most consequential release yet: OpenAI’s latest version of its own powerful text generation technology, ChatGPT, a lightning-fast, logorrheic gadfly that spits out coherent essays, poems, plays, songs, and even obituaries at the merest hint of a prompt. Taking advantage of the chatbot’s wide availability, millions of people have tinkered with it and shared its amazing responses, to the point where it’s become an international obsession, as well as a source of wonder and fear. Will ChatGPT kill the college essay? Destroy traditional internet search? Put millions of copywriters, journalists, artists, songwriters, and legal assistants out of a job?
Answers to those questions aren’t clear right now. But one thing is. Granting open access to these models has kicked off a wet hot AI summer that’s energizing the tech sector, even as the current giants are laying off chunks of their workforces. Contrary to Mark Zuckerberg’s belief, the next big paradigm isn’t the metaverse; it’s this new wave of AI content engines, and it’s here now. In the 1980s, we saw a gold rush of products moving tasks from paper to PC applications. In the 1990s, you could make a quick fortune by shifting those desktop products online. A decade later, the movement was to mobile. In the 2020s the big shift is toward building with generative AI. This year thousands of startups will emerge with business plans based on tapping into the APIs of those systems. The cost of churning out generic copy will go to zero. By the end of the decade, AI video-generation systems may well dominate TikTok and other apps. They may not be anywhere near as good as the innovative creations of talented human beings, but the robots will quantitatively dominate.