
What Google’s ChatGPT scare tells us about the company on its 25th birthday

Google’s cautious approach to new tech may be a positive aspect of its maturity, but some believe it comes with real downsides, too.


Google, founded in 1998 by a pair of Stanford PhDs, turns 25 this year. In those 25 years, Larry Page and Sergey Brin’s company has dominated search and interactive advertising, developed major advantages in artificial intelligence, and become one of the most valuable brands on earth.

But what kind of a company has Google become over a quarter of a century? The obvious answer is that the company has grown huge: it’s worth more than $1 trillion, it employs more than 150,000 people, and it now does everything from self-driving cars to fiber-based phone service to robotics. But a series of events affecting the company in 2023 alone, including a major layoff and a panicked response to ChatGPT, suggests that the search giant has changed in more fundamental ways.

After swelling in size during the pandemic, Alphabet, Google’s parent, said in January it would shed 6% of its workforce, or about 12,000 people. Alphabet’s stock, after all, had slumped 30% over the past year (echoing the losses of other large tech stocks). The CEO of Alphabet and Google, Sundar Pichai, said the cuts offered a moment to “sharpen our focus, reengineer our cost base, and direct our talent and capital to our highest priorities.”

THE CHATGPT THREAT

By “highest priorities,” Pichai was referring to his company’s effort to imbue its products with artificial intelligence. That priority had come into sharp focus with the surprising success of OpenAI’s ChatGPT, whose astonishingly cogent conversation skills captured much of the world’s attention after its launch in late 2022. This came as a jolt to Google, which invented the kind of large language model that powers ChatGPT and pronounced itself an “AI-first” company in 2017, and yet somehow found itself ceding the mantle of “AI leader” to OpenAI, at least in the public eye.

Google also knew that Microsoft would soon announce that a version of ChatGPT would be built into Bing search. And some early ChatGPT users were already saying that bot-based search is a better way of fetching information from the web than “googling.” This hit close to home, as Google makes much of its revenue from search advertising. The threat was so serious to Google that Pichai called in founders Page and Brin, both of whom had retired from day-to-day management of the company in 2019, to discuss strategies for countering the OpenAI threat.

BETTER THAN “GOOGLING”?

Google, some analysts say, had only made its own search engine more ripe for disruption. Google search results were once a crisp list of links, ordered by relevance, for the user to choose from; now the results are in many cases seriously muddled by ads masquerading as search results.

“If you think about Google, search really hasn’t innovated in about 20 years,” New York University marketing professor Scott Galloway said on a recent episode of the podcast GZERO World. “Remember when you initially did a Google search 20 years ago and there were two blue shaded first returns that said ‘ad’? Now they’ve taken those shades away, and sometimes 60% or 70% of the first page is not necessarily a place that takes you to the best answer, but it takes you to a place that Google can further monetize.”

A search bot, on the other hand, might return a more concise answer that’s more aligned with the intent of the search—and one less bloated with ads and self-promotional materials.

“We need a narrower, more trustworthy set of answers,” says Near Media cofounder Greg Sterling, who has worked in the search world since Google’s inception. “There’s just way too much information, and there’s just so much crap online.”

THE INNOVATOR’S DILEMMA

Galloway says there may have been another, deeper, reason. “It’s a classic innovator’s dilemma,” he says. “Google had this [chatbot] technology; Google developed a lot of it, but doesn’t want to undermine or disrupt an unbelievable $150 billion tollbooth business model and give people the best answer. It wants to give them a lot of answers.”

Google had already been developing its own generative chatbots when ChatGPT was released, but reportedly feared incurring liability if one of the bots libeled someone, or violated privacy, or violated a copyright.

Google’s search model is very likely safe in the near term, in part because reaching for Google has become muscle memory for millions of people. Also, Microsoft’s new Bing Chat has not yet been released to the online public, and early users have given it mixed reviews, at best. But things can change, warned analyst Ben Thompson in a recent edition of his Stratechery newsletter. “The problem, of course, is that the disruptive product gets better, even as the incumbent’s product becomes ever more bloated and hard to use—and that certainly sounds a lot like Google Search’s current trajectory.”

But Google didn’t respond to ChatGPT by announcing a realignment of its search results, nor did it announce plans for a chatbot within its search engine. The company instead said it would “recalibrate” the level of risk it’s willing to take when releasing new generative AI tools to the public. It then hurried to announce “Bard,” a free-standing generative AI chatbot, one day before Microsoft could announce its ChatGPT-based Bing Chat.

You’d be forgiven for thinking that the ChatGPT episode made Google look flat-footed and reactive. But Tim Bajarin, president of Creative Strategies and a longtime Valley analyst, says Google may have been caught off guard, like many of us, by how quickly ChatGPT caught on. Bajarin says Google knew it would have to have a conversation with the public about generative AI at some point but didn’t anticipate having to do it so soon.

Bajarin says Google’s hesitation to release its own chatbots is not the sign of a sluggish company, but rather of a mature one with real concerns about the safety of the technology.

“This is one of those Pandora boxes,” he says. “If you let that [technology] out without guidelines and guardrails, you have a serious problem.”

“They have their own serious AI ethicists inside Google, and their [issues] are all valid issues,” Bajarin says. “Maybe some of it was driven by legal, which is why it might have taken longer to make a decision.”

Nor was Google’s hesitance on generative AI a sign of any lack of agility or innovation at the company, Bajarin says. Google remains an R&D powerhouse, he adds, and the company’s advertising and cloud businesses pay for its many talented researchers and engineers.

“They are just a bit more calculated in what they bring to market,” Bajarin says, “[especially] around anything that will have a dramatic impact.”

JUST ANOTHER PUBLIC COMPANY

Google’s cautious approach to new technology may be a positive aspect of its maturity, but some believe it comes with real downsides, too.

For much of its history, Google was known for its progressive and principled culture and leadership. It took the lead among tech companies in reducing carbon emissions. It cared about the work lives of its employees. It created disruptive technology products, like Maps, that made people’s lives easier or more productive. Even its mantra, “Don’t be evil,” suggested a company that took its responsibilities seriously. For many years the company was a magnet for talented young computer scientists from all over the world: It was a fast-growing place with lots of resources where employees could spend time on special projects, or “moonshots.”

But Google is a very different place than it was five years ago, or even two years ago, says Near Media’s Sterling. Now Google is a big company. Even after the layoffs, it employs more than 170,000 people. And it’s a massively profitable company with a $1.15 trillion market cap. But some of its founding ideals have faded, Sterling says, including the way it regards users.

“They still have some sort of consumer-facing mission statement (“to organize the world’s information and make it universally accessible and useful”), but I think the company has veered substantially away from that and they now see the consumer as a means to an end, which is to keep the revenue flowing and growing,” Sterling says.

“There are very well-intentioned people within Google, and Google still is a company that in many ways is more idealistic than a lot of American public companies,” he says. “But it has pretty dramatically changed, and there’s multiple dimensions to that . . . from the productivity of the company to their objectives, to the employee experience and so forth.”

Above all, Sterling says, Google is beholden to the markets, a fact made apparent by the recent layoffs.

“It’s almost symbolic: They’re doing this to show investors that they’re cost conscious and that they’re taking steps to reduce overhead, to improve profitability,” Sterling says. “And the investors respond when the layoffs are announced and stock goes up. . . . This behavior is really reflective of a company that has kind of abandoned many of its core early principles, in terms of its commitment to employees and its culture.”


ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.
