The age of AI is a time for antitrust
Google’s landmark monopoly trial could usher in a new wave of AI startups—or edge them out.
As Senate Majority Leader Chuck Schumer continues to work on “exceedingly ambitious” bipartisan legislation to regulate AI, he invited Mark Zuckerberg, Eric Schmidt, and Sundar Pichai to a closed-door, closed-press forum at the Capitol this month to help shape the process. That’s a far cry from what most observers hoped would be a more transparent and open process, and to call it concerning would be an understatement.
When leaders of anti-competitive tech monopolies are invited to hold court with one of the most influential leaders in Washington, D.C., bad things usually follow. Just consider how Senator Schumer single-handedly killed the bipartisan American Innovation and Choice Online Act (AICOA) last year after a historic lobbying blitz from Big Tech. The bill would have created rules of the road for Big Tech companies as they try to dominate new areas of competition like AI.
As AI continues to develop, power and control over it are consolidating, creating a dangerous state of play. A few companies like Google and Microsoft are already leveraging their existing dominance to control critical AI applications and data sets, further limiting consumer choice and tightening their vise grip on tech innovation in America. This is especially true for Google, which holds over 90% of the search market in both Europe and the United States.
Before we have the chance to collectively decide what AI should—and shouldn’t—mean for creators, our economy, and even humanity, a few companies are slated to dominate the industry and use it to cement their power. As The Economist recently argued, AI is poised to fortify big business, not upend it. Here’s what’s likely to happen unless we change course.
Bard is Google’s artificial intelligence chatbot—its version of ChatGPT, which a Pew survey recently found only 14% of Americans have used. As Microsoft’s Bing did with ChatGPT, Google is planning to hardwire Bard into its search engine after an initial beta launch, which means most Americans will get their first earnest experience of AI through Google search. The average user likely won’t recognize that Google has silently made Bard their default chatbot, stifling competition and innovation from the start. After all, how can any AI startup compete with a chatbot that’s embedded in every search query made through the world’s most visited website?
Allowing Google to hardwire its AI technology into its search engine is also a massive risk to the open web. When you search for a business, learn about a new topic, or shop for a product, Google plans to give you the answer it thinks you want—without clear or conspicuous attribution.
That has enormous implications. By remixing uncited answers copied from across the internet, Google is killing the original function of its search engine: referring users to other websites. Anyone who puts their own content on the web should be alarmed by the “plagiarism engine” that Google wants to impose on us. And that’s putting aside the genuine concern that the answers are sometimes objectively wrong.
Publishers face a Hobson’s Choice when it comes to “opting out” of services like Bard: Until recently, in order to remove your data from Bard, you had to entirely remove yourself from the index that powers Google’s search engine. If a website opts out, does its historical data Bard has presumably already trained on get purged? Could opting out of Bard hurt a website’s ranking in Google? Google doesn’t say.
Google’s rollout of Bard is the latest chapter of a well-worn Big Tech playbook. Big companies use their incumbent power to kill competition in developing technologies, thereby depriving consumers of choice and undermining innovation in the economy. But that’s Big Tech’s business model. Amazon has a well-documented track record of selling knock-off versions of its best-selling products—undercutting the small businesses that sell on its platform. And Google has been accused of distorting its search results to favor its own “Shopping” service. This track record of the largest players giving themselves preferential treatment to extend their dominance is not good news for AI startups hoping to offer fresh services that make us all less dependent on today’s tech giants.
To be clear, Big Tech can take steps toward openness. Meta recently announced its decision to open its large language model for commercial use by small companies. Actions like that are praiseworthy, but they’re few and far between. The more significant trend is the most prominent players locking up market share, leaving no opportunity for nascent technologies. Meta’s move highlights the urgent need for regulatory measures that encourage such openness and foster genuine competition in the AI landscape.
We could have avoided this situation had Senator Schumer called AICOA for a vote. If we had enacted AICOA last year, we would be in a better position to ensure users have choices regarding AI. For example, you could easily imagine a dropdown menu alongside the search bar allowing users to select which AI tool they want, enabling actual competition in the industry. Turkey has similar measures to ensure competing advertisers and sellers get a fair shake on Google’s search engine.
The good news is that we still have a window to get competition policy right. As lawmakers scramble to bring responsible oversight to AI, we don’t have to start from scratch. The most significant antitrust case of the century—U.S. v. Google—kicked off this month, and the recently reintroduced AICOA remains our best chance to reestablish a fair digital marketplace and spur entrepreneurship.
Antitrust reform is not a silver bullet when it comes to governing AI. But when everyone is talking about taking real action and wondering where to begin, it’s an important place to start.