In a landmark decision following years of litigation, Google has been deemed a monopolist. Last Monday, U.S. District Judge Amit Mehta ruled that the tech giant violated antitrust law to maintain its monopoly position in the online search market. The monopolistic conduct at issue includes spending hefty sums to be the default search engine on browsers; for example, Google paid Apple around $18 billion in 2021 to be the automatic search engine on Safari. While the finding established Google’s liability, Judge Mehta has yet to determine remedies, which could range from prohibiting certain practices to breaking up the business. Google has already declared that it will appeal the decision, leaving its ultimate fate uncertain.
While the ruling is significant for a variety of reasons — bolstering other antitrust cases against big tech companies and redefining antitrust jurisprudence for modern digital markets — it is critically important to contextualize the case within the AI revolution and understand its implications for AI markets. If it stands, the ruling will dramatically shape how we assess the harms of consolidation in AI markets, create new possibilities for entrants and rivals, and underscore the need to revamp competition policy for a changing technological and economic environment.
Background
After decades of relative obscurity, antitrust enforcement has seen a resurgence, spurred by the 2008 crash and mounting evidence of increasing consolidation, higher prices, and stagnant wages, and culminating in the Biden administration’s notably aggressive competition policy. Many of these concerns arose from skepticism towards what has been affectionately termed ‘Big Tech’ and what others have less affectionately called the modern-day robber barons, alluding to the Rockefellers and Carnegies of the industrial Gilded Age. In his book The Curse of Bigness, Tim Wu, prominent scholar and former Special Assistant to the President for Technology and Competition Policy, calls our era the ‘New Gilded Age.’ Accordingly, the Federal Trade Commission (FTC) and Department of Justice (DOJ) Antitrust Division have ambitiously launched cases against several ‘Big Tech’ firms, including Google, Meta, Apple, and Amazon.
International enforcers have taken a similar posture. The European Union (EU) passed the landmark Digital Markets Act (DMA) in 2022, and the United Kingdom (UK) followed with parallel legislation, the Digital Markets, Competition and Consumers Act (DMCCA), which expands the powers of its Competition and Markets Authority (CMA). These historic regulatory moves aim to adapt competition law to the new paradigms of 21st-century commerce by identifying key “gatekeepers”—specifically, Alphabet, Amazon, Apple, ByteDance (TikTok), Meta, and Microsoft—and requiring that they ensure transparency, interoperability, and data portability (more on that later) in their products.
The AI Revolution
Aside from general concern about the gatekeeping potential in digital markets, burgeoning AI markets are attracting particular scrutiny. For instance, NVIDIA — the semiconductor giant — has recently found itself under the antitrust microscope, as DOJ lawyers investigate whether its acquisition of Run.ai, reportedly valued at around $700 million, is anticompetitive. Similarly, the FTC began investigating the partnership between Microsoft and ChatGPT-developer OpenAI early this year. Notably, Microsoft gave up its observer seat on OpenAI’s board soon after, likely as a result of this heightened scrutiny. Antitrust authorities, policymakers, and experts alike have expressed profound concerns about how AI could exacerbate economic inequality and entrench the position of dominant firms.
Just last month, US and European antitrust enforcers joined forces to publicly outline their concerns about competition in the AI industry. The FTC, DOJ, CMA, and European Commission (EC) together described how structural features inherent to AI development might facilitate harm to consumers, workers, and businesses. In a New York Times op-ed, FTC Chair Lina Khan described how access to key inputs — such as vast swaths of data and immense compute power — can serve as an entry barrier in AI markets, and how AI might facilitate consumer harms such as collusive behavior, price discrimination, and fraud.
The same companies that have dominated search and social media appear set to dominate AI. The Google decision is the first of the Big Tech cases to be decided, and it will have massive implications for how antitrust authorities handle potentially monopolistic practices in the AI space.
AI Search Wars
While the DOJ may have succeeded in proving exclusionary behavior on Google’s part in the online search market, the door is still wide open for anticompetitive conduct in the next important industry: generative AI. Reports have pointed to conversations between Apple and Google about using Gemini—the latter’s generative AI model—on iPhones in a manner similar to Google search’s default status on Safari (although it should be noted that Apple and OpenAI have also announced a partnership to integrate ChatGPT into iOS experiences).
Through the exclusionary contracts that Google has built with firms that hold tremendous market power, like Apple, the company has accumulated large pools of data, bringing with it an inherent structural advantage over its competitors. Specifically, data exhibits economies of scale (efficiencies to production at scale). In economics jargon, this market features ‘indirect network effects’: the product becomes more valuable to each consumer as more people use it, not because users interact with one another, but because greater usage improves the product itself. The more that people use Google’s search algorithm, the more optimized it becomes, which incentivizes even more consumers to use the product. Each day, Google receives nine times as many searches as all of its rivals combined, and over 90% of unique search phrases are seen only by Google. Google’s monopoly power—acquired through exclusionary contracts—triggered a self-reinforcing cycle, producing high-quality search results but causing an inevitable market convergence onto its algorithm.
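One stylized way to express this feedback loop, offered purely as an illustrative sketch rather than anything drawn from the ruling, is to write $u_t$ for the user base in period $t$, $D_t$ for accumulated query data, and $q_t$ for result quality:

$$
D_{t+1} = D_t + \alpha\, u_t, \qquad q_{t+1} = f(D_{t+1}), \qquad u_{t+1} = g(q_{t+1}), \qquad \text{with } f' > 0,\ g' > 0.
$$

More usage adds data, more data raises quality, and higher quality attracts still more usage, so even a modest initial edge in users, such as one secured through default placement, compounds into a durable lead.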
Data as an Entry Barrier
Large data sets are a critical input for AI models. Microsoft CEO Satya Nadella testified at trial that Google could use its large swaths of user data to train its AI models better than any rival could, threatening to endow Google with an unassailable advantage entrenching its dominance. In this way, the advantages that large companies like Google already enjoy in access to vast troves of data, derived from their monopolies in domains like search, could translate into new monopolies over emerging technologies like generative AI.
Antitrust authorities have long recognized that in digital markets where data is a key input, this structural tendency can act as a worrisome entry barrier with the potential to cement the control of incumbents. As former FTC Commissioner Terrell McSweeny said:
“It may be that an incumbent has significant advantages over new entrants when a firm has a database that would be difficult, costly, or time-consuming for a new firm to match or replicate.”
The Google decision sets a strong precedent for future cases about the competitive structure of AI markets by recognizing the potential for harm inherent in this feature of the economics of data.
A Changing Landscape
This case also signals shifting tides in American competition policy. Vanderbilt Law School Professor Rebecca Haw Allensworth called the decision “seismic,” saying:
“It’s a sign that the tide is changing in antitrust law generally away from the laissez-faire system that we’ve had for the last 40 years.”
For the last four decades, antitrust jurisprudence has depended heavily on classical price theory. In other words, antitrust cases have generally relied on short-term prices as the metric of anticompetitive harm; a direct link to higher consumer prices was the near-exclusive means of demonstrating a violation of antitrust law.
But that is not the theory of harm implicated here. Google search is… well, free. However, scholars and policymakers have highlighted that platform markets deserve a unique lens of analysis. For instance, digital platforms often offer low prices on one side of the market (e.g., to consumers) but either sell user data in other markets (a major privacy concern) or extract monopoly rents on the other side of the market (e.g., from sellers). Additionally, by locking out rivals, they preclude the full benefits of free market competition—including vigorous innovation—from reaching consumers; loss of innovation was identified as the principal harm in the Google case. These may very well be the principal set of issues on which cases related to AI turn, signaling an evolution of antitrust law for a new paradigm of commerce.
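To make the platform-market point concrete, consider a deliberately simplified sketch (the notation here is illustrative, not drawn from the case): a platform charges consumers a price $p_c = 0$ and advertisers or sellers a price $p_s$, with demand on the paid side, $A(u)$, growing in the consumer user base $u$:

$$
\pi = \underbrace{p_c \cdot u}_{=\,0} \;+\; p_s \cdot A(u) \;-\; c(u).
$$

A dominant platform can hold the consumer price at zero while setting $p_s$ well above the competitive level, so the harm appears not in the sticker price consumers see but in extracted rents, degraded quality, and forgone innovation.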
Legislation and Possibilities
Where might we go from here?
With respect to data at scale as an entry barrier, some have suggested looking beyond current laws and implementing new regulations. For instance, the newly minted European Digital Markets Act requires gatekeepers to provide for data portability. The concept is that consumers can take their data from one provider to another, in the same way they can take their telephone number from one carrier to another as a result of the Telecommunications Act of 1996. This would alleviate the economies-of-scale problem whereby an algorithm becomes particularly successful by accumulating data from repeated use, creating a convergence on one model. Companies and scholars alike have raised concerns about how such proposals might negatively impact consumers’ privacy. However, according to the aforementioned joint statement by the FTC, DOJ, CMA, and EC, such privacy claims would be closely scrutinized. Perhaps an American rendition of the sweeping Digital Markets Act would better serve these goals.
One piece of (bipartisan!) legislation to look out for is the CREATE AI Act, introduced by Senators Heinrich, Rounds, Booker, and Young. The bill would create the National Artificial Intelligence Research Resource (NAIRR), a cloud computing resource meant to provide free or low-cost access to datasets, compute, and other research infrastructure. The Senate Artificial Intelligence Caucus, supporting the bill’s passage, writes:
“Companies like Google and Meta invest tens of billions of dollars in research and development annually, and large tech companies dwarf others in their AI investment. Control over the direction of leading-edge AI has become extremely centralized due to the significant data and computation requirements for modern AI. Even well-resourced universities are significantly outpaced by industry in AI research.”
This bill would be a major push toward democratizing access to the costly digital infrastructure necessary for building AI models.
Conclusion
Last week’s Google decision is one small part of a story just beginning to unfold. As it stands today, virtually every AI startup and research lab depends in some way on the computing infrastructure or consumer market reach of a handful of Big Tech firms. And the potential harms are more pervasive than those of traditional market concentration. SEC Chair Gary Gensler has warned that reliance on a small number of foundation models at the heart of the AI ecosystem poses a systemic risk, in which a single point of failure could spark a financial crisis. But perhaps even more fundamentally, as the AI Now Institute writes:
“Relying on a few unaccountable corporate actors for core infrastructure is a problem for democracy, culture, and individual and collective agency. Without significant intervention, the AI market will only end up rewarding and entrenching the very same companies that reaped the profits of the invasive surveillance business model that has powered the commercial internet, often at the expense of the public.”
At the outset of the Web 2.0 era in the mid-2000s, weak competition policy was unprepared for and unsuited to the novel technological environment, resulting in a handful of monopolies dominating the internet. At the outset of the generative AI era, we ought to be proactive about ensuring those same monopolies do not quash competition, stifle innovation, and further entrench their dominance. Last week’s Google decision is a step towards course correction, but it will take all hands on deck to ensure that AI, with its unimaginable potential, serves humanity and the common good.