Matt Perault writes that there is little indication that Big Tech investments in artificial intelligence startups are harming competition. In fact, the opposite is likely true. Antitrust regulators should instead focus their attention on the real threat to AI competition: rules and regulations that will make it harder for startups to compete with large tech companies.

Editor’s Note: This article is part of a symposium that asks experts to evaluate the anticompetitive harms of Big Tech investments in AI startups in light of recent investigations from antitrust agencies on both sides of the Atlantic. In the coming days, we will publish contributions from Vivek Ghosal, John Kirkwood, and Stacey Dogan.


Since the launch of ChatGPT in November 2022, a new generation of technology companies has gained prominence, with companies like OpenAI and Anthropic now standing alongside the search, social media, gig economy, and video-sharing stalwarts of the tech sector. But will the artificial intelligence ecosystem of tomorrow be as competitive as the AI ecosystem of today? The answer is uncertain, and the uncertainty is widely shared.

This uncertainty has prompted antitrust agencies to scrutinize the investments that large technology platforms like Microsoft, Google, and Amazon have made in AI insurgents like OpenAI and Anthropic. The Federal Trade Commission initiated a market study of the deals. The United Kingdom’s Competition and Markets Authority is investigating Google’s and Amazon’s investments in Anthropic, as well as Microsoft’s relationship with OpenAI. The European Commission reviewed the Microsoft-OpenAI relationship and continues to monitor it closely even after concluding that it does not constitute a merger.

The AI investments have attracted so much attention from antitrust enforcers due in part to a concern that history could repeat itself. If enforcers were arguably slow to take action in search, social media, and cloud markets, then perhaps they should be more active in policing AI. That fear is what spurred the agencies into action on these investments.

Enforcers should continue to monitor these investments and the competitiveness of AI markets, but focusing on them distracts from the development that poses a greater threat to competition in AI: policy proposals that will make it harder for smaller companies with fewer resources to compete with larger companies with deeper pockets. For those who care about preserving competition in AI, the deals to fear aren’t the ones that occur in boardrooms; they’re the political deals made by legislators and regulators.

AI’s future might be different from tech’s past

Antitrust agencies seem motivated by their anxiety that AI is on a similar trajectory to past technology booms, but AI markets are different.

First, there is active competition not only between Big Tech companies and new AI entrants like OpenAI and Anthropic, but also among the Big Tech companies themselves. Even before the rise of AI, the level of intra-Big Tech competition had been a topic of debate, with enforcers claiming that Big Tech companies operate in separate markets (e.g., Meta does not compete with Google because Meta’s market is social networking services and Google’s is search), and tech companies countering that companies like Google, Meta, and TikTok compete with one another for user attention and for advertisers.

Whatever the merits of each side, the boundaries between Big Tech firms have become blurrier with the emergence of AI. Google, Meta, Anthropic, and OpenAI have all developed and deployed their own large language models. Some of their features are nearly identical—such as the ability to generate images or to provide information in response to user prompts via a chatbot. Even for features that used to be more distinct, such as the search functionality in Facebook and Google Search, AI has been integrated in similar ways, with both products offering chatbot-type search results.

Companies have responded quickly to one another’s product developments, a sign of the intensity of the competition among them. Anthropic released its Claude model a few months after ChatGPT’s release, and Google and Meta scrambled to release their own competing models, Gemini (formerly Bard) and Llama, respectively. Google rushed to integrate AI into Google Search soon after Microsoft launched ChatGPT-fueled functionality in Bing. In the recent decision in the Google Search antitrust case, which otherwise found that Google monopolized the market for general search services, the judge referred to Google’s response to Microsoft’s integration of AI into Bing as “a clear example of Google responding to competition.” No firm dominates AI yet.

Unlike some prior technologies, where switching costs might be higher due to network effects, users can easily switch between LLMs. LLMs currently lack social features and do not store much data aside from prompt history, so users do not leave much behind when they shift from one service to another. In addition, many users multihome. Entering a prompt in ChatGPT does not prevent a user from entering the same prompt in Gemini or Claude. Some users may run the same prompt through multiple LLMs and compare the outputs to see which are better. Someone could even integrate text from different LLMs into a single work product.
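
To make the multihoming point concrete, here is a minimal sketch of sending one prompt to two providers and comparing the replies. It assumes the official `openai` and `anthropic` Python SDKs, API keys set in the environment, and model names that are purely illustrative; nothing about either service prevents this kind of side-by-side use.

```python
# A minimal sketch of LLM "multihoming": one prompt, two providers, outputs
# compared side by side. Assumes the official `openai` and `anthropic` Python
# SDKs are installed and OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the
# environment. Model names are illustrative and change over time.
from openai import OpenAI
from anthropic import Anthropic

prompt = "Draft a one-paragraph summary of the history of antitrust law."

# Provider one.
openai_reply = OpenAI().chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

# Provider two; nothing about the first call constrains this one.
anthropic_reply = Anthropic().messages.create(
    model="claude-3-5-sonnet-20240620",  # illustrative model name
    max_tokens=500,
    messages=[{"role": "user", "content": prompt}],
).content[0].text

# The user compares the outputs and keeps whichever is better, or blends both.
for name, reply in (("OpenAI", openai_reply), ("Anthropic", anthropic_reply)):
    print(f"--- {name} ---\n{reply}\n")
```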

Competition also comes from the types of AI models that companies are developing. When Meta launched Llama, it aimed not only to compete on the quality of its LLM, but also to offer an open source alternative to other companies’ proprietary models. Open source offers benefits for competition, as FTC Chair Lina Khan emphasized in recent remarks and as the Commerce Department’s National Telecommunications and Information Administration stated in a recent report. Open source tools lower barriers to entry like the high costs of compute and talent, enabling developers with fewer resources to build products that compete with those of better-resourced rivals. For that reason, the investments of Big Tech firms like Meta and Microsoft in open source models—coupled with venture capital funding for open source model development—seem likely to fuel future competition from startups that develop or utilize those models. The investment in open source is one reason that competition is not only fierce today, but also shows no signs of abating.

The Microsoft, Google, and Amazon investments may actually make the market more competitive. OpenAI raced to an early lead with the public release of its chatbot product, enabled in part by Microsoft’s investment, which provided the company with financial and technical resources. Google’s and Amazon’s investments in Anthropic likely helped improve the quality of Anthropic’s products, and they also helped solidify a strong, well-resourced competitor to ChatGPT. The Anthropic investments also ensure that leading LLMs use a range of cloud services in their infrastructure, not just Microsoft’s Azure.

Ironically, the investments seem to have enabled the AI upstarts to compete more effectively with Big Tech, even in domains where Big Tech companies have long led. OpenAI recently announced its intent to launch a search product, which will compete with Microsoft’s Bing and Google Search. As AI markets develop, there will likely be more examples of product convergence, driven not only by Big Tech but also by the emerging AI companies. And of course, Microsoft, Google, and Amazon have continued to develop their own AI tools that compete with OpenAI and Anthropic. Their investments in other AI firms have not prevented them from also investing in their own AI products. As Microsoft has implied in its public statements about its AI product and business plans, the strategy seems to be to try several different approaches to ensure that it can remain competitive in AI, no matter how the market develops.

Talent acquisitions also have a strong procompetitive rationale. Intense competition to acquire top talent is common in the tech sector, and particularly in emerging areas like AI where the talent pool is more limited. Those acquisitions strengthen the hiring firm. When Microsoft hired top talent from Inflection, Amazon hired the co-founders of Adept, and Google acquired the co-founders of Character.AI, they improved their own AI teams, bolstering competition among them.

Acquisitions also fuel competition in the labor market, since the possibility of a desirable role at an established tech company can create additional incentives for employees at smaller firms. The FTC took aim at noncompete clauses to try to increase labor mobility so that “Americans have the freedom to pursue a new job”; it would be odd if the FTC made an exception to this policy objective for AI markets, imposing barriers that make it harder for employees to move into jobs they desire.

One risk of government intervention to restrict the ability of large platforms to invest in smaller AI companies at this stage in the development of the AI market is that the medicine could be worse than the disease. Without financial and technical investments from tech companies and other investors, smaller companies might not have the resources they need to compete with larger ones. Limitations on investments, mergers, and open-source model development would also chill venture capital funding, since AI startups that could not receive cash or in-kind resources from large tech platforms would be less attractive investments for VC firms. And restrictions on future investments would give the AI upstarts that have already received large investments—like OpenAI and Anthropic—a massive advantage over potential competitors barred from accepting comparable backing.

Learning, not litigation

Of course, even if the market is competitive today, enforcement agencies should not sit on the sidelines. Enforcers should continue to be vigilant in monitoring competitive dynamics in AI. The European Commission has already taken steps in this direction, announcing that it is continuing to monitor the effects of exclusivity provisions in the Microsoft-OpenAI deal. And enforcers should look at the long-run impacts of talent acquisitions to evaluate whether they harm consumers, such as by reducing the quality of AI products or slowing the pace of innovation.

Enforcers should use this moment as an opportunity to learn. Understanding AI will be a cornerstone of future policy and enforcement activity—including passing new laws and initiating antitrust cases—to ensure that the markets are competitive. When new search, social media, e-commerce, and gig economy companies emerged, legislators and regulators often displayed a markedly limited understanding of the products and the businesses. Thin knowledge makes it difficult to govern the sector effectively, either by passing evidence-based legislation or by bringing cases rooted in a robust theory of harm.

Even if there is no antitrust theory of harm that would justify bringing a case, blocking an investment, or unwinding a merger today, antitrust authorities can be active in using investigations and market studies to learn more about AI technologies and business models. These types of learning exercises can help ensure that the future of government oversight in technology markets looks different than the past.

Enforcers should also continue to scrutinize deal terms and post-deal conduct for issues that might raise concerns, such as exclusivity provisions or restrictions on access to valuable inputs such as API usage, cloud services, or chips. Ongoing monitoring will ensure that if evidence emerges that deal terms or platform policies problematically reduce competition, enforcers can respond.

The real danger to AI competition: public policy that harms startups

Most importantly, regulators concerned about competitive dynamics in AI markets should extend that vigilance beyond corporate investments to focus on the deals that pose the greatest threat to AI competition: agreements between lawmakers that make it harder for startups to compete with larger companies.

There is already compelling evidence of the emergence of political coalitions that will help suppress AI competition. Proposals to require risk assessments, safety disclosures, and even licenses to operate have been floating around Capitol Hill, and many of them have received bipartisan support. Federal agencies have also developed rules to govern AI, taking their cues from the White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. At the state level, Colorado passed a new law requiring developers of “high risk” AI to use “reasonable care” to protect consumers.

While well-intentioned, these proposals could reduce competition in AI. Imposing reporting and assessment requirements means that companies will need to divert legal and product resources to these tasks. For instance, the voluntary standards developed by NIST provide useful guidance to companies with the capacity to implement detailed compliance recommendations, but the dizzying number of detailed “best practices” would be challenging for many small companies to discern and implement. Singling out AI for differential treatment—such as by requiring AI-generated content to be labeled, even if similar content developed with other technologies does not require a label—will increase the costs of using AI relative to other tools.

Those costs might be relatively easy for larger companies to bear. Some large tech platforms employ thousands of lawyers to wade through complex regulatory requirements and figure out the optimal path for compliance. They can divert teams of engineers to build the systems needed to affix labels to AI content. But many startups would struggle, particularly when devoting substantial resources to compliance means decreasing their investment in the product development and monetization that is critical for them to keep pace with larger competitors.

Concerns about these competitive dynamics have caused some leaders in the anti-monopoly movement to express a preference for regulating how AI is used, rather than how it is developed. In opening remarks on a panel on AI regulation at the Stigler Center’s 2024 Antitrust and Competition conference, David Dayen, the executive editor of The American Prospect and the author of Monopolized: Life in the Age of Corporate Power, emphasized that new AI law is not needed because a long list of existing laws already applies to AI harms. Consumer protection law, civil rights law, and antitrust law can all be used to address potential harms from how AI is deployed. “There is no AI exemption to those laws,” he said.

One counterargument is that the value of new law should not be evaluated solely based on its impact on competition. Even if new rules limit competition and increase concentration, perhaps it is worth bearing those competitive costs in order to gain the benefits these new laws might provide. The stated goal of many of the governance proposals is to protect people from potential safety and security harms. To the extent new laws and rules would confer meaningful protections, the benefits of safety improvements might outweigh potential costs to competition.

The problem is that we know very little about how the benefits of these proposals will compare to their costs. While the Office of Information and Regulatory Affairs exists in the executive branch to evaluate the costs and benefits of agency rules, no comparable function exists in the legislative branch to evaluate the costs and benefits of legislation. This analysis would be useful in bolstering the evidentiary foundation that lawmakers use in assessing the tradeoffs of different policy proposals. And it might have uniquely significant value in AI, due to the limited amount we know about the impact of this new technology and efforts to regulate it.

In the absence of this information, lawmakers should be cautious about passing new laws that could harm competition in AI. AI markets are intensely competitive today. If the goal is to ensure they remain that way, policymakers should focus their attention on the deals most likely to harm competition: the deals between lawmakers that will make it harder for startups to compete.

Author Disclosure: Matt Perault is the director of the Center on Technology Policy at the University of North Carolina at Chapel Hill and a professor of the practice at UNC’s School of Information and Library Science. The Center on Technology Policy receives funding from foundations and the private sector, a list of which you can find here. Perault is also a consultant on technology policy issues; among his clients are firms in the tech industry.

Articles represent the opinions of their writers, not necessarily those of ProMarket, the University of Chicago, the Booth School of Business, or its faculty.