Mark MacCarthy writes that the case law supports Federal Trade Commission Chair Andrew Ferguson’s charge that collaboration by social media companies on content moderation practices would be anticompetitive collusion. However, the author argues that open and transparent cooperation might actually benefit a troubled internet, and Congress should consider carving out a content-neutral antitrust exemption for platforms, as it has done in the past for broadcasters.

This article is part of a series that explores the ways in which the Federal Trade Commission can use its powers to protect free speech.


Under the new leadership of Chair Andrew Ferguson, the Federal Trade Commission has issued a request for information raising fundamental issues about the role of consumer protection and antitrust in preserving free speech on social media.

Based in part on a December 2024 concurring statement from FTC Commissioner Melissa Holyoak, the request for information lists several abusive social media content moderation practices, such as opaque internal procedures, failure to provide information about alleged standards violations, and a lack of meaningful opportunities to appeal content moderation decisions. These practices, the notice says, may violate platforms’ terms of service and flout the reasonable expectations of the platforms’ users. In effect, the agency is suggesting that the failure to maintain a transparent and accountable content moderation program is an unfair or deceptive act or practice.

In my book Regulating Digital Industries, I urged Congress to adopt such a consumer protection approach to content moderation regulation. It seems to embody the wise principle that a social media company should say what it will do and do what it says with respect to its content moderation activities.

I have also previously argued that the agency should establish and enforce such a regime under its existing authority to prohibit unfair and deceptive acts and practices. But given judicial reluctance, expressed for instance in Loper Bright Enterprises v. Raimondo, to countenance extensions of agency authority where a statute is unclear, it might now be prudent to wait until Congress clearly grants the agency this authority. Still, the agency would lose nothing by moving in this direction under its current leadership.

Ferguson’s concerns about social media content moderation practices extend beyond consumer protection to matters of anticompetitive conduct. In his own December 2024 concurring statement, he warned: “If the platforms colluded amongst each other to set shared censorship policies such an agreement would be tantamount to an agreement not to compete on contract terms or product quality.”

His recitation of the case law is persuasive. The Supreme Court’s 1986 decision in FTC v. Indiana Fed’n of Dentists concerned a dental association that imposed a public rule barring its members from providing X-rays to insurance companies, for fear that the insurers would use them to reject claims and deprive patients of needed services. The court said such an agreement “cannot be sustained” under the rule of reason that guides the interpretation of the Sherman Act’s prohibition of agreements in restraint of trade.

An earlier Supreme Court ruling, the 1978 decision in National Soc’y of Prof. Engineers v. United States, concerned a professional engineering society whose canon of ethics prohibited members from submitting competitive bids for engineering services, for fear that price competition would produce inferior engineering work endangering public safety. The court rejected the agreement as anticompetitive, holding that the justification that competition might endanger public safety was “a frontal assault on the basic policy of the Sherman Act.”

These cases support the claim that a concerted effort by social media companies to set and enforce common content moderation standards would be anticompetitive collusion.

Of course, there is no evidence that social media companies have engaged in this kind of collusion, certainly not in the public and proud way the trade associations did in the cases Ferguson cites. There is no industry organization developing and enforcing content standards. The platforms regularly exchange information about spam, child sexual abuse material, terrorist content, and cybersecurity threats. Their content decisions in these and other areas might count as conscious parallelism, but they are hardly a conspiracy in restraint of trade. The February request for information might produce evidence of secret collusion among social media companies, but that seems extremely unlikely.

But would open and public collaboration on content standards really be a bad idea? In an article on broadcast self-regulation, I describe how the National Association of Broadcasters maintained and enforced a content code from the 1930s until it abandoned the code in the early 1980s under antitrust pressure from the Department of Justice. In the code’s absence, Congress in 1990 granted broadcasters a temporary exemption from the antitrust laws to deal with TV violence. When the companies failed to act, Congress, in the 1996 Telecommunications Act, required the Federal Communications Commission to prescribe “guidelines” for video programming that contains “sexual, violent, or other indecent material about which parents should be informed before it is displayed to children.” The FCC was to act only if the industry did not. In January 1997, the industry adopted a voluntary content code, which the FCC found acceptable.

This is the content rating system used today in the advisories that appear in the upper left-hand corner of most video programming, and it powers the V-chip that is mandated in TV sets.

This history of broadcast self-regulation and Ferguson’s concurring statement remind us that it would take an antitrust exemption for social media companies to come together to develop common standards against the hate speech, disinformation, and other harmful material that mar online discourse. Such an exemption is needed not to excuse past collusive conduct, for which there is no evidence, but to make socially desirable collaboration possible in the future.

Of course, to avoid free speech problems, such an exemption would have to be content neutral. In its 2024 decision Moody v. NetChoice, the Supreme Court vindicated the full First Amendment rights of social media owners. It would be unconstitutional for Congress to grant platforms an antitrust exemption to collude only against speech disfavored by conservatives, for instance by conditioning the exemption on social media companies using it to develop standards against content praising diversity, equity, and inclusion programs.

But a content-neutral exemption from the antitrust laws, allowing social media companies to construct and operate a voluntary code of speech, might be just the thing to restore order to a disordered online information space. It would prevent dangerous competition in which social media companies distribute harmful content to attract and hold an audience, regardless of the effect on democracy, public health, or child safety. Congress should hold hearings on developing such an exemption and consider moving a bill embodying it through the legislative process. Rather than investigating social media platforms for collusive content moderation practices, the most beneficial policy the government can pursue is to carve out an antitrust exemption for them.

Author’s Disclosures: The author reports no conflicts of interest. You can read our disclosure policy here.

Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.