Stanford political scientist Francis Fukuyama, WIRED tech journalist and commentator Gilad Edelman, and law professors Ellen Goodman and Eric Goldman dig into the tension between the harms social media platforms pose to democracy and the First Amendment protections afforded to toxic speech, why antitrust might fall short as a remedy, and possible solutions beyond litigation and regulation.
The Big Tech and Freedom of Speech panel at the Stigler Center’s 2022 Antitrust and Competition Conference differed slightly but importantly from the rest of the event’s discussions. As the moderator, Binyamin Appelbaum of The New York Times, said in his introduction, “this panel is not about the health of markets, it’s about the health of democracy. Free speech is absolutely necessary to a healthy democracy.”
The discussion, which featured political scientist Francis Fukuyama of Stanford University, journalist Gilad Edelman of WIRED, and law professors Ellen Goodman of Rutgers University and Eric Goldman of Santa Clara University, proceeded from a sense that more competitive markets alone would not be sufficient to remedy the harms caused by social media, and that even antitrust enforcement would likely fall short in a number of ways. The participants divided remedies into two groups, structural and systems regulation, and pitched their preferred examples of each, such as the introduction of competition via third-party filters and the creation of social media professional norms akin to journalistic standards.
Social media has magnified the volume of information and speech we encounter every day, placing tech platforms in a powerful gatekeeping position: their content moderation decisions shape our speech and thereby the world. This concentrated power seems at odds with the functioning of a democratic system. Meanwhile, the nature of the platforms' business models has led to silencing via content removal, the chilling of speech by online mobs, epistemic erosion from misinformation, the disintermediation of news organizations (especially local news) from their audiences, and profound effects on youth mental health. Professor Fukuyama noted that while political harms are people's main concern when it comes to social media and speech, antitrust focuses only on economic harms. Thus, the panelists agreed, novel solutions are needed to address the problems social media poses for free speech.
As Ellen Goodman pointed out, “We’ve only been living in the world social media has made for about 12 years.” Yet the last meaningful media legislation, the Telecommunications Act of 1996, was passed 26 years ago and dealt with the convergence of telecom technologies. Finding the right regulatory framework for the harms stemming from social media is deeply challenging, and Goodman felt that even if a sound approach were developed, the US has little political will to pass legislation and lacks the regulatory infrastructure to implement any of the current proposals, especially compared to the EU.
Of the regulatory proposals that exist, Goodman delineated two main flavors: structural regulation, which includes breaking up platforms or otherwise changing their business models, and systems regulation, which takes industry structure mostly as it is and instead seeks to influence product design, transparency, data access, and codes of conduct. Goodman favored the latter: because such interventions are content- and technology-neutral, they raise fewer free speech concerns and are more future-proof.
This regulatory menu pits “hard law” interventions, such as data protection for users, data access for competitors and the public, and transparency about algorithmic processes, against “soft law” solutions: a circuit breaker to disrupt virality, wherein, for example, sharing highly viral material would require an extra step (an idea former President Barack Obama endorsed in a speech at Stanford the previous day); amplification of credible sources; and changes to platform design that create more common content.
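As a rough sketch of how such a virality circuit breaker might operate (a hypothetical illustration only; the threshold value and function names below are invented, not drawn from any platform or specific proposal), a reshare of sufficiently viral content would simply trigger one extra confirmation step:

```python
VIRALITY_THRESHOLD = 10_000  # hypothetical cutoff for "highly viral" content

def attempt_share(share_count: int, confirm) -> bool:
    """Allow ordinary reshares immediately; for highly viral posts,
    require one extra confirmation step (the 'circuit breaker')."""
    if share_count < VIRALITY_THRESHOLD:
        return True  # ordinary content shares with one click
    # The friction step: the user must actively confirm before amplifying further.
    return confirm("This post is spreading rapidly. Share anyway?")

# Example: a command-line stand-in for the confirmation dialog.
if attempt_share(25_000, confirm=lambda msg: input(msg + " [y/N] ") == "y"):
    print("Shared.")
```

The intervention is content-neutral in Goodman's sense: it keys only on how fast something is spreading, not on what it says.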
In delineating these possible options, Goodman noted that “law is only one regulator, and should serve as a prompt for more agile changes.” Gilad Edelman, on the other hand, dispensed with questions of law and even policy altogether, focusing instead on the role of norms. He began his remarks by suggesting that people like himself, members of the traditional media, are left out of the conversation about social media. He argued that social media needs to develop a set of institutional professional norms akin to those of reputable news organizations, which adhere to internal rules about fact-gathering, fairness, and accuracy. He pointed out that many critiques of mainstream media, namely that it is biased, inaccurate, or unfair, use standards developed by the media itself to critique the media, which is a good thing. Social media lacks such standards, and also lacks journalism’s separation between business and editorial.
Edelman did not reserve all his critique for social media, suggesting that journalists, too, need to rethink professional norms when it comes to social media. He asked the audience to raise their hands: how many people in the room use Twitter regularly? How many people are on Twitter right now? “That’s really weird!” Edelman exclaimed when he saw how many hands had been raised, prompting a big laugh. He then broke down the math: around 14 percent of US adults are daily Twitter users, which, in a room of roughly 50 people, would have equated to around 7 users. Around 45 had raised their hands. To Edelman, this dynamic illustrated why we can’t conceive of this merely as a market power or monopoly problem. Twitter is small compared to the other platforms in terms of revenue and number of users, but it is unquestionably influential because media, academics, policymakers, and the people they pay attention to are disproportionately on Twitter. So, though Twitter doesn’t have much revenue or market power, Edelman called being denied access “a kind of civil death.” He blamed the media for conferring on Twitter an “awful, agenda-setting power” that makes it a crucial tool for participating in debate and democratic self-governance.
All the discussants noted that freedom of speech strongly shapes what interventions are possible or desirable. Fukuyama pointed out that if you ask most people what the problem with the platforms is, they’ll list fake news, hate speech, and conspiracy theories. He countered that this toxic speech can’t be the problem, because it is protected under the First Amendment. The underlying problem and the real harm, he suggested, is the amplification or silencing of speech on a very large scale. Government action can result in amplification or silencing, which is why the First Amendment focuses on government limitations on free speech. But amplification and silencing are now being done by platforms, with huge effects on political speech and discourse. The platforms moderate content without transparency and don’t give individual users control over what they see. Fukuyama was concerned that we have delegated huge moderation powers to the platforms and then put political pressure on them to control hate speech, remove Donald Trump, and suppress conspiracy theories. “Both for normative and practical reasons,” Fukuyama said, “I don’t think the platforms have the legitimacy or the judgment to be the arbiters of what is appropriate political speech in a modern democracy. The analogy is that power to mediate or moderate content is like a loaded gun that you’re hoping the guy on the other side of the table will not pick up and shoot you with.” He suggested that progressives are okay with Mark Zuckerberg and Jack Dorsey running the show, but that Elon Musk’s Twitter bid had illustrated the limits of relying on the beneficence of those in charge of the platforms. “What if a Rupert Murdoch bought one of the platforms in the future?” he prompted, declaring that institutions should be neutral, not dependent on the whims of whoever is running them.
Fukuyama was also skeptical of antitrust, regulation, and interoperability, a proposed solution wherein personal data could be seamlessly ported between platforms. Antitrust isn’t fit for this purpose, Fukuyama stated, because it is not focused on harms to democracy, and he doubted that there is sufficient political will for enforcement. On the regulatory front, he was concerned that privacy rules limiting the power of the platforms would lock in the incumbents’ advantage through compliance costs, as has been the case with the EU’s General Data Protection Regulation (GDPR). And he was skeptical of interoperability on technical grounds. Fukuyama was not all critique, however. His proposed solution is “middleware”: a competitive layer of third-party curation companies, selected by the user, that would outsource content moderation and introduce competition. The other panelists weren’t completely won over by this idea, even after a brief demonstration, given that the business model for such providers is unclear, they would likely require regulatory support, and they could intensify content bubbles.
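To make the middleware idea concrete, here is a minimal sketch (a hypothetical illustration under invented names such as MiddlewareProvider and dampen_virality, not Fukuyama’s actual design or any demonstrated product) of how a user-chosen third-party filter could sit between a platform’s raw feed and what the user ultimately sees:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    engagement: int  # likes + shares, a crude virality proxy

# A middleware provider is simply a ranking/filtering function the user picks.
MiddlewareProvider = Callable[[List[Post]], List[Post]]

def chronological(posts: List[Post]) -> List[Post]:
    return posts  # no re-ranking: show the raw feed as-is

def dampen_virality(posts: List[Post]) -> List[Post]:
    # Down-rank highly viral content instead of amplifying it.
    return sorted(posts, key=lambda p: p.engagement)

def render_feed(raw_feed: List[Post], provider: MiddlewareProvider) -> List[Post]:
    # The platform hands its raw feed to the user's chosen middleware,
    # so curation decisions are made outside the platform itself.
    return provider(raw_feed)

feed = [Post("a", "calm local news item", 12), Post("b", "outrage bait", 9000)]
for post in render_feed(feed, dampen_virality):
    print(post.author, "-", post.text)
```

The design point is that users dissatisfied with one curator could switch to another without leaving the platform, which is where the competition Fukuyama envisions would occur.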
Professor Eric Goldman, too, focused on the nature of content moderation. Moderation is intrinsically a zero-sum game, he argued: every time a moderation decision is made, someone gets what they wanted and someone else doesn’t. This obviates the possibility of compromise and means that whoever dislikes the outcome is likely to question the legitimacy of the process, rendering the problem intractable. Fundamental to progress, according to Goldman, is giving up the idea of a single platonic content moderation solution. For him, it follows that speech-related regulation and enforcement are inherently partisan. Regulation can be weaponized; as examples, he pointed to recent legislation in Texas and Florida created as retaliatory measures against Disney and social media platforms for speech that Republican lawmakers and officials disliked.
This panel, as much as any other at the conference, illustrated why mitigating the harms from large media platforms is so challenging. As the panelists illuminated from several angles, the platforms have enormous power to moderate speech, and a solution must reduce their centrality in moderation while still respecting the broad speech protections afforded by the First Amendment. How to do both while also addressing the myriad harms from disinformation, information silos, and toxic speech was a problem all the speakers recognized, but none could satisfyingly solve.