Ashutosh Bhagwat argues in new research that expecting social media platforms to serve as gatekeepers for the “truth” founders on economic, organizational, and democratic grounds. In fact, the end of media gatekeepers and elite control over public discourse may be what is necessary to reinvigorate the marketplace of ideas and reduce political polarization.
One of the most dramatic consequences of the growth of social media as the dominant forum for public discourse has been the collapse of sources of information that are broadly trusted across society. The result has been a flood of mis- and disinformation, which has undermined the very existence of a common base of facts accepted by the general public. This is a troubling development, it is argued, because such a common base is essential to a system of democratic governance. As a result, a number of influential figures today, both in politics and in the media, are urging social media platforms to take on the role of gatekeepers of information, cleansing the information ecosystem of falsehood and reestablishing that common base of facts. This, however, is a very bad idea, both because social media platforms are exceptionally poorly situated to act as effective gatekeepers, and because the very idea of gatekeepers is in deep tension with basic principles of free speech and democracy.
To understand the modern desire for gatekeepers, it is important to know the historical context. The crucial insight here is that through most of the twentieth century, society was dominated by gatekeepers of knowledge and news: the institutional media. Early on, this meant the major daily newspapers. By the 1930s, the central role of newspapers was challenged, and then displaced, by commercial radio stations, which emerged as the critical, and popularly accessible, media institutions (think about President Franklin Delano Roosevelt’s “fireside chats” during the Great Depression). After World War II, and especially starting in the 1950s, radio in turn was displaced by its broadcasting cousin, television—between 1946 and 1951 the number of television sets in the United States rose from 6,000 to 12 million, and by 1955 half of U.S. households owned televisions. Moreover, while there were innumerable local television stations, from the 1940s onwards national news and programming were dominated by just three networks, the National Broadcasting Company (NBC), the Columbia Broadcasting System (CBS), and the American Broadcasting Company (ABC).
This tight control over national dialogue by a handful of actors continued until technological change began to break it down towards the end of the twentieth century. During their heyday, however, figures associated with broadcasting were the trusted communicators for the nation. Two figures in particular stand out, both, as it happens, associated with CBS. The first was Edward R. Murrow, who first came to fame for his radio reporting in the lead-up to and during World War II. Murrow then migrated to television, where his historic 1954 broadcast criticizing Senator Joseph McCarthy for his anti-communist demagoguery helped usher in McCarthy’s fall from grace. Murrow was succeeded by Walter Cronkite, who in 1962 became the host of the CBS Evening News. So dominant was Cronkite’s influence that in a 1972 poll he was named “the most trusted man in America.” Murrow and Cronkite thus epitomized the roles of trusted and influential sources of information within the institutional media.
If we look at the institutional media during its heyday (say, from the turn of the twentieth century until the late 1980s), two common characteristics jump out. The first is concentration and scarcity. This was true both for economic reasons, in the case of newspapers and the dominance of the three television networks, and because of physical limitations on the availability of local broadcasting licenses—as the Supreme Court put it, electromagnetic spectrum is a “scarce resource.” The importance of this scarcity was that the institutional media were accepted as gatekeepers because the public had no choice but to accept them—other, more marginal voices simply had no access to mass audiences.
The other shared characteristic was an expressed desire to achieve “objectivity,” meaning reporting without opinion or bias. This goal, called “impartiality,” became the industry standard by the 1920s, and was included in the first Code of Ethics of the Society of Professional Journalists in 1926. This embrace of objectivity was clearly driven originally by economic factors. As the institutional media became concentrated, each actor sought to maximize its audience across the political spectrum. Projecting an objective, nonpartisan image was crucial to that quest. In addition, the Federal Communications Commission’s “Fairness Doctrine,” which applied to radio and television broadcasters beginning in 1949, also strongly incentivized impartiality in those industries.
It is crucial to understand, however, that both scarcity and objectivity were historical anomalies. In the eighteenth and nineteenth centuries, newspapers and other printed materials such as pamphlets were plentiful. Newspapers during this era were not “impartial.” They were generally explicitly partisan and affiliated with particular political parties or groupings. As such, newspapers were not particularly effective gatekeepers, and they were certainly not broadly trusted across society.
Today, in many ways, we find ourselves back in the pre-twentieth century world. The FCC repealed the Fairness Doctrine in 1987, resulting in the rise of right-wing, partisan talk radio. The spread of cable television sharply reduced scarcity in the television industry, culminating in the founding of the explicitly partisan, cable-only Fox News channel in 1996. But it was the rise of the internet, and especially the spread of social media beginning in 2006, which spelled the true end of gatekeepers. In a world in which every citizen became a potential publisher, people suddenly had a choice of what voices to pay attention to. For similar reasons, the range of opinions and “facts” expressed publicly became massively more diverse, and so consequently did worldviews. Political polarization ensured that people embraced those worldviews that reflected their own preexisting views and biases.
One important response to these developments is a chorus of voices, primarily from the political left, urging social media platforms to take on the role of gatekeepers by blocking false or misleading information and minimizing the spread of extreme or polarizing content. Such calls have originated from important political leaders such as Senators Amy Klobuchar and Elizabeth Warren, as well as many left-leaning media outlets. But is it wise to urge or force social media to become gatekeepers (or as Mark Zuckerberg put it, “arbiters of truth”)?
The answer is no, for several reasons. First, social media have no economic incentives to act as responsible gatekeepers. Objectivity and trust benefited traditional institutional media companies by maximizing audiences. But the stock-in-trade of social media is not audience as such but engagement; and it has become increasingly clear that what maximizes engagement is not objective content, but rather divisive content that confirms existing biases. Social media firms are for-profit companies and cannot be expected to embrace roles that reduce their profits.
Second, social media firms have absolutely no expertise or training that would enable them to be effective gatekeepers of truth. While social media algorithms are quite good at determining whether content is relevant and interesting, it is seriously doubtful whether they would be good at determining its adherence to “truth”—as the recent travails of AI platforms such as ChatGPT tend to confirm. In addition, the staff of social media firms are primarily software engineers, not journalists or trained experts, and so have no innate advantage in sorting truth from falsehood. Further, when platforms have relied on outside expert consensus to label content as false, the results have been mixed at best: consider the original decision to label the lab-leak theory of Covid-19’s origins as misinformation, or the decision to suppress the Hunter Biden laptop story.
Finally, it is highly questionable whether any gatekeepers are desirable. Are gatekeepers, and deference to “experts” chosen and designated by those gatekeepers, really the best way to identify “truth” and, conversely, misinformation? Basic First Amendment principles strongly suggest it is not. Reliance on gatekeepers seems flatly inconsistent with the famous free-speech metaphor of the “marketplace of ideas” coined by Justice Oliver Wendell Holmes, Jr.: “that the best test of truth is the power of the thought to get itself accepted in the competition in the market.” Gatekeepers, after all, are designed to limit competition among ideas. Gatekeepers also seem inconsistent with another famous free-speech principle, this time stated by Holmes’s colleague and frequent ally Justice Louis Brandeis. As Brandeis said, when faced with false or dangerous speech, “the remedy to be applied is more speech, not enforced silence.” Gatekeepers are designed to silence, not enable, “more speech.” More fundamentally still, reliance on gatekeepers is inconsistent with the central role of citizens in a participatory democracy, in which the people, not elite or state institutions, get to make ultimate choices about truth and policy.
In short, perhaps the collapse of gatekeepers is not such a terrible thing after all. Even if we no longer share Holmes’s and Brandeis’s faith in the power of competition and “more speech” to discover the truth, it does not follow that yielding control to elites is the solution. Perhaps, rather than trying to recreate a bygone era, we should be thinking about how to reinvigorate the marketplace of ideas and foster a public discourse that surmounts political polarization.
Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.