In his book The Reality Game: How the Next Wave of Technology Will Break the Truth, Samuel Woolley explores how governments failed to foresee the rise and spread of false news, resulting in a lack of laws that would protect the flow of news and govern political communication online.
Conspiracy theory has arguably existed since the beginning of human civilization—that is, since power and rumor first began to affect human discourse. Today, however, social media has created more spreadable, more potent, and farther-reaching conspiracies than those spread by older forms of media. These effects travel offline. Around the globe, the dissemination of social media conspiracy has been followed by violence and death. While it’s important for people to be able to discuss ideas freely, it is not all right—or legal in most democracies—for that discussion to incite violence, spread hate, or perpetuate slander. This is why social media companies must work to stop the flow of such information. A conspiracy theory that cites a bogus secret to argue for a violent response to a certain religion or race, for instance, has no more place on social media than it does on television. More than that, the people who use such flawed logic with the intent to generate violence should be kicked off social media platforms and prosecuted by authorities.
What About Policy?
We are deeply lacking in laws that would protect the flow of news and govern political communication online, whether that communication is aboveboard or hidden and manipulative. In the United States, the Federal Election Commission made a sweeping decision in 2006 to all but ignore political campaigning online. The FEC ruled that, of all online political activity, only paid political ads fell under campaign finance law, and the agency has done a horrendous job of monitoring even that particular space. The government and social media firms have mostly relied—and still rely—on Section 230 of the Communications Decency Act of 1996. This widely misinterpreted policy was intended to divest social media firms of responsibility for the speech that users engaged in using their platforms. If a neo-Nazi wrote an anti-Semitic diatribe online, for instance, the company hosting the post wouldn’t be culpable. Under Section 230, the US government gave internet-oriented corporations the right to censor harmful communication on their sites. But, crucially, it also passed off to the companies much of the onus of making tough decisions about free speech, without holding them responsible if they made an error in judgment.
Although they sometimes took the relatively clearer route of deleting content associated with violent extremism, the social media companies moderated disinformation and political harassment content in a very piecemeal fashion and took Section 230 as a license to do little or nothing about problematic political content. Executives at the companies pointed to it, confusingly, as evidence for their perennial claim that they were “not the arbiters of truth.” In fact, the statute gave them license to arbitrate content on their platforms, but in the cyber-libertarian ethos that has long pervaded Silicon Valley, they chose not to act. They scaled sites blindingly fast without much consideration for ethical design or, God forbid, measures to prevent political misuses of their tools. There was little to no system for dealing with digital propaganda at any of the major social media firms prior to 2016. Even the system that exists today is ad hoc. In interviews with me, people at Facebook have described their company as a plane being flown while it is only half built.
The US government, particularly the Federal Election Commission (FEC), the Federal Communications Commission (FCC), and the Federal Trade Commission (FTC), has failed to consider the possibility of social media being used as a tool to challenge fundamental democratic ideas. These regulatory bodies have ignored the possibility of these tools undermining not only cogent civic discussion but also parts of the voting process. The FCC and FEC seemed to have barely wrapped their heads around the concept of email just as social media burst onto the scene. The slow-moving leviathan of government failed miserably to regulate these tools, which quickly became the media of choice for people the world over, and their primary means of gathering news. Both the US government and Silicon Valley, to put it kindly, prioritized innovation over ethical rigor and caution. Put more bluntly, they prioritized economic and user growth over democracy. The mythical notion of “scale,” constantly discussed by venture capitalists and start-up employees in Menlo Park, Palo Alto, and Mountain View, won out over human rights.
Internationally, other regulators have made more concerted and educated attempts to deal with the problem of digital propaganda. The European Union (EU) and some of its member states, including Germany, have led the charge, albeit in a rather heavy-handed fashion. In early 2018, German policymakers implemented a law that institutes heavy fines—up to 50 million euros, depending on the offense—on internet-based companies that fail to remove hate speech from their platforms. In 2019, German regulators issued decisions that have crippled Facebook’s ad business in that country. In the spring of 2018, the EU rolled out the General Data Protection Regulation (GDPR), a consumer data protection law that requires companies like Facebook and Twitter to be more open about what data they hold on users and how they make use of it. This law has implications for digital political communication—from individualized political harassment and doxing to political ad sales on social media—but, again, the legislators who penned it often overlooked the feasibility of technical implementation in favor of sweeping reform.
Current policy solutions to the new problems of the age of data that have been enacted in the EU, Brazil, and other countries—such as GDPR and the Brazilian Internet Bill of Rights (“Marco Civil”)—are moves in the right direction, but some experts have criticized them as too broad, possibly unenforceable, and hewing too close to censorship. By other estimations, governments are applying Band-Aids to societal ailments that amount to internal bleeding. These ostensibly simple responses are said to reflect no true understanding of how the problems developed or what kind of remedy is needed. In the words of RAND Corporation researchers, policy efforts to fight disinformation use the “squirt gun of truth” against the “firehose of falsehood.”
Had they looked to history, politicians and technologists would have known that with great media innovation comes great change. The creation of the printing press in the Middle Ages, for instance, was followed by nearly two hundred years of propaganda operations between the Catholic and Protestant churches. Social media have allowed for this same type of knowledge-based conflict, but with computational enhancement, pervasive anonymity, and the automation of communication. False news has a long history—but those in power and those at the forefront of computer science failed to heed it. Today it seems that pundits and politicians alike are capitalizing on conversations about “fake” news—using the fear and confusion surrounding the internet’s role in causing polarization and distrust to further their own agendas. Meanwhile, social media companies continue to scramble to convince regulators and the public that they are not the arbiters of truth—that they are technology companies, not media companies. They are losing this battle, but governments don’t seem to be picking up the slack.
Excerpted from The Reality Game: How the Next Wave of Technology Will Break the Truth by Samuel Woolley. Copyright © 2020. Available from PublicAffairs, an imprint of Hachette Book Group, Inc.