Facebook and Google have failed at self-regulation. If we want to avoid an authoritarian future, we need to reduce the influence of internet platforms, writes early Facebook investor Roger McNamee, author of the new book Zucked.
Nearly three years have passed since I first observed bad actors exploiting Facebook’s algorithms and business model to harm innocent people. I could not have imagined then the damage to democracy, public health, privacy, and competition that would be enabled by internet platforms I loved to use. If you live in the United States, United Kingdom, or Brazil, your country’s politics have been transformed in ways that may persist for generations. In Myanmar or Sri Lanka, your life may have been threatened. In every country with internet access, platforms have transformed society for the worse. We are running an uncontrolled evolutionary experiment, and the results so far are terrifying.
As people, and as citizens, we were not prepared for the social turmoil and political tumult unleashed by internet platforms. They emerged so quickly, and their influence over both politics and commerce spread so rapidly, that they overwhelmed cultural, political, and legal institutions. Some will be tempted to relax now that the 2018 midterm elections have come and gone without obvious foreign interference. Conscientious citizens may stop worrying, comfortable that policymakers are now on the case. Instead, I hope they will see that foreign meddling in campaigns is merely one symptom of a much larger problem, a problem for which the internet platforms themselves—and nobody else, in or out of government—must be called to account.
In his brilliant book The Road to Unfreedom, Yale professor Timothy Snyder makes a convincing case that the world is sleepwalking into an authoritarian age. Having forgotten the lessons of the twentieth century, liberal democracies and emerging countries alike are surrendering to autocratic appeals to fear and anger. Facebook, Google, and Twitter did not cause the current transformation of global politics, but they have enabled it, sped it up, and ensured that it would reach every corner of the globe simultaneously. Design choices they made in the pursuit of global influence and massive profits have undermined democracy and civil rights.
Let me be clear. I do not believe the employees at Google, Facebook, or Twitter ever imagined their products would harm democracy in the United States or anywhere else. But the systems they built are doing just that. By manipulating attention, isolating users in filter and preference bubbles, and leaving them vulnerable to invasions of privacy, loss of agency, and even behavior modification, internet platforms inadvertently created a weapon for those who would impose their will on the powerless. Facebook’s impact on the 2016 US presidential election and on the fate of the Rohingya in Myanmar are not isolated events. They are glaring examples of a global problem for which we lack a solution.
The same can be said for the spread of conspiracy theories and enabling of extremism on YouTube. The single-minded pursuit of growth by corporations that do not believe they should be held accountable for the consequences of their actions will always produce undesirable side effects. At the scale of Facebook and Google, those side effects can transform politics and public health for the worse.
It is no exaggeration to say that the greatest threat to the global order in my lifetime has been enabled by internet platforms. If we want to avoid an authoritarian future, we need to reduce the influence of the platforms that enable authoritarians to impose their will. Whatever good comes from Facebook, Google, and their subsidiaries cannot justify harming billions of people and destabilizing important institutions—the press, electoral systems, and international systems of governance—that protect the innocent. Playing nice with the internet platforms has not worked. It is time to get serious. As home to the internet platforms, the United States has a responsibility to rein them in. It will not be easy, but there are at least two paths.
The most effective path would be for users to force change. Users have leverage because internet platforms cannot survive without their attention. In a perfect world, users would escape filter bubbles, not allow technology to mediate their relationships, and commit to active citizenship. The midterm elections provided evidence that an increasing number of users in the United States are prepared to make these changes. But the midterms also demonstrated that democracy cannot be restored in a single election cycle. A large minority of Americans remain comfortable in an alternative reality enabled by internet platforms.
The second path is government intervention. Normally I would approach regulation with extreme reluctance, but the ongoing damage to democracy, public health, privacy, and competition justifies extraordinary measures. The first step would be to address the design and business model failures that make internet platforms vulnerable to exploitation.
Government intervention should also address all the known harms from the use of internet platforms. Immersed in technology, kids do not develop normal social skills. Teenagers use social media technology to bully one another. Adults seek out the warm embrace of filter bubbles that erode critical-thinking skills. Seduced by convenience, users surrender personal data for ever less benefit. The platforms themselves have failed to safeguard personal data or provide reasonable privacy and security to their users. Consumers have no comparable alternative to Facebook and Google, which use their extraordinary market power to eliminate competition while also undermining the business model of journalistic voices that might threaten them. More than two years after I first shared my concerns with Mark Zuckerberg and Sheryl Sandberg, Facebook has issued apologies and promises to do better. To date, the company has made few substantive changes.
As much as I would like to protect what I like about internet platforms, I am prepared to give up the good to eliminate the harm. We must use any and every tool available to us. Policymakers, who have been hesitant to take action against companies that were universally loved as recently as 2016, are beginning to step up. They must use every tool in the regulatory toolkit. We do not have much time.
Facebook and Google have failed at self-regulation. Hardly a day passes without a new revelation. Facebook reported that hackers had penetrated its system and stolen identity tokens from twenty-nine million users, gathering extensive personal information on fourteen million of them. The hackers gained the ability to impersonate those users on other internet platforms without detection, making this by far the worst security failure by Facebook yet revealed. Facebook reported that the hackers may have been scammers, which would represent an escalation of that threat. Facebook’s WhatsApp subsidiary was blamed for hate speech and election interference in many countries, and may have played an outsized role in Brazil’s election of a right-wing presidential candidate who promised to end democracy. The Washington Post reported that a firm called GIPEC had uncovered numerous advertisements for illegal drugs on Instagram. Then, in a moment of Olympic-class tone deafness, Facebook introduced Portal, a video-messaging device for the home. Incorporating Amazon’s Alexa voice technology and other surveillance tools that will be used to target advertising, Portal will test buyers’ trust in Facebook.
In the two months before the midterms, Facebook ramped up its efforts to limit the impact of foreign interference, disinformation, and other forms of subterfuge. The company opened a “war room” to demonstrate its commitment and introduced tools to protect political campaigns and their staffs from hacking. These actions demonstrate an acknowledgment of the corporation’s responsibility to protect democracy. Regrettably, Facebook appears to have compiled a list of past failures, which it is addressing in sequence, rather than going after the root causes of its vulnerability—its business model and algorithms—or anticipating new threats. Facebook faces a really hard problem—a variant of the game Whac-A-Mole—but it has no one but itself to blame. The proof of this came in October, when Vice attempted to place ads in the names of all one hundred US senators, the vice president, and ISIS . . . and Facebook approved them all.
The sad truth is that even if Facebook can limit overt political manipulation on its platform, it would still pose a threat to democracy. Filter and preference bubbles will continue to undermine basic democratic processes like deliberation and compromise until something comes along to break users out of them. In addition, behavioral addiction, bullying, and other public health issues would remain. We would also be wrestling with pervasive loss of privacy and online security. The economy would still suffer from the anticompetitive behavior of monopolists.
From ZUCKED: Waking Up to the Facebook Catastrophe by Roger McNamee. Reprinted by arrangement with Penguin Press, part of the Penguin Random House company. Copyright © 2019 by Roger McNamee.
Disclaimer: The ProMarket blog is dedicated to discussing how competition tends to be subverted by special interests. The posts represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty. For more information, please visit ProMarket Blog Policy.