Boycott organizers want Facebook to “pick a side” and align its corporate operations with aggressive activism on issues such as racism and social justice. But picking a side would mean a radical shift in the company’s approach to its product and alienate many of its users.

Hundreds of companies joined the Facebook advertising boycott this month, pulling their ad dollars in response to a call from a group of civil rights advocates to protest “hate for profit” on the social media platform.

The controversy kicked off when Facebook came under fire for its handling of racist, false, and violent posts by President Trump. Many Americans understandably find these posts painful. Yet it’s doubtful that meeting the boycott’s demands would produce a better platform.

What should Facebook do?

The boycott’s organizers want Facebook to adopt 10 steps within the next month, including reducing content that’s associated with hate speech and misinformation and removing groups that post that type of content.

The challenge is that the definitions of “hate” and “misinformation” are hotly contested and intensely partisan. For example, while many people find the idea of using the military to suppress protests anathema, polls show that a substantial share of Americans support it. And as abhorrent and inaccurate as the President’s posts about mail-in voting seem to me, the relationship between mail-in voting and voter fraud is the subject of active litigation in California. Should social networks make those viewpoints disappear simply because many people despise them?

Over the last few years, there have been many calls for Facebook to change. Those calls have focused on a range of issues, but they boil down to this: social networks should take more responsibility for the effect of their products on the world, and their content policies should be more aggressive in promoting “correct” political views. For conservatives, “correct” means less interference with conservative content. For liberals, it means a more aggressive approach to racism, hate speech, and misinformation.  

Because of this partisanship, Facebook can’t meet the boycott’s demands without alienating many of its users. As one New York Times reporter put it recently, “They have to pick a side.” But picking a side doesn’t simply mean changing a few words in the company’s terms of service. It means a radical shift in the company’s approach to its mission and its product.

Facebook’s original mission was “to make the world more open and connected,” a goal that focused on expanding the pathways for people to connect with each other, but that didn’t explicitly direct the company toward achievement of any particular political or social outcome. Facebook could succeed in connecting the world, but it wasn’t clear if connecting the world would result in less racism, less violence, less intolerance, and less environmental degradation.

As Facebook has built a powerful platform for speech, it has repeatedly tried to distance the views of its employees from the views of its users. As Mark Zuckerberg said in a post explaining the company’s decision on President Trump’s post, “Personally, I have a visceral negative reaction to this kind of divisive and inflammatory rhetoric…But I’m responsible for reacting not just in my personal capacity but as the leader of an institution committed to free expression.”  

This approach reflected the “platform” concept that’s been so critical to innovation on the internet. While newspapers, movie studios, and record labels had previously determined what content would be available to the public, the rise of tech platforms enabled any person to share content instantly and at scale. For the people who built these products, moving away from gatekeeper models and toward user empowerment meant reserving judgment about political viewpoints that differed from their own.

The shift from gatekeepers to hosts was expedited by Section 230 of the Communications Decency Act, often referred to as the Magna Carta of the internet. The law shields tech platforms from civil liability for content posted by others, whether or not they moderate it to ensure compliance with their terms of service. By mitigating the legal risk of hosting content, Section 230 gave platforms an incentive to build products that made it easy to post and share information.

But criticism of this model has steadily increased since the 2016 election, and it escalated sharply in the weeks after President Trump used Twitter and Facebook to threaten protesters in Minneapolis.

So what would social networks look like if they decided to “pick a side” and adopt a more activist corporate model?  

It’s possible they might look more like Patagonia, one of the boycott’s outspoken participants and a company known for taking bold public positions on social and political issues.

For decades, Patagonia has aligned its corporate operations with aggressive activism on issues such as social justice and environmentalism, proclaiming “We are in business to save our home planet.” During the 2018 midterm elections, its head of communications tweeted, “the difference with @patagonia’s activism is that we put our logo on it.” And while Facebook has tried obsessively to separate its product from its political views, Patagonia has looked for opportunities to marry the two. 

So what would Facebook look like if it were more like Patagonia? 

First, Patagonia’s product specifications reflect its leaders’ environmental philosophy. Similarly, Facebook could align its products with its executives’ political views by expanding its policies on violence and hate speech to require the removal of threatening, racist speech by politicians, and by excluding publications like Breitbart from Facebook News.

Second, Patagonia publicly advocates for social causes it supports. Facebook could invest more resources in corporate advocacy on issues like climate change and criminal justice reform. It could withdraw support for organizations that take problematic positions on social issues, such as opposition to gay marriage.  

Third, Patagonia charges high prices so it can offer a better product. Similarly, Facebook could start charging people to use its service, rather than relying on a behavioral advertising model that many blame for the company’s challenges.  

But for social networks, “picking a side” comes with costs. Patagonia’s model works well in retail, but it’s not the right fit for Facebook. 

Social networks play an important role in fostering access to diverse viewpoints, as offensive as some may be. To be sure, Facebook’s positions have been inconsistent and sometimes protect more powerful users over others. But the boycott falsely presumes that developing a bright-line rule to remove voting misinformation and violent political speech is simple. It’s not.  

For example, it might seem simple for Facebook to remove a violence-baiting post by President Trump, but should it also remove posts that dispute the value of climate change regulation or that argue that abortion should be illegal, two topics that are divided along party lines but that also have critical implications for human well-being? Where do we draw the line, and on what basis? Even standing up to perceived intolerance can result in swift backlash in our present charged environment.  

People who call for more aggressive content policies assume such policies will result in the removal of the posts they dislike, while preserving the ones they do. But tech platforms deal with vast quantities of information, so their content policies tend to operate more like a sword than a scalpel. More aggressive enforcement might remove ideas that you regard as correct and essential. Ironically, even the boycott’s organizers have benefited from a more open platform model, using Facebook to criticize Facebook.

Stricter content policies might also fuel more partisanship. A world where Democrats use Twitter and Republicans congregate on Facebook (or vice versa) would make it more difficult for people across the political spectrum to empathize with each other. There’s already evidence of this trend, with conservatives flocking to upstart social network Parler in recent weeks.

Conditioning the use of a product on a person’s political ideology is fraught terrain. The boycott’s core call-to-action is “stop hate for profit,” but even Patagonia doesn’t require that its customers adhere to the company’s political views. Any racist or climate change denier can walk into a Patagonia store and walk out with a puffy jacket. 

Picking a side is also likely to make tech policy even more partisan. If tech platforms adopt content policies that prohibit viewpoints widely supported by one political party, Republicans and Democrats will seek to reward the platforms that align with their political views and punish those that don’t. 

That dynamic has been on the rise in recent years, with President Trump calling for antitrust investigations against companies he dislikes and Democrats using tech company decisions to boost fundraising. The innovation economy suffers as a result, as companies contort themselves to address the political crisis of the moment and legislators introduce deeply flawed legislation in an effort to grab headlines. At this critical moment in tech policy, we need bipartisanship to improve complex areas like privacy and antitrust.

Finally, if Facebook were to follow Patagonia’s lead in charging high prices to offer a better product, the burden would fall disproportionately on low-income communities. As a luxury brand, Patagonia doesn’t need to appeal to the mass market. By contrast, social networks play an important role in democratizing access to speech, information, and economic opportunity. More than 65 percent of people with incomes below $30,000 use at least one social media product. Charging people to use social networks would introduce a problematic new barrier to access.

Of course, we shouldn’t sit on our hands simply because a more activist corporate model isn’t a good fit for social networks. Even though Section 230 provides civil immunity to tech platforms, it doesn’t prevent litigation against the people who post problematic content, nor does it shield platforms that “develop” content or that face prosecution under federal criminal law. To the extent that existing federal criminal law fails to address certain problematic aspects of online activity, Congress can pass new legislation to tackle those harms, as it has in other areas like human trafficking.

Meanwhile, Congress is actively debating several bills to reform Section 230, such as the EARN IT Act sponsored by Senators Lindsey Graham (R-SC), Josh Hawley (R-MO), and Richard Blumenthal (D-CT). But bills like the EARN IT Act would severely curtail online expression, as they would force companies into costly, lengthy litigation to determine whether they have “earned” immunity. And they often seek to achieve other policy objectives, such as breaking encryption, as a condition of Section 230 protection. A range of tech policy experts—such as Danielle Citron, Benjamin Wittes, and Mary Anne Franks—have proposed more promising and more nuanced alternatives, but their proposals still risk increasing the cost of litigation so significantly that tech companies would have to dramatically narrow the spaces for speech on their platforms.

If we’re serious about reform that could improve tech products in the long run, one of the boldest steps we could take is also one of the softest: we could gather more information. Like many other debates in tech policy, the discussion that’s surrounded the ad boycott has pretended that we know more about the costs and benefits of different policy frameworks than we do.  If Facebook were to adopt a stricter definition of hate speech, how much problematic content would remain and how much vital political speech would be censored? Do more aggressive policies do a better job of protecting people of color in practice, and if so, at what cost to speech that’s important to these communities?  

It’s also an important time to closely study whether different decisions by different tech companies produced better results. Unlike Facebook, Twitter decided to fact-check the President’s statements. But we don’t yet have clear evidence on whether those fact-checks had any meaningful effect on public opinion and whether they resulted in more or less sharing of the President’s racist rhetoric.

While there have been attempts to analyze these questions, there’s more room in the policy landscape for this type of “regulatory curiosity” to gather data that informs future policymaking. Rather than locking in definitive regulation for the indefinite future, a curiosity-oriented approach would push us toward policy tools that treat rules as experiments. For example, following other countries’ lead in using regulatory sandboxes would enable us to trial innovative new regimes for a fixed period of time, gather information about costs and benefits, and then incorporate what we learn into future legislation.

Blunt-force tools like boycotts, censorship, and the EARN IT Act are unlikely to bring us closer to our goal of fostering better social networks. Advertisers and civil rights groups should be commended for pushing Facebook to confront the plague of hatred and misinformation that is an ugly part of our public discourse, both online and offline. But it’s no use pretending that this is an easy task with no downsides.