A new study of three major social networks in China finds that larger platforms censor more content than their smaller competitors, yet tolerating small, relatively free platforms helps the Chinese government maintain sufficiently high market-level censorship in an overall low-pressure environment.
As a political crisis has gripped Hong Kong over the last seven months, videos of hundreds of thousands of marchers have flooded Twitter and quickly turned into sympathetic memes. Yet a search for “Hong Kong” on TikTok reveals “barely a hint of unrest in sight,” even though the Chinese-owned app has over 500 million active users worldwide and ranks ninth among social media platforms, ahead of LinkedIn and Twitter.
Shortly after the initial demonstrations in June, the Chinese government distributed vague guidelines to digital platforms about what they should “censor.” Platform owners are now responsible for obeying these guidelines and blacklisting any offending content generated by their users. For example, if a platform chooses to blacklist “Hong Kong protest,” none of its users will be able to send messages containing those keywords; blocked messages are sometimes accompanied by a warning that reads, “Your message contains sensitive keywords, please try again.”
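As a rough illustration, here is a minimal sketch of how such keyword blocking might work. The blacklist entry, function name, and matching rule are all invented for exposition; real platforms' filters handle word segmentation, homophones, and spelling variants far more aggressively.

```python
# A minimal, hypothetical sketch of keyword-based message blocking.
# Real filters are far more sophisticated than simple substring matching.
BLACKLIST = {"hong kong protest"}  # hypothetical blacklisted phrase

def deliver(message: str) -> str:
    """Return the platform's response to an outgoing message."""
    if any(phrase in message.lower() for phrase in BLACKLIST):
        return "Your message contains sensitive keywords, please try again."
    return "Message delivered."

print(deliver("Photos from the Hong Kong protest today"))  # blocked
print(deliver("Dinner tonight?"))                          # delivered
```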
On the one hand, Chinese platforms are under some political pressure to promptly remove certain user-generated content. If they fail to do so, platforms are subject to fines and risk being temporarily shut down by the government, a cost that is often tied to a platform’s size. On the other hand, by strategically delaying censorship, a Chinese platform may attract users who try to evade censorship by switching between platforms. This opportunity for user switching gives each platform an incentive to differentiate the timing of its censorship, all the more so when its competitors are big and committed to censoring early.
My research examines the relationship between online platforms’ size, the political pressure they face, and their compliance with censorship regulations. I find that while large platforms censor more aggressively on average than their small competitors, decentralizing online market power could help authoritarian regimes maintain information control with minimal enforcement.
Using a novel dataset (provided by the Citizen Lab) on three platforms in China (YY Live, 9158, and Sina Show) with a combined market share of over 60 percent in 2015, I measured each platform’s compliance behavior by examining when, and how many, keywords it added to its own blacklist following a sequence of salient events. The dataset contains the complete history of blacklisted keywords adopted by each of the three platforms over more than two years (May 2015 to September 2017).
I exploited the unexpected occurrence of 30 political and social events during the data-collection period, such as the 2015 Tianjin explosions, which killed nearly 200 people and fueled fears of toxic air and distrust of the government, or the 2016 international tribunal ruling against China’s claim to historic rights in the South China Sea. These salient events triggered government censorship requests and surveillance, and with them the need for platforms to comply.
I extracted the specific event dates from multiple news sources and cross-checked them against their Wikipedia entries, where available. Comparing the timing and frequency of platforms’ blacklist updates over a two-month event window, I find that platforms of different sizes exhibit different compliance behavior: the largest platform not only censored more keywords on average, it also complied faster than its smaller competitors.
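As a rough sketch of this event-window comparison, consider the following snippet. The file names and column names (`blacklist_history.csv`, `platform`, `date_added`, and so on) are hypothetical, and the paper’s actual specification may well differ:

```python
# A minimal sketch of the event-window comparison, assuming a hypothetical
# long-format dataset of blacklist additions. The real Citizen Lab data and
# the paper's exact specification may differ.
import pandas as pd

blacklists = pd.read_csv("blacklist_history.csv", parse_dates=["date_added"])
events = pd.read_csv("events.csv", parse_dates=["event_date"])  # 30 salient events

rows = []
for _, ev in events.iterrows():
    # Keep blacklist additions inside a 2-month window after each event.
    window = blacklists[
        (blacklists["date_added"] >= ev["event_date"])
        & (blacklists["date_added"] <= ev["event_date"] + pd.Timedelta(days=60))
    ]
    for platform, grp in window.groupby("platform"):
        rows.append({
            "event": ev["event_name"],
            "platform": platform,
            "keywords_added": len(grp),                        # compliance volume
            "days_to_first_update": (grp["date_added"].min()
                                     - ev["event_date"]).days,  # compliance speed
        })

summary = pd.DataFrame(rows)
# Larger platforms should show more keywords added and fewer days to update.
print(summary.groupby("platform")[["keywords_added", "days_to_first_update"]].mean())
```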
Two Competing Forces
Motivated by the event-study results, I developed a structural model in which a platform’s profit depends on its own censorship decision as well as those of its competitors, a dependence induced by the switching behavior of users with diverse tastes for censorship. Relative to its small competitors, a large platform is often under higher political pressure and thus more responsive to the government’s censorship requests. However, centralizing market power by merging or shutting down small platforms does not necessarily create more disruption in users’ content creation.
If a market hosts fewer platforms, two factors are at play: first, each platform captures a larger market share and bears higher political costs for non-compliance; second, each platform has a stronger strategic incentive to differentiate itself from obedient competitors by not complying, now that users have fewer alternatives to switch to.
Following this change in market structure, whether a platform is more or less likely to censor during the next salient event depends on which of the two forces dominates. If even a slight increase in a platform’s size alarms the government and significantly raises its expected penalty for non-compliance, the former force, political pressure, dominates and generates more censorship in the marketplace. If, on the other hand, having fewer alternatives makes it sufficiently cheaper for users to search for and switch between platforms, the latter force, the strategic incentive to differentiate, dominates and platforms censor less often in equilibrium.
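To make the two forces concrete, here is a deliberately stylized one-shot game, a toy sketch rather than the paper’s estimated structural model. Every number (the market shares, the switching fraction `alpha`, and the penalty rate `fine_rate`) is invented for illustration: a compliant platform loses a fraction of censorship-averse users whenever a non-compliant rival exists, while a non-compliant platform gains those switchers but pays an expected political penalty proportional to its own size.

```python
# A stylized one-shot censorship game. This is a toy sketch, NOT the paper's
# estimated structural model; shares, alpha, and fine_rate are invented.
from itertools import product

def equilibrium(shares, alpha=0.3, fine_rate=0.6):
    """Pure-strategy Nash equilibria of a one-shot compliance game.

    shares    : baseline market shares (summing to 1)
    alpha     : fraction of a compliant platform's users who switch away
                when at least one non-compliant rival exists
    fine_rate : expected political penalty per unit of own market share
                for non-compliance (pressure scales with size)
    """
    n = len(shares)

    def payoffs(profile):  # profile[i] == 1 means platform i censors
        holdouts = [i for i in range(n) if profile[i] == 0]
        pay = []
        for i in range(n):
            share = shares[i]
            if profile[i] == 1 and holdouts:
                share *= 1 - alpha                    # lose switchers
            elif profile[i] == 0:
                switchers = alpha * sum(shares[j] for j in range(n) if profile[j])
                share += switchers / len(holdouts)    # split the inflow
                share -= fine_rate * shares[i]        # size-based penalty
            pay.append(share)
        return pay

    found = []
    for profile in product((0, 1), repeat=n):
        pay = payoffs(profile)
        deviate = lambda i: profile[:i] + (1 - profile[i],) + profile[i + 1:]
        if all(pay[i] >= payoffs(deviate(i))[i] for i in range(n)):
            found.append(profile)
    return found

# One large and two small platforms (shares loosely echo the setting):
print(equilibrium([0.5, 0.3, 0.2]))   # -> [(1, 1, 0)]
```

With these made-up parameters, the unique equilibrium has the two larger platforms censoring while the smallest holds out: political pressure binds for the big players, and the strategic incentive to differentiate wins out for the small one.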
To quantify the relative magnitudes of these two forces and derive meaningful counterfactual predictions, I estimated the model by exploiting variation in platforms’ market shares across the events in my dataset.
My counterfactual analysis shows that permanently shutting down a small platform could backfire and produce an unintended consequence: lower overall censorship in the marketplace. Once the non-compliant platform is gone, the remaining two platforms share the whole market, and each gains a stronger strategic incentive to differentiate by not censoring, expecting to attract switching users from its now sole competitor. When the absence of a non-compliant platform leads the remaining platforms to comply significantly less often, market concentration drives overall censorship down.
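Continuing the toy model above (and reusing its `equilibrium` function), suppose the smallest platform is shut down, its users are redistributed proportionally, and, with fewer alternatives, users switch more readily (`alpha` rises from 0.3 to 0.5, per the search-cost channel). Again, every number is illustrative rather than an estimate from the paper:

```python
# Counterfactual with the toy model above: remove the smallest platform,
# redistribute its users proportionally, and assume switching gets easier
# when fewer alternatives exist. Numbers are illustrative only.
baseline = equilibrium([0.5, 0.3, 0.2], alpha=0.3, fine_rate=0.6)
shutdown = equilibrium([0.625, 0.375], alpha=0.5, fine_rate=0.6)
print(baseline)  # [(1, 1, 0)]: the two larger platforms censor (80% of users)
print(shutdown)  # [(1, 0)]: only the largest platform censors (62.5% of users)
```

In this toy example, removing the small holdout lowers the share of users on censoring platforms from 80 percent to 62.5 percent, exactly the backfiring effect described above.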
Past research suggests that an authoritarian regime’s chances of survival decline with the number of information sources, unless there are strong economies of scale in information control. To preserve political stability, authoritarian regimes such as China and Russia have long been heavy-handed in regulating private media outlets. With nearly half of the world’s population owning a social media account, however, indiscriminately penalizing emerging platforms for non-compliance no longer comes at a negligible cost in the digital age.
My findings indicate that decentralizing online market power may help an authoritarian government maintain sufficiently high market-level censorship in an overall low-pressure environment: tolerating (or even encouraging) a bit of dissent on small platforms allows big platforms to censor more effectively because it blunts their strategic incentive to differentiate. This might be one of the reasons why, unlike in the United States, where the market is dominated by a handful of mainstream social media platforms, Chinese social media is still “very fragmented and localized.”
Some Implications for Western Democracies
Beyond China, my findings also offer useful insights for regulating digital platforms in Western democracies. In 2017, Germany’s parliament passed a law requiring social networks to delete hate speech and misinformation within 24 hours or face fines of up to €50 million. The German Ministry of Justice criticized Facebook for not handling user complaints quickly enough, saying that the company deleted only 39 percent of the criminal content reported by users. Twitter’s compliance was not found satisfactory either: a German government-funded survey claimed that none of Twitter’s deletions took place within 24 hours.
Although most people dislike misinformation and want it removed, a piece of “fake news” takes time to verify, and it can become the “alternative truth” for many before it is proven deceptive. In fact, creating borderline content has become “one of the few proven recipes for success,” as it brings more engagement “the closer a piece of content comes to violating a platform’s rules.” To counter misinformation, policymakers should pay extra attention to small platforms, especially when large platforms are pressured to purge borderline content quickly.
When two segments of users coexist, one quick to identify “misinformation” and the other taking it as the alternative truth, removing the same piece of content pleases the former at the expense of upsetting the latter. If large platforms are expected to be “more responsible” for removing misinformation, or to act faster, the latter group may disproportionately switch to small platforms that receive less legal attention every time a piece of misinformation goes viral. Consequently, social media mergers and acquisitions affect not only the parties involved but could also significantly distort small incumbents’ incentives to comply with regulation, a distortion that may exacerbate the spread of misinformation and create more “echo chambers” in the long run.