A peculiar feature of WhatsApp groups has made the spread of misinformation much easier. We need better design guidelines for platforms like WhatsApp, ones that can curb disinformation without infringing on user privacy.

 

 

In the 2019 Indian elections, the country’s national parties aggressively used digital platforms, especially WhatsApp, to spread their message with help from an army of dedicated volunteers. The sophistication of their campaigns—which involved the formation of curated groups using data analytics—was described by a former data analyst for India’s governing party (BJP) thus: “Cambridge Analytica probably couldn’t even dream of this level of targeted advertising.”

 

The Indian elections were plagued by misinformation and divisiveness. This was part of an emerging trend: Over the years, rumors and xenophobia have become endemic to WhatsApp and other online platforms. According to a BBC analysis, at least 31 people in India were killed in 2017 and 2018 in lynch-mob attacks fueled by rumors spread on WhatsApp and other social media. Wikipedia even has an entire page dedicated to “Indian WhatsApp lynchings.” In Myanmar, both Facebook and WhatsApp have been used as weapons for spreading hate against the country’s ethnic Muslim minorities. While the role of Facebook in Myanmar’s violence was widely reported in the United States, WhatsApp’s role has largely been ignored.

 

We cannot ignore it any longer.

 

Facebook has been under considerable scrutiny from policymakers, the press, and academic scholars in the United States and Europe in recent years. Its subsidiary WhatsApp, however, gets significantly less attention in the United States, despite being a massive platform with over 1.6 billion active users worldwide. With Mark Zuckerberg laying out a new vision for Facebook, one centered around encrypted conversations and modeled after WhatsApp, it may be a good time for Americans to start paying attention to WhatsApp, which presents its own complicated set of challenges for policymakers and regulators.

 

The rationale behind Facebook’s plan to shift focus toward encrypted messaging is, in part, to sidestep regulation. So far, Facebook and other digital platforms have been shielded from liability for the content published on their platforms by Section 230 of the Communications Decency Act of 1996, which protects internet companies from being held accountable for content their users generate or share. But with growing bipartisan support for increased oversight and regulation of Facebook and other tech platforms, and with so much public scrutiny of divisive content spread through social media, that protected status might change. Unable (or unwilling) to control the stream of fake news and disinformation on its platforms, Facebook seems to be making a reasonable strategic choice in shifting toward the WhatsApp model.

 

Every conversation on WhatsApp is end-to-end encrypted: neither WhatsApp, the carriers, nor governments can observe the private conversations between users. End-to-end encryption is an important service that digital platforms can offer to ensure more privacy for users, particularly civic activists. WhatsApp is less popular in the US, where users tend to prefer other messaging services such as Apple’s iMessage. iMessage is also end-to-end encrypted, but it differs substantially from WhatsApp in that the latter allows the creation of large groups where users can converse in an encrypted environment. In Latin America, Europe, and elsewhere, these groups are a common way for users to receive news and updates, and because of the encryption their content cannot be moderated by WhatsApp. In recent years, this has allowed malicious actors to spread disinformation and fake news stories through the app.

 

A peculiar feature of WhatsApp groups has made the spread of misinformation through the platform much easier: By default, any WhatsApp user can add another user to a group without that user’s consent, as long as they have the person’s contact information. Until recently, this was the only option WhatsApp offered, which meant that if one of your contacts was an admin of a group that occasionally posted extremist content, they could add you to that group without your consent. You could, of course, exit the group after being added, but leaving a group displays a message to all its members saying that you have left. Such a design, where users can be added to a group involuntarily, gives rise to large groups with many passive members who quietly consume content shared by more active, louder members.

 

These features of WhatsApp—the way groups are formed and exited—are seemingly innocuous, but they can have a large impact on how fast information spreads on the platform. Policymakers need to understand these nuances, and lately, the economics profession has been waking up to the accelerating pace of information diffusion. Nobel laureate Robert Shiller, for instance, has suggested studying it through epidemiological models, such as the Kermack-McKendrick SIR model, which describes the rate at which infectious diseases spread through a population.
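For readers unfamiliar with it, the SIR model can be summarized in a few lines. The notation below is the standard textbook formulation, not anything specific to WhatsApp:

\[
\frac{dS}{dt} = -\beta S I, \qquad \frac{dI}{dt} = \beta S I - \gamma I, \qquad \frac{dR}{dt} = \gamma I
\]

Here S, I, and R are the susceptible, infective, and recovered (or no-longer-interested) shares of the population, β is the transmission rate, and γ is the recovery rate. An outbreak takes off when the basic reproduction number R0 = β/γ exceeds one, so anything that raises β, for example each carrier reaching a far larger audience per contact, can tip a message from fizzling out to going viral.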

 

How does the presence of large groups with many passive users influence the epidemiology of information? The fact that WhatsApp makes it easy to add members to large groups without consent can be exploited by interest groups, such as political actors or governments, seeking to spread propaganda and disinformation. As long as an interest group has a small number of dedicated WhatsApp “volunteers” (the “infectives”) who, by targeting large WhatsApp groups, can expose a large number of users (the “susceptibles”) to their message with each forward, misinformation can spread across WhatsApp like an epidemic, with little accountability. Apple’s iMessage, despite also being end-to-end encrypted, does not support large groups and has therefore not become a tool for spreading misinformation.
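To make the analogy concrete, here is a minimal, purely illustrative simulation of that dynamic. All of the parameters (contacts reached per forward, forwarding probability, population size) are hypothetical assumptions, not WhatsApp data; the point is only that raising the audience reached per forward changes how quickly a rumor saturates a population.

# Illustrative sketch only: a discrete-time SIR-style simulation of a rumor
# spreading person-to-person versus through large groups. All parameters are
# hypothetical assumptions for illustration, not WhatsApp data.

def simulate(contacts_per_step, p_forward=0.05, recovery=0.2,
             population=10_000, seeds=10, steps=50):
    """Return the share of the population exposed to the rumor after each step."""
    s, i, r = float(population - seeds), float(seeds), 0.0
    exposed_share = []
    for _ in range(steps):
        # Each "infective" reaches `contacts_per_step` people; a fraction of the
        # still-susceptible contacts starts forwarding the rumor themselves.
        new_infections = min(s, i * contacts_per_step * p_forward * (s / population))
        new_recoveries = i * recovery  # infectives eventually lose interest
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        exposed_share.append((population - s) / population)
    return exposed_share

if __name__ == "__main__":
    one_to_one = simulate(contacts_per_step=5)    # forwarding to individual chats
    big_groups = simulate(contacts_per_step=200)  # forwarding into large groups
    for step in (10, 25, 50):
        print(f"step {step:>2}: one-to-one {one_to_one[step - 1]:.1%}, "
              f"large groups {big_groups[step - 1]:.1%}")

Under these made-up parameters, the person-to-person scenario exposes only a few percent of the population after fifty rounds of forwarding, while the large-group scenario saturates it within a handful of rounds. That gap is the intuition behind treating group size and group-formation rules as policy-relevant design features.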

 


 

The design features that help spread misinformation through large WhatsApp groups have already led to dangerous consequences, as cases of violence in Brazil, India, and Myanmar demonstrate. Following these episodes, WhatsApp has begun to tweak its design to limit the spread of misinformation. For example, forwarded messages on WhatsApp now appear with a “forwarded” label, and the latest update can provide users with additional statistics about them. Similarly, a message can now be forwarded to only five chats at a time. However, as WhatsApp has learned, these tweaks are not enough to deter strategically motivated purveyors of misinformation, who can make minor changes to forwarded messages to sidestep the restrictions. WhatsApp now also offers users the option of restricting who can add them to a group and the ability to decline before being added to a group by a stranger. Yet the default setting remains one in which any user can add any other user to a group without explicit consent, nudging users toward an ecosystem that is still conducive to the spread of misinformation.

 

Digital platforms have an incentive to maintain an ecosystem where information goes viral, despite the considerable negative externalities of such virality. While WhatsApp should not become a censor of information, it can nonetheless do much more to ensure that its users are not bombarded with misinformation, by making more responsible design choices that take the epidemiology of information into account. The media, policymakers, and users should push the company in that direction as well.

 

Due to safety concerns, countries like India have recently begun demanding that WhatsApp change its encryption to enable the tracing of specific messages, but doing so would infringe on user privacy, which should be protected as a matter of principle. A better way to curb the spread of misinformation on WhatsApp without sacrificing user privacy would be to develop design guidelines, particularly for how such messaging apps handle the formation of, entry into, and exit from groups, given the key role these groups play in the epidemiology of information. For example, a regulation requiring WhatsApp-like apps to ship with privacy-maximizing defaults (e.g., requiring consent before a user is added to a group) could be an effective move.

 

We need to bring accountability and facts back to the public conversation, but there is no one-size-fits-all policy appropriate for all digital platforms. As the WhatsApp model appears to be Facebook’s preferred model for the future, it is important for policymakers to appreciate that the regulatory solutions that make sense for Facebook (more moderation) may not work for WhatsApp and could be counterproductive for privacy rights. Meanwhile, until regulators figure this out, better design guidelines and rules can help curb the spread of misinformation and nudge users toward creating a more informed internet.

 

Prateek Raj is an Assistant Professor in Strategy at the Indian Institute of Management Bangalore (IIMB) and IIMB Young Faculty Research Chair.

 

The ProMarket blog is dedicated to discussing how competition tends to be subverted by special interests. The posts represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty. For more information, please visit ProMarket Blog Policy.