WhatsApp’s new terms of service should come as no surprise. For years, Big Tech has been offering its users these “take it or leave it” terms of service, while eliminating competition and using our data to grow more powerful.
While you were busy following the aftermath of the attempted coup on Capitol Hill, you may have received a message from WhatsApp asking you to accept its new terms of service. You probably just said yes, without thinking much about it, as we often do.
WhatsApp is owned by Facebook. In October 2019, Facebook’s CEO and founder, Mark Zuckerberg, gave a speech at Georgetown University. The topic was free speech on Facebook, a platform with a global reach approaching three billion people. When it came to political ads, Zuckerberg said that Facebook would still accept them, essentially arguing that sunlight is ultimately the best disinfectant for lies. “We don’t fact-check political ads and we don’t do this to help politicians, but because we think people should be able to see for themselves what politicians are saying,” said Zuckerberg.
In that 45-minute speech, Zuckerberg made one critical omission. He never mentioned the business model of his company: getting users to agree to vague terms of service and hand over their data, which it then uses to sell ads. Only in passing, when talking about paid political ads, did Zuckerberg add: “From a business perspective, the controversy certainly isn’t worth the very small part of our business that they make up.”
We all know that Facebook makes money from advertising. In 2020 alone, its ad revenues were $80 billion. Advertising is targeted at each individual based on the wealth of data Facebook collects about its users from many different sources. Those users spend considerable time in the Facebook ecosystem (which also includes Instagram and WhatsApp): about one hour per day on Facebook, slightly less on Instagram, and about 30 minutes on WhatsApp, according to one industry blog.
This brings me to three fundamental ways I disagree with the assertions Zuckerberg made in that Georgetown speech. First, I would object to his claim that paid political ads account for only a “small part” of Facebook’s business. Facebook’s accounts are opaque, but its own data tell a slightly different story: between May 2018 and September 2020, Facebook earned $2.2 billion directly from political ads.
Second, and more importantly, even if we ignore this direct effect, Facebook is a platform that creates consumer engagement (some would call it addiction). It wants you hooked on its properties for as long as possible so it can see what you write, like, read, and post. Politics is an essential driver of user engagement. So the indirect effect of political news is what really matters: it is easy to imagine people debating endlessly about politics and then being shown an ad for a product that has nothing to do with politics. At some point, the algorithm determines that you may be in the perfect emotional state to be shown an ad for that very product.
Third, let me discuss the idea of exposing political disinformation to the sun, so that people can see for themselves whether news is fake. If people were indeed exposed to multiple sources of information, this might be true. The problem, however, is that the Internet of the early days, with its open and decentralized structure in which people could search and judge for themselves, is probably long gone. To be sure, some of us might still do that, and that’s great, but most people do not. They stay in their own filter bubble and are unable to get out of it. In fact, because of hyper-targeted recommendations, it is impossible for me to know what is being shown to you. Because I am not you.
Trump’s election fraud claims must be seen in this context. The enormous speed and reach of digital platforms and their algorithms, and their resilience in the face of labeling and other countermeasures, is the perfect rebuke of the sunlight argument. And money keeps flowing behind all this: in the days leading up to the inauguration, Facebook was showing military gear ads next to insurrection posts. YouTube’s algorithm (YouTube is owned by Google and is also funded by ads) likewise does whatever it takes to keep an individual “engaged” (based on the characteristics of that individual, of course) until the right moment comes to show ads, and there is almost no way for people who do not share those very same characteristics to check the algorithmic recommendations. Google’s DoubleClick advertising service placed ads on more than 80 percent of election misinformation sites.
Hyper-targeting allows interested actors (from advertisers to politicians) to aim messages at a very small number of individuals with extreme precision. A politician, for instance, can be a different persona to different targets, and those targets would never know. This also makes it much harder to call out lies: if a platform allows an ad to be shown to just a few people, and a campaign runs tens of thousands of such ads, it becomes extremely difficult for a political opponent or the press to call the lies out, as the toy sketch below illustrates.
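To make the scale concrete, here is a minimal, hypothetical Python sketch of how even a few coarse targeting dimensions multiply into dozens of distinct ad variants. The dimensions, messages, and counts are all invented for illustration; this is not any platform’s actual targeting system.

```python
# A minimal, hypothetical sketch: a few coarse targeting dimensions
# already yield dozens of tailored ad variants. All names and data are
# invented; this is not any platform's actual targeting API.
from itertools import product

ages = ["18-24", "25-34", "35-49", "50+"]
regions = ["rural", "suburban", "urban"]
interests = ["guns", "taxes", "healthcare", "immigration"]

# One tailored message per (age, region, interest) combination.
variants = [
    {"age": a, "region": r, "interest": i,
     "message": f"Candidate X on {i}, for {a} {r} voters"}
    for a, r, i in product(ages, regions, interests)
]

print(len(variants))  # 48 variants from just three coarse dimensions

# With thousands of interest categories, zip codes, and behavioral
# scores, the combinations explode into tens of thousands of variants,
# each seen by only a handful of people, so no single outside observer
# ever sees the whole campaign.
```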
In fact, despite all their obvious differences, Facebook and Google are strategically very close to each other. Both hook you in with “free” offerings, profile you, gather intimate details about you, and sell access to your attention to advertisers. You could decline their terms of service, but then you’d lose your friends on Facebook and your “free” searches on Google. It’s really “an offer you can’t refuse.”
This situation has come about through the combination of the ad-funded platform business model and the market power that these digital platforms have accumulated. This is not a new problem. In 1998, two brilliant computer science PhD students at Stanford University published the article “The Anatomy of a Large-Scale Hypertextual Web Search Engine.” In its appendix, they wrote:
“It is clear that a search engine which was taking money for showing cellular phone ads would have difficulty justifying the page that our system returned to its paying advertisers. For this type of reason and historical experience with other media, we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.”
Here is the irony: The authors of that article were Sergey Brin and Larry Page. We now know how they went from theory to practice: in the last available quarterly results (Q3 2020), Google made over $37 billion from advertising, and it is expected to exceed $150 billion on an annual basis.
The concentration of parts of the digital economy, including platforms such as Facebook and Google, did not occur in a vacuum. These sectors became concentrated because of economic forces (network effects) but also because of bipartisan neglect of antitrust laws. Facebook and Google grew organically, as they had good value propositions for their customers, but also because we allowed them to grow by acquiring hundreds of other companies. In the past 20 years, the GAFAM (Google, Apple, Facebook, Amazon, Microsoft) collectively bought about 1,000 firms; 97 percent of these transactions were not vetted by anyone, very few were challenged, and exactly zero were blocked, in the US and around the world alike.
The result is that we lost the benefits of competition in many different ways. In particular, we missed the effect that competition can have on our privacy and our data. For decades, antitrust practitioners have argued that privacy is a consumer protection issue, not a competition problem. This view is wrong. Competition takes place along several dimensions, with price being only one of them. Quality, choice, and innovation also matter for competition and for consumer welfare.
In fact, when dealing with digital platforms like Facebook and Google, whose business models have been built around supposedly “free” services in order to monetize in other tied markets such as advertising, it does not make much sense to focus a competitive assessment on prices, as these have been set at zero by choice.
Quality, however, will often be the relevant locus of competition. Lack of competition, in many markets, will lead to higher prices and reduced quality. In the context of “zero” prices, the reduction in quality due to monopolization can become even more pronounced. A reduction in consumer data protection and consumer privacy is precisely a fitting example of such a reduction in quality.
Interestingly, in 2019, Dina Srinivasan put forth a detailed case study of Facebook itself and its privacy policies. In the mid-2000s, when Facebook was an upstart social media platform, it tried to differentiate itself from the market leader, Myspace. In particular, Facebook publicly pledged to protect privacy, as decided by its users. Facebook provided privacy and users loved it. And Facebook quickly surpassed Myspace. But as competition began to disappear (helped along by Facebook’s acquisition of Instagram in 2012 and WhatsApp in 2014), Facebook revoked its users’ ability to vote on changes to its privacy policies. Privacy policies and terms of service became take it or leave it. And you have nowhere else to go.
Last week, WhatsApp presented its users with a similar choice: accept the new terms or leave. Users who wanted to learn more about the new terms, like myself, were sent on a wild-goose chase through a set of nested sites.((Google’s ToS are no different.)) My imprecise summary is that, depending on the country you live in (including the US), you have to share account information with Facebook and its affiliates. This includes your account details, logs of how long and how often you use WhatsApp, information about how you interact with others, your device identifiers, IP address, phone number, payment and transaction data, and so forth. These are known as “metadata.” WhatsApp quickly had to clarify that it won’t access encrypted content, but it’s a weak defense: WhatsApp and Facebook can link you across apps by your personal identifiers (e.g., phone number or device ID), and then tie your metadata to everything else they know about you.((It is not even clear which terms are genuinely new, as opposed to something WhatsApp was already doing. WhatsApp’s changes to its ToS might also have other legal implications. When Facebook acquired WhatsApp in 2014, it explicitly promised that WhatsApp would not be required to share any data with its parent company. Promising to protect data, only to break the promise later, is a classic of digital mergers. When it acquired DoubleClick in 2007 (a key acquisition that led to dominance in the ad stack space), Google promised to keep DoubleClick’s database of web-browsing records separate from the names and other personally identifiable information collected from Gmail and its other login accounts; it later substituted (with no consequences) new opaque language saying that browsing habits “may be” combined with what the company learns from the use of Gmail and other tools. This is happening yet again with Google’s acquisition of Fitbit, which enforcers are approving (only the Australian regulator is showing forceful opposition, but it’s a small country).))
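As a rough illustration of how such linking can work in principle, here is a hypothetical Python sketch of joining app metadata to an advertising profile through a shared identifier. Every table, field, and value is invented; this is not WhatsApp’s or Facebook’s actual data model.

```python
# A hypothetical sketch of identity resolution via a shared identifier.
# All tables, fields, and values are invented; this is not WhatsApp's
# or Facebook's actual data model.

# Usage metadata collected by a messaging app, keyed by phone number.
whatsapp_metadata = {
    "+15551234567": {"daily_minutes": 34, "contacts": 212, "device_id": "abc123"},
}

# An advertising profile held by an affiliated service, same key.
facebook_profiles = {
    "+15551234567": {"name": "Jane Doe", "ad_interests": ["fitness", "travel"]},
}

# Without reading any encrypted message content, joining on the phone
# number ties usage metadata to a rich advertising profile.
for phone, meta in whatsapp_metadata.items():
    profile = facebook_profiles.get(phone)
    if profile:
        enriched = {**profile, **meta}
        print(phone, enriched)
```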
Facebook tells us that all these changes will be used “to operate, provide, improve, understand, customize, support, and market our Service.” But what does that mean? Do you understand it? I don’t, and I consider myself moderately tech literate. Sure, some of us might abandon WhatsApp for another service (Signal, perhaps Telegram), but do not be misled by your individual experience: most people won’t move to another messaging app, whether because they fear losing their contacts, out of sheer inertia, or because they cannot fathom the alternatives and the consequences of their actions.
At this stage, you might ask: “So what? It may not look great, but I have nothing to hide. Let Facebook or Google look at whatever they want—ultimately, I get something from them for free.” Not quite: You have also given them quite a few valuable things (your data). Your data will be used to train algorithms to extract statistical insights for many purposes, such as targeting political ads and selling things to other people like you. So your acceptance of the terms of service actually exerts a massive negative externality on your “lookalikes”: people who are statistically similar to you in some dimensions but may have very different privacy preferences.
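Here is a toy Python sketch of that lookalike logic, with invented users and feature vectors: the profile of one consenting user is enough to pick out similar people who never agreed to anything. Real systems use far richer features and models; this only illustrates the externality.

```python
# A toy sketch of "lookalike" targeting with invented users and feature
# vectors: one consenting user's profile is used to target statistically
# similar people who never agreed to anything.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

# Features might encode age bracket, interests, activity level, etc.
consenting_user = [0.9, 0.1, 0.7, 0.3]  # accepted the terms of service

# People who never consented, scored against that profile.
population = {
    "user_a": [0.85, 0.15, 0.75, 0.25],  # very similar
    "user_b": [0.10, 0.90, 0.20, 0.80],  # dissimilar
}

scores = {uid: cosine(consenting_user, vec) for uid, vec in population.items()}
targets = [uid for uid, s in scores.items() if s > 0.95]
print(targets)  # ['user_a']: targeted because of someone else's consent
```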
Or to flip the coin: even if you cared a lot about your privacy, you’d probably still accept the new WhatsApp/Facebook terms since they have already got enough (statistical) information about you anyway, so your decision on the margin won’t have much impact any longer. We are all stuck.
How are we going to get out of this situation? Not easily. The economic and political power of these digital giants is hefty, and they are not going to remain passive. They can shape governments, as we are all well aware by now. But the main elements to inform future policy choices are in front of us. It’s the business model that Google’s founders anticipated would bring trouble, and that business model has only grown worse with market power.
Nobel laureate Paul Romer has suggested introducing a digital tax on ad revenues. I have sympathy for this proposal to the extent that a tax can limit negative externalities: it makes firms pay when they pollute. But a tax does not address market power. Instead, I believe the business model should be challenged first and foremost by rival ideas and competitors.
These competitors must be given a fighting chance through active antitrust enforcement, and should not just count on being swallowed by yet another unchallenged acquisition by one of the GAFAM. Competition can emerge over privacy, complemented by meaningful data protection. The solution lies in bringing back the decentralization of the internet as we once knew it. This is not nostalgia, but a fundamental principle that goes back to the very roots of America’s antitrust tradition, which began in 1890 with the passage of the Sherman Act as a means of checking and dispersing concentrated power.