Antitrust enforcers have tended to stay narrowly “in their lane,” failing to engage with how data is collected and used by digital giants and other actors in the adtech jungle, leaving them susceptible to “competition washing” and “privacy washing.” Antitrust agencies need to integrate privacy into antitrust and cooperate with data protection regulators if they want to stand a chance of addressing data-driven harms.
There is a market power crisis and a privacy crisis, and they compound each other. The collection and cross-usage of personal data by data-driven firms, in defiance of data protection rules, has enabled the creation of extraordinary power. Not only is the way these data-driven businesses collect personal data often unlawful, but data collected in one part of a conglomerate is often used to advantage other parts, creating cascading monopolies that roll from market to market. Data protection regulators (where they exist) are overwhelmed by market power they are not equipped to confront.
Yet the problem does not just lie with poor enforcement of data protection rules like the GDPR in Europe: it lies also with antitrust regulators who have stayed narrowly “in their lane,” failing to engage with how data is used and to update “theories of harm” to grapple with data as the source of market power.
Views that “the pursuit of privacy is not a goal of antitrust, nothing to do with us” and “there is tension between privacy and antitrust” unfortunately still linger, and have allowed antitrust agencies to “look the other way.” Siloing the respective work and thinking of antitrust and privacy regulators has been disastrous.
Three main issues explain antitrust agencies’ reluctance to engage. First, classically trained economists at the agencies default to thinking “but more data is always good! Our models tell us that perfect information is great! A company with more data can produce novel things that benefit consumers!” That more data is good for innovation, and that privacy is “someone else’s job,” are strongly ingrained views among purist enforcers, who are reluctant to think of privacy degradation as something that hurts consumers directly, much as a price increase does. Second, there is still residual credit given to the old notion that “consumers don’t really care that much about privacy” (the “privacy paradox”), even though consumers take privacy-protecting actions online all the time, and empirical research points the other way. And third, the reality of antitrust enforcement is that it is codified in a surprisingly small number of set dance routines that draw from economics but have become ossified in the last few decades: price increases (sometimes quality decreases), vertical foreclosure, and conglomerate “leveraging” from one well-defined market into another.
There is little theory, empirical work, or legal precedent from which to fashion exploitation cases around data extractivism. Classic industrial organization research has other priorities for publication and is not focused on this.
Merger Control: The Enforcers’ Data Gap
When confronted with deals involving data-driven conglomerates (think of Google/Fitbit, or Facebook’s acquisitions of Giphy and Kustomer), antitrust agencies apply the standard dance routine: is there an “overlap” in activities? No? Then we don’t need to worry about loss of horizontal competition. Is there concern about foreclosure of rivals (e.g., by manipulating an input)? Let’s get an undertaking that the input will not be degraded. What about data? We will mandate that the target’s data not be used for advertising. But what about the data being used to leverage power in other applications and extract surplus from consumers with discriminatory offers? Discrimination is good in economics! What theory of harm would that be? We can leave it to ex post enforcement, and to the data protection regulators.
Except one can’t just “leave it,” because data protection regulators are floundering, and ex post antitrust enforcement has proven slow and ineffectual, with little appetite for pursuing “exploitation” as a theory of harm (most standard economic theories of harm are built around “exclusion” concerns, where extensive precedent since Microsoft also makes enforcers less nervous). Without these ex post safety nets, enforcers need to get it right the first time. Feeble, inapt behavioral remedies leave open a vast gap through which a data-driven firm can storm.
For antitrust enforcers hoping to understand how the target’s data can be combined with the buyer’s, and possibly used to leverage power into further applications, the starting point has to be how buyer and target currently use data. Enforcers can get closer to this using concepts and tools from data protection law that allow for forensic analysis of what companies actually do with data.
The foundational unit for how organizations use data is the “processing purpose.” A “purpose” is a specific use to which an organization can put personal data, and is intended to be limited to the use(s) foreseen by the person concerned at the time the data were collected. For instance, users may be asked for their mobile phone numbers to improve login security; if the data is then reused for other purposes, like targeted ads, that additional use is incompatible with the original reason and infringes the GDPR’s purpose limitation principle. Under European court case law, the scope of a processing purpose is limited to what a person can reasonably foresee.
An agency should demand the complete list of processing purposes from buyer and target. There must be a specific “legal basis” for each purpose for which a piece of personal data is used, and it is a legal requirement that this be made clear to consumers. Vague phrases such as “improving users’ experience,” “marketing purposes,” and “we may use your personal data to develop new services” are explicitly ruled out under the GDPR. Nor can multiple purposes be bundled together, with a person forced to accept all of them.
One important step in merger analysis should then be to draw up one forensic sheet of everything the acquirer is using data for, and another of everything the target is using data for, to anticipate what may happen when the two combine. Enforcers persist with canonical, narrow “market definitions,” while data-driven companies operate internal data free-for-alls that respect no boundaries between markets. Purpose forensics can help anticipate what they might do when they add data from a target, and privacy experts can help with the analysis, as the sketch below illustrates.
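To make this concrete, here is a minimal sketch in Python of how two such forensic sheets might be cross-referenced. The purpose registers, data categories, and legal bases below are entirely hypothetical (loosely inspired by a Google/Fitbit-style deal), and a real register would run to hundreds of entries; the point is simply that declared purposes are structured data that can be mechanically diffed.

```python
# Purely illustrative sketch of "purpose forensics": each company's
# declared processing purposes are treated as structured records that
# can be cross-referenced to flag the data combinations a merger could
# enable. Every purpose, data category, and legal basis below is
# hypothetical.

from dataclasses import dataclass


@dataclass(frozen=True)
class Purpose:
    name: str           # the specific, declared processing purpose
    data_category: str  # the category of personal data it draws on
    legal_basis: str    # the GDPR legal basis claimed for it


acquirer_purposes = {
    Purpose("ad targeting", "browsing history", "consent"),
    Purpose("login security", "phone number", "legitimate interest"),
}

target_purposes = {
    Purpose("heart-rate analysis", "fitness sensor data", "consent"),
    Purpose("sleep tracking", "fitness sensor data", "consent"),
}

# Data categories the acquirer gains only through the deal.
acquirer_categories = {p.data_category for p in acquirer_purposes}
new_categories = {p.data_category for p in target_purposes} - acquirer_categories

# Flag each pairing of a newly acquired data category with an existing
# acquirer purpose: a potential post-merger reuse that the target's
# users could not have foreseen when they handed over their data.
for category in sorted(new_categories):
    for purpose in sorted(acquirer_purposes, key=lambda p: p.name):
        print(f"Flag: target's '{category}' could now feed the acquirer's "
              f"'{purpose.name}' purpose")
```

Nothing in this sketch is a formal legal test; it only illustrates that, once purposes are extracted in structured form, anticipating post-merger data combinations becomes a cross-referencing exercise rather than guesswork.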
“Competition Washing” vs “Privacy Washing”
Antitrust agencies are receiving complaints against Big Tech firms for introducing privacy measures that limit third parties’ access to data. Google and Apple have announced changes that, in very different ways, may leave advertising technology firms without access to the data they have relied on to build profiles of consumers. In response, these firms and their trade bodies have complained to competition enforcers that “self-preferencing” is at play. Again, antitrust agencies need to engage with the data protection community to navigate a sensible course.
Complainants in these cases typically want to preserve the existing data free-for-all in which thousands of actors track what every individual consumer views online, what apps they use, and how. This vast free-for-all infringes Europe’s and California’s data protection laws. Data gathered in this way might be used by an algorithm that decides to remove someone from the shortlist for their dream job because it knows they have a gambling problem, a health concern, or the “wrong” politics. The data also enable billions of dollars of fraud in the online advertising industry, as the UK Competition and Markets Authority (CMA) found. Advertisers and legitimate publishers are also paying entirely opaque fees.
What Google plans to do with Privacy Sandbox (eliminating the third-party cookies used by digital advertisers to track users across the web, ostensibly in an effort to still achieve personalized advertising without compromising privacy quite so much), and what Apple has done with its App Tracking Transparency initiative (explicitly asking iPhone users whether they are willing to authorize apps to track them across the web, with the default being “opted out”), may reduce these harms. Antitrust authorities need to be careful not to side with complaints that dress up, as a competition concern, an effort to protect a harmful data free-for-all.
The approach taken by France’s antitrust agency, the Autorité de la concurrence (ADLC), is a good example. In October 2020, the ADLC received a complaint against Apple’s ATT initiative (since implemented in April 2021 with the launch of iOS 14.5), which aimed to give consumers the ability to decide whether an app could use an identification code to track their behavior across the Internet, beyond what the person does on the app itself. The complainants argued that introducing this choice for consumers selectively on third-party apps, while being less clear that the same standard applied to Apple’s own apps, was abusive.
The ADLC did the sensible thing: it consulted its sister agency, the Commission Nationale de l’Informatique et des Libertés (CNIL), which oversees data protection and privacy. Taking into account CNIL’s carefully worded opinion that ATT is in line with GDPR rules and that “its pop-up differs positively from other interfaces” (though more work was needed to establish whether Apple applied a different standard to itself), the ADLC did not adopt the interim measures the complainants asked for. Instead, it took the preliminary view that “the introduction of the ATT framework does not appear to reflect an abuse of a dominant position, leading to imposing unfair trading conditions.” The ADLC vowed to continue investigating whether Apple applies privacy protections equally to its own data collection, and Apple will have issues to address if, for example, it is found to be significantly building up its own advertising business using tracking data, without confronting consumers with the same choice that deprives competitors of that data.
This is a rare example of smart cooperation between data protection and competition agencies, and a good model.
“Privacy Washing” Big Tech’s Internal Data Free-For-All
While antitrust agencies should be wary of “competition washing,” they should also be wary of “privacy washing.”
Since November 2020, the UK CMA has been investigating a complaint against Google’s “Privacy Sandbox,” whereby Google plans to prevent the placement of third-party cookies on its Chrome web browser and introduce a new advertising technology that may prevent data from leaving the browser. As Chrome dominates the browser market, the complainants claim this is a “privacy washing” ruse designed to force Google’s competitors and their clients to conduct all of their business through Google, and to deprive them of the data they need to operate their businesses independently. The CMA has been investigating, and announced it was consulting on agreed undertakings just as Google announced it would delay the Sandbox changes by two more years.
Here again, while the complainants are likely “competition washing” their plea in order to continue the data free-for-all, there is a legitimate concern that Google is “privacy washing” its own conduct too: while Sandbox may reduce the external data free-for-all among thousands of smaller advertising competitors, Google’s internal data free-for-all (which is well documented) may be unaffected.
The delicate issue for antitrust agencies is to avoid protecting the external data free-for-all on competition grounds, while taking action against both the external and the internal free-for-all to ensure the proper functioning of digital markets. Antitrust agencies should again engage with data protection experts to tackle this. “Purpose limitation,” a key concept in data protection, can help constrain the cascading of market power by limiting the use of data to the purpose for which it was originally collected. If enforced, it would prevent a company from automatically opting users into all of its products and data collection, and could be a strong complement to competition enforcement that recognizes theories of harm about the extension of data-based market power.
A Seat at the Table
Data protection agencies are starting to take their place at the antitrust table. As the ADLC and CNIL have shown in France, there is scope for productive joint work. There is also cooperation in Germany, where the Bundeskartellamt treated Facebook’s infringement of the GDPR as an abuse of market power, and key questions on this interplay have been referred to Europe’s highest court. In the UK, the CMA and the ICO (its sister data protection agency) have collaborated in assessing Google’s Sandbox and issued a Joint Statement committing to work together. In the US, complaints by US state attorneys general against Google and Facebook treated privacy degradation as an exercise of market power. President Biden’s Competition Executive Order and new leadership at the FTC and DOJ also appear favorable to integrating privacy into antitrust. Antitrust leaders in the UK, Germany, France, and the US have recently affirmed that privacy and data protection are now a priority. The antitrust agencies need to make a systematic and routine effort to integrate privacy and antitrust to avoid blunders such as Google/Fitbit, and to stand a chance of addressing data-driven harms.
Disclosure: Cristina Caffarra is a Senior Consultant to Charles River Associates in Europe and has been an advisor on antitrust matters to companies and government agencies, both for and against tech platforms. Current or recent clients include Apple, Amazon, Microsoft, Uber, and others. She has not consulted for any parties on the matters discussed in this piece.