Maurice E. Stucke from the University of Tennessee Knoxville and Ariel Ezrachi of Oxford University explain how big data and artificial intelligence can be used to facilitate collusion and potentially harm consumers. The first part of a two-part interview.

Ariel Ezrachi

The Economist devoted its cover page last week to a remarkable project on the dark side of the superstar economy: namely, the rapid rise in concentration and anti-competitive conduct, particularly in the US.

The tech sector, the magazine noted, is at the forefront of this rise in concentration. Once a harbinger of a new kind of capitalism, Silicon Valley has since become a place where “a handful of winner-takes-most companies have taken over the world’s most vibrant innovation centre, while the region’s (admittedly numerous) startups compete to provide the big league with services or, if they are lucky, with their next acquisition.” 

Powerful tech companies, it warned, have used economies of scale to become giants, shifting their focus “from the supply side (production efficiencies) to the demand side (network effects).” “The superstars are admirable in many ways,” the magazine cautioned, “but they have two big faults. They are squashing competition, and they are using the darker arts of management to stay ahead.”

A lot of attention has been devoted in recent years to the growing monopolization of online markets like search, web advertising, mobile operating systems, and online retail. Google is currently facing three antitrust cases in Europe—where it holds a 90 percent market share in online search—after being accused by the European Commission of abusing its dominance in mobile and search to favor its own products over rival services.

What if this isn’t the case of a single company misbehaving but a fundamental and lasting change in the way competition works in the digital economy? In a series of papers over the last two years, Maurice E. Stucke from the University of Tennessee Knoxville and Ariel Ezrachi of the University of Oxford argue that in the world of big data and artificial intelligence, network effects can raise barriers to entry, enabling big platforms to engage in behaviors such as express and tacit collusion and price discrimination, to the detriment of consumers.

Maurice E. Stucke

In the first part of a two-part interview with ProMarket, Stucke—a former antitrust prosecutor at the Department of Justice—and Ezrachi elaborate on the changing dynamics of what they call the “digitized hand” and explain how the market may in fact appear to be more competitive than it really is.

Q: In your recent papers, you argue that the digital economy, which is typically thought of as innovative and highly competitive, is in fact a lot less competitive than we typically assume it to be. Can you explain?

Ariel Ezrachi: The Internet, big data, and big analytics provide us with extremely valuable benefits that often promote a competitive online environment. This is achieved through an increase in the number of sellers, the availability of information, improved market transparency, reduced barriers to entry, and so on. However, we cannot uncritically assume that we will always benefit. When we critically examine the complex algorithm-driven environment, we can witness imperfections that make the new market realities less competitive than one would expect.

In many ways, the new market dynamic might have the characteristics of competition as we know it. But it’s a much more complex environment, in which the invisible hand that we all rely upon has been pushed aside by what we refer to as the “digitized hand.” This hand is controlled by corporations and can be manipulated. It has the capacity to be selective, to generate different levels of competitive pressures on the players, and that results in an environment that operates using different rules from the ones we assume in the theoretical models.

Q: In your upcoming book Virtual Competition, you make the case that big data, algorithms, and artificial intelligence can all be used to potentially harm competition and consumers. How so? 

AE: We identify three main areas of harm: collusion, behavioral discrimination, and the frenemy dynamic.

The first–collusion–includes both express and tacit collusion through algorithms. As pricing shifts from humans to computers, so too will the types of collusion in which companies may engage. Take, for example, the possibility that, as part of dynamic pricing, smart algorithms with artificial intelligence are used to monitor the market and stabilize price competition. Under certain market conditions, each algorithm can adopt a strategy which fosters interdependence between operators – following price increases by competitors and punishing deviations from the new equilibrium.
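A minimal toy sketch of such a rule, with hypothetical numbers and not drawn from the authors’ papers, might look like this in Python: each seller’s algorithm follows the rival’s price upward, capped at a supra-competitive “focal” price, and responds to undercutting by cutting toward the rival’s price, so a unilateral cut buys at most a brief gain before prices climb back to the focal level.

# Toy illustration (hypothetical values), not a real pricing system.
FOCAL_PRICE = 15.0   # assumed tacitly understood supra-competitive price
STEP = 1.0           # how quickly a price climbs back toward the focal price

def reprice(rival_price: float) -> float:
    """Move toward the rival's price plus a small step, never above the focal price."""
    return min(FOCAL_PRICE, rival_price + STEP)

price_a, price_b = FOCAL_PRICE, FOCAL_PRICE
for period in range(6):
    next_a = reprice(price_b)
    # in period 2, seller B (or a human override) tries a one-off undercut
    next_b = 13.0 if period == 2 else reprice(price_a)
    price_a, price_b = next_a, next_b
    print(period, price_a, price_b)
# After B's cut, A lowers its own price in response, and both algorithms then
# climb back to the focal price and stay there: the deviation does not pay.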

Another collusive example concerns the possible use of a single algorithm by numerous competitors to establish a hub-and-spoke alignment of price. To illustrate, consider the use of a single pricing algorithm by Uber and other similar ride providers. To clarify, we have nothing against Uber.

But we use Uber to illustrate how a hub-and-spoke cartel can develop over time. Here you have independent drivers, all of whom rely on a single algorithm to determine the fare. Moreover, when Uber’s algorithm decides, perhaps because it’s raining, that there is a lack of supply, it raises prices for a specific time period and area. The Uber drivers cannot discount from this algorithm-determined price. As Uber’s market power increases, and as more drivers in the market use the same algorithm, you’re likely to witness an alignment of pricing across the industry.
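A stylized sketch of that hub-and-spoke structure, with hypothetical numbers and not Uber’s actual algorithm, shows the mechanics: one central pricing function computes a surge fare from demand and supply, and every driver simply quotes whatever it returns, so no driver can undercut the others.

# Toy illustration (hypothetical fares and surge rule), not a real dispatch system.
BASE_FARE = 5.0      # hypothetical base fare
PER_KM = 1.2         # hypothetical per-kilometer rate

def platform_fare(distance_km: float, riders_waiting: int, drivers_available: int) -> float:
    """The single 'hub' algorithm: apply a surge multiplier when demand outstrips supply."""
    surge = max(1.0, riders_waiting / max(drivers_available, 1))
    return round((BASE_FARE + PER_KM * distance_km) * surge, 2)

# Every 'spoke' (driver) charges whatever the hub computes for the same trip,
# so a rainy evening raises the fare uniformly across all of them.
drivers = ["driver_1", "driver_2", "driver_3"]
quotes = {d: platform_fare(distance_km=8, riders_waiting=120, drivers_available=40) for d in drivers}
print(quotes)   # identical surge fare for every driver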

Our second theory of harm concerns behavioral discrimination, which differs from price discrimination in several important respects. The strategy involves firms harvesting our personal data to identify which emotion (or bias) will prompt us to buy a product, and what’s the most we are willing to pay. Here sellers track us and collect data about us in order to tailor their advertising and marketing to target us at critical moments with the right price and emotional pitch. So behavioral discrimination increases profits by increasing overall consumption (by shifting the demand curve to the right and price discriminating) and reducing consumer surplus.
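A deliberately simplified sketch of those mechanics, with hypothetical signals and weights rather than any actual seller’s system: the seller scores tracked behavioral signals to estimate each shopper’s reservation price and picks the emotional pitch most likely to trigger the purchase.

# Toy illustration (hypothetical profile fields and multipliers).
LIST_PRICE = 100.0   # hypothetical undiscriminated price

def personalized_offer(profile: dict) -> dict:
    """Estimate willingness to pay from behavioral signals and tailor the price and pitch."""
    estimate = LIST_PRICE
    if profile.get("searched_item_repeatedly"):
        estimate *= 1.15          # apparent urgency suggests a higher reservation price
    if profile.get("price_sensitive_history"):
        estimate *= 0.85          # known discount hunters are steered to a lower price
    pitch = "only a few left in stock" if profile.get("responds_to_scarcity") else "highly rated"
    return {"price": round(estimate, 2), "pitch": pitch}

print(personalized_offer({"searched_item_repeatedly": True, "responds_to_scarcity": True}))
print(personalized_offer({"price_sensitive_history": True}))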

Our third theory of harm concerns the unique “frenemy” dynamic between the “super-platforms” and independent apps. A relationship of both competition and cooperation exists between the super-platforms and independent apps. One example involves the operating systems for mobile phones. Two super-platforms—Apple’s iOS and Google’s Android mobile software platforms—dominate. Each super-platform, like a coral reef, attracts to its ecosystem software developers, apps, and accessory makers.

One anticompetitive risk is when the frenemies cooperate to extract data from individuals and promote asymmetrical information flows to foster behavioral exploitation, while simultaneously competing among themselves over the consumer surplus. Another risk is when the super-platforms, as the gatekeepers, can exclude or hinder the independent apps. When the super-platform vertically integrates, its incentives can change. It can engage in unfair practices to favor its own app over rival apps. We see these issues currently in Europe, where there are already three Statements of Objections against Google.

Within that dynamic, perhaps the next frontier will be how those super-platforms control the interface. As internet search changes and digital personal assistants rise, we are distancing ourselves from the junctions of decision-making and basically putting our trust in those platforms.

Q: What role do network effects play in the ability of super-platforms to subvert competition?

Maurice Stucke: They can play a significant role. Some argue that Big Data does not lend itself to entry barriers. Others go even further: they claim that most online markets are notable for their low entry barriers. If one used the traditional factors for assessing entry barriers, one might agree. Many online industries are dynamic and fast-growing. Data-driven mergers often involve free products, where customers seemingly are not locked in. Consumers could easily switch to other free products or services. Finally, launching a competing app may not require a lot of time and investment, and the requisite technology to enter may be standardized.

Take a look at search engines, like Google, Bing, Yahoo!, and DuckDuckGo. They are free and easy to use. Users can easily switch from one search engine to another. Seemingly users are not locked-in by any data portability issues. Moreover, search engines do not display the classic direct network effects that the courts and agencies have identified. So under antitrust’s traditional factors, the entry barriers appear low, obviating the need for antitrust intervention.

In Big Data and Competition Policy, Allen Grunes and I identify four different types of network effects that can be at play in these online markets. We want to be careful here: these network effects are not necessarily bad. They can actually be quite good and benefit consumers with higher-quality products and services. But the data-driven network effects also have the potential to raise entry barriers and enable the big firms to become even bigger, until they dominate the industry.

Our point here is that competition authorities in assessing mergers and monopolistic abuses will have an incomplete picture of the market realities if they consider only the traditional entry barriers and traditional network effects. They must be aware of several additional data-driven network effects, which can lead to market concentration and dominance.

Q: In your paper “When Competition Fails to Optimize Quality: A Look at Search Engines,”((Maurice E. Stucke and Ariel Ezrachi, “When Competition Fails to Optimize Quality: A Look at Search Engines,” Yale Journal of Law & Technology 70 (2016).)) you argue that a prominent search engine may have the incentive and ability to degrade the quality of its search results, despite competitive pressure and the damage this does to consumers. How can a search engine maintain its market power while intentionally degrading quality, especially with competitors literally one click away?

MS: That is the issue competition authorities are grappling with: how is it possible that any search engine can intentionally degrade the quality of its search results when you have rival search engines that are just one click away? And to what extent can companies take advantage of these network effects and engage in anti-competitive practices to tip the market to their advantage?

What sparked this paper was Google’s antitrust troubles in Europe. In its first Statement of Objections against Google, the European Commission expressed concern about Google leveraging its market power in the online general search engine market to create an advantage in the related market of comparison-shopping services. Basically, Google was favoring its own shopping services in its search results. But the debate around the case was largely whether search degradation violates the competition laws. There was far less attention to the underlying issue–whether and how a search engine, faced with rivals, could even degrade quality on the free side.

So our paper sought to provide an analytical framework for assessing whether a firm can intentionally degrade quality when faced with rivals. We consider three necessary, but not sufficient, variables for quality degradation to occur. The first relates to the search engine’s ability and incentive to intentionally degrade quality on the free side of the market–namely the search results–below levels that most users prefer. 

This, in turn, depends on the degree of several data-driven network effects, and the extent to which the search engine benefits from scale and scope. The second variable is consumers’ ability and incentive to accurately assess quality differences. The third variable concerns the difficulty and cost of conveying the search engines’ inherent quality differences to consumers and of switching them to a rival.

With these three variables in mind, we consider instances when a leading search engine can intentionally degrade quality despite competition from rivals.

Q: Why would consumers go along with this? When it comes to the market for online search, clearly the best product won. If Google or another platform degrades the quality of its service, it is very easy for users to leave.

AE: That is the common argument some companies make in order to discourage antitrust intervention. But is it really the case that consumers are informed, empowered, and able to switch? Consumers can perceive differences in quality between search engines when confronted with side-by-side comparisons in blind tests. But it is not altogether clear that consumers, even with direct “Bing It On” quality challenges, act upon quality differences in real life. One reason is the difficulty in perceiving the quality degradation.

How do I know when a search engine is intentionally skewing the results? Second, although it is easy to multi-home, not many of us do. If, for example, many consumers stick with the default option for the search engine, then rival search engines will find it difficult to overcome users’ status quo bias. If most users stick with the default search engine, then the search engine that becomes the default option on most entry points for search (such as the Internet browser) gets the most users and attracts the most advertisers. Thus, the likelihood of anticompetitive quality degradation increases when the search engine controls the essential portals to search, including the underlying mobile operating system.

MS: This raises the tension between economic theory and economic realities. Agencies and courts typically rely on neoclassical economic theories to explain their decisions and help them understand the particular facts of the market under investigation. At times the neoclassical economic theories – premised on rational market participants with willpower who pursue their economic self-interest – cannot easily be reconciled with the economic realities. The agency and court are then in a quandary. Some will discount this divergence as a brief behavioral irregularity: because the market will soon return to normalcy, the antitrust agency need not worry about the deviation. At times, the divergence is brief. At other times, the anticompetitive realities, contrary to neoclassical economic theory, persist.

So some argue that under neo-classical economic theory, intentional search degradation cannot occur for any significant period of time. Consumers simply would use another search engine. Our response is that theory should not trump the economic realities. If you have strong evidence of sustained intentional search degradation, then you cannot ignore or minimize this evidence because it conflicts with your economic theory.

Q: How do you explain the gap between the perception of the digital world as fiercely competitive, and the reality that you see?

AE: The new market dynamic, new technologies, and start-ups have captivated our attention and created a welfare mirage—the fantasy of intensified competition. Yet, behind the mirage, there operates an increasingly well-oiled machine that can defy the free competitive forces we rely on. What appears to be a competitive environment may not be the welfare-enhancing competition that we know. New technologies changed the dynamics of competition as we know it and gave rise to a new environment, which may display the characteristics of competitive markets but is driven by different forces.

Think of the 1998 American movie The Truman Show—a controlled environment which is nothing more than a façade, but has the potential to deliver relative joy to its subjects. The main beneficiary, of course, is the one who controls the ecosystem. Likewise, some online markets may appear to be subject to ordinary free market forces. We, like Truman, may think that we’re ordinary consumers with ordinary lives and unremarkable purchases. We have no idea about how, and the extent to which, we are being exploited by the digitized hand.

Q: In a recent paper,((Ariel Ezrachi and Maurice E. Stucke, “The Rise of Behavioural Discrimination,” working paper (2016).)) you argue that big data and analytics have led to a shift from old, imperfect forms of price discrimination to “near perfect” price discrimination. How does this price discrimination fit into the new competitive dynamic that you describe?

AE: In the online environment, it is easier to track your behavior, gather information about you, and therefore tailor different promotions or pricing to your needs – what is often described as dynamic, differential pricing in the literature. From a competition perspective, we are moving from price discrimination, which at times can be welfare enhancing, to behavioral discrimination.

Online behavioral discrimination, as we explore, will likely differ from the price discrimination we have seen in the brick-and-mortar retail world in several important respects: First is the shift from third-degree, imperfect price discrimination to near perfect price discrimination; second is the overall increase in consumption as the demand curve shifts to the right; and third is the durability of behavioral discrimination. So we explore how online sellers, in tracking us, collecting data about us, and segmenting us into smaller groups can better identify our reservation price. We also explore how sellers can use Big Data to target us with the right emotional pitch to increase overall consumption.

To the customer, the online market might seem competitive: multiple options, many sellers, different products. And yet, when you take the full ecosystem into account, it is possible that customers are getting a lot less.

MS: Some might argue that this isn’t really any different from the advertising and couponing of the past—commercials to promote a certain brand of cereal and coupons in the newspaper or direct mail to induce you to purchase the product. Our response is that the more online firms track you, the better they can personalize pricing and product offerings, and the harder it may become for consumers to discover a general market price and to assess their outside options. Personalization and data-driven network effects can make behavioral discrimination more durable. So with the rise of behavioral discrimination, we may not be as free as we believe we are.

Q: One of the examples you explore is the case of price comparison websites. Why?

AE: Price comparison websites provide a nice illustration of all these dynamics together: they give us a wonderfully transparent vehicle for easily identifying where we can get a better deal—and in doing so they support a competitive environment. And yet, as pricing becomes dynamic in nature and controlled by algorithms, many products and services exhibit similar price levels. Does the similarity in price reflect the competitive price, or an above-competitive price – the result of alignment between algorithms or of contractual mechanisms, such as wide parity clauses, that produce horizontal price alignment?

Q: In the past, you’ve explored the possibility of computers tacitly colluding through AI.((Ariel Ezrachi and Maurice E. Stucke, “Artificial Intelligence & Collusion: When Computers Inhibit Competition,” Oxford Legal Studies Research Paper No. 18 (2015).)) In a more recent paper,((Maurice E. Stucke and Ariel Ezrachi, “Is Your Digital Assistant Devious?” Oxford Legal Studies Research Paper No. 52 (2016).)) you argue that reliance on super-platforms can intellectually capture users, and that the growing use of personal digital “butlers,” such as Apple’s Siri or Amazon’s Alexa, can help entrench that capture. Can you elaborate on what intellectual capture means, in this context?

AE: First, it is important to note that when we speak of the butler, we talk in future terms and consider the next frontier of personal helpers. The future heralds faster, smarter, and more human-like versions of our current digital butlers, which will transform the way we access information and communicate. But as we welcome these intelligent, voice-activated helpers into our homes, we may not recognize their toll on our well-being.

The more we communicate primarily with our personal assistant, the less likely we will independently search the web, read independent customer reviews, use multiple price-comparison websites, and rely on other tools. We will entrust our butler to undertake this effort and report its results.  In relying on our butler, we become less aware of outside options.

One would expect the digital butler, and the platform on which it operates, to become our key gateway to the web. In controlling this interface and accessing our communications and data, the gatekeeper could also abuse its significant market power: engage in data harvesting and behavioral discrimination, and create a distorted view of available options and market reality.

(Note: This is the first of a two-part interview with Maurice Stucke and Ariel Ezrachi. You can read the second part, which explores what regulators and enforcement agencies can do about the new competitive dynamic of the digital world, here.)