Managing the Economic and Social Impact of the Digital Revolution

The four reports that will be presented at the Stigler Center’s Digital Platforms, Markets and Democracy conference offer a compelling case for regulatory action and a very powerful array of solutions.

Today and tomorrow (May 15-16), the Stigler Center will host its third annual antitrust and competition conference. Titled “Digital Platforms, Markets and Democracy: A Path Forward,” the conference will explore policy responses to the political and economic issues raised by digital platforms. You can follow it via live stream here.

Last year, the Stigler Center organized a conference on digital platforms (DPs). That conference brought together a remarkable combination of economists, lawyers, journalists, venture capitalists, and data scientists.

While there was a healthy variety of opinions about the nature of the new problems that digital platforms posed, there was an almost universal consensus that DPs represented such a new phenomenon that they deserved to be studied in greater detail. There was also a nascent consensus that something needed to be done, although there was no consensus at all about what that something was. The main reason for the disagreement was the multidisciplinary nature of the effects of DPs. DPs affect not only the economic sphere but also personal privacy and, in the case of companies such as Facebook and Google, the media and political spheres as well.

This multidisciplinary nature leads to two different problems. First, it allows each discipline to pass the buck to another, without anyone feeling compelled to act. Second, even if there were a political will to act, any solution elaborated with only one of these perspectives in mind would be doomed to fail because of the interrelatedness of the problems. Facebook’s unilateral power to curate its news feed would be less of an issue if the social media giant faced constant competitive pressure from several other independent social media providers. At the same time, the enormous advantage that Facebook and Google have gained over their potential competitors through the accumulation of troves of data would be less of a problem if there were no privacy concerns in sharing data (for example, through reidentification).

For this reason, the Stigler Center decided to organize a multipronged study, addressing DPs in a holistic manner. Shortly after last year’s conference, we created a committee on digital platforms, which assembled four subcommittees charged with studying what problems DPs created in their respective spheres of analysis and what array of solutions could be deployed in response. The four areas of inquiry were: economics, privacy, media, and politics. Although all DPs share many common features, to allow an in-depth analysis we decided to limit the study to the two most dominant platforms: Facebook and Google.

A significant problem with scholarship around policymaking in many areas, and around digital platforms in particular, is the influence (or perceived influence) of corporate money and donations on think tanks, NGOs, and even universities. To preserve not only the independence but also the appearance of independence of the analysis, we recruited people who had not worked, consulted, or acted as expert witnesses for or against Google or Facebook in the previous two years. Furthermore, we accepted funding for this initiative only from the Alfred P. Sloan Foundation (to which we are very grateful), itself an independent and respected third party. The Stigler Center itself is not funded in any way by either Google or Facebook.

In spite of these limitations (and the fact that all this effort was uncompensated), we were able to assemble four extremely high-level interdisciplinary subcommittees, each led by a respected expert in its field: Fiona Scott Morton (Yale University) for the economics subcommittee, Lior Strahilevitz (University of Chicago) for privacy, Guy Rolnik (University of Chicago) for media, and Nolan McCarty (Princeton University) for politics. All the other members of the four subcommittees and their affiliations are listed here: an impressive cohort that spans more than fifteen leading universities in the United States and Europe.

To preserve their freedom of inquiry, these subcommittees were charged with a very broad mandate: i) identify whether DPs represent a new set of problems in your sphere of analysis; ii) describe the nature of these problems; and iii) propose an array of possible solutions. We did not ask the subcommittees to converge on a single proposal, either across committees or within each subcommittee. Thus, not all members of each subcommittee agree with all the proposals presented by their own subcommittee, let alone with the ones presented by the other subcommittees.

This tentativeness was deliberate. Converging on a final proposal requires making a series of assessments about the magnitude of the various costs and the effectiveness of the possible solutions. These assessments are always difficult to make, but they are particularly difficult when there is a dearth of empirical studies, as is the case for DPs. For this reason, we thought that this process of reconciliation could better be done after a discussion at this year’s conference, which will bring together a broader set of people, including representatives from Google and Facebook. We also think that this unfiltered array of solutions may provide a useful reference point for any other committee or government agency interested in the problem. After all, as scholars we are better able to identify tradeoffs than to resolve them.

We do not want to pre-empt the debate that will take place at this year’s conference. Thus, in this introduction we will only summarize the reports’ key findings about the concerns they identify.

The first important insight emerging from the reports is the unique challenge DPs pose, both from an economic and a political point of view. While many of the DPs’ economic features can be found in other markets, they were mostly present in isolation. What is unique to DPs is the simultaneous presence of i) extremely strong network effects; ii) very strong economies of scale and scope generated by the very nature of the digital economy; iii) marginal costs close to zero; iv) extreme complexity and opacity; and v) a global reach that further reinforces the previous four points. This unique combination creates an unprecedented tendency towards monopolization. Given this tendency, a general conviction that markets will naturally self-correct can only rest on faith, not on historical precedent, since no similar precedent exists.

Even in markets that tend towards concentration, the positive effects of competition can be preserved by competition for the market, rather than in the market. Yet, there are several worrying indicia about the intensity of this competition. First, the large and persistent profits that Google and Facebook enjoy are a cause for concern, especially when compared with the lack of any form of effective entry in this space. When barriers to entry are not prohibitively high, high profits attract many entrants, especially when important technological shifts, such as the rise of smartphones, occur. Why, then, has this pattern not played out with today’s DPs?

Second, in a digital economy there are increasing returns from owning data. Even if the privacy-conscious search engine DuckDuckGo had a crawler and search algorithm as good as Google’s, it would probably perform searches less well because it cannot benefit from the billions of past searches that have helped Google rank results better. In other words, in digital markets the incumbent’s advantage naturally grows with time. Thus, the fact that Google and Facebook once overtook Yahoo and MySpace is no guarantee that new search engines and social media companies can do the same today. Given these increasing returns to scale, the burden of proving that potential entry disciplines DPs should fall on those defending inaction, rather than the other way around.

Last but not least, well-established behavioral biases make incumbents much more entrenched than rational, bias-free models would predict. Competition is not “just a click away,” as Google likes to repeat, because a very high percentage of consumers stick to default settings, and 95 percent never look past the first page of search results. The best evidence of the importance of defaults is provided by Google itself. Reports indicate that Google paid Apple $9 billion last year and $12 billion this year to ensure its search engine remains the default setting on all iPhones. These large sums are evidence both of the size of the entry barriers that defaults impose and of the billions in monopoly profits Google fears losing if those barriers were removed.

The technological tendency towards concentration and the high barriers to entry fundamentally alter the terms of the historical tradeoff between the costs of intervention and the costs of inaction. In markets prone to tipping, inaction could lead to entrenched monopolies in a way that might not be reversible. This risk is exacerbated by the important findings of the Political Committee: DPs are not only unique from an economic point of view, but they are unique from a political point of view as well. Political scientists distinguish several sources of power: i) money; ii) media power and the ability to claim First Amendment exemptions; iii) the ability to mobilize voters; iv) a complexity that shields them from public scrutiny; and v) the ability to project their interests as essential national interests (which in the DPs’ case comes from their identification as “national champions”). DPs are unique because they concentrate all five of these sources in their hands.

“The technological tendency towards concentration and the high barriers to entry fundamentally alter the terms of the historical tradeoff between the costs of intervention and costs of inaction.”

Many commentators have blamed the financial crisis on the ability of financial institutions to capture regulators and legislators. Financial institutions have the power of money and complexity, but they certainly do not have the power of direct mobilization; they cannot directly shape public discourse like a media company; and they can claim neither First Amendment protections nor the “national interest” narrative. If financial institutions, with such limited power, could wreak such havoc, imagine what DPs can do. While dangerous per se, this unprecedented political power is especially dangerous when matched with strong market power: it makes any intervention almost politically infeasible. Hence, the cost of inaction rises dramatically over time: the more these companies grow in size, importance, and complexity, the costlier it becomes to fix mistakes later.

Both the Privacy and the Media subcommittees identify the DPs’ market power as the main source of the distortions produced in their respective areas of analysis. In both cases, however, these distortions are exacerbated by other factors. In a competitive market, platforms compete not just on price but also on quality, and privacy is a form of quality. When Facebook still competed with MySpace, it marketed itself as the more privacy-conscious alternative. Only after MySpace went out of business did Facebook start inserting the most intrusive cookies. Yet, even in a perfectly competitive world, it is not obvious that privacy choices would be made according to customers’ desires. First, DPs do not internalize all the harms associated with privacy and security breaches. Second, they lack adequate incentives to consider how their choices affect the interests of people who are not their customers but who are nonetheless affected by a security breach (a leak of one person’s DNA reveals a great deal about their siblings’ DNA, even if the siblings never took a test).

Last but not least, the Privacy Report presents new evidence on the perverse effects of so-called “dark patterns” (manipulations of a user interface with the purpose of “obscuring, subverting, or impairing user autonomy, decision-making, or choice to obtain consent or user data”). It shows how an 11 percent consumer acceptance rate for a plan can easily be pushed to 26 percent, or even 42 percent, by introducing increasingly aggressive dark patterns. It also shows that the people most likely to be successfully manipulated are the least educated Americans, who are among the most vulnerable populations to begin with.

The Media subcommittee comes to a similar conclusion. Facebook and YouTube make editorial choices, not dissimilar from those made by the newspapers of yesteryear. Yet, there are three major differences. First, two algorithms designed by a few people simultaneously affect more than 2 billion people, without much of an alternative. Thus, the amount of power granted to a few engineers is unprecedented. Second, as Tristan Harris pointed out at last year’s conference, these engineers use the most sophisticated psychological techniques to make consumers as addicted to the platform as possible, reminiscent of the tobacco companies that used science to design the most addictive cigarettes. Last but not least, thanks to Section 230 of the Communications Decency Act of 1996, DPs do not face any potential liability for their editorial choices. Thus, Facebook faces no liability whatsoever for pushing anti-Rohingya propaganda, created by Myanmar’s military leaders, onto the country’s users, a process that an independent UN fact-finding mission concluded “substantively contributed” to a genocide that led to an estimated 10,000 deaths and the forced migration of 700,000 people.

We are well aware that regulating digital platforms is an extremely complex problem, yet it is a problem that cannot be postponed. Even Facebook itself is calling for some form of regulation. The time has come to elaborate thoughtful and evidence-based solutions.

In this spirit, the four reports present not only a compelling case for action but also a very powerful array of solutions. Yet, we do not want to spoil the conference discussion, which you can follow via live stream here. At the conference, we are bringing together not only the members of the four subcommittees but also more than 130 highly qualified academics, policymakers, civil-society representatives, and journalists to discuss both our reports and a wide range of initiatives taking place around the world. We will debate similar studies carried out by various commissions, as well as the work of think tanks and academic centers, all focused on finding specific solutions to these daunting problems.

The stakes could not be higher. At the beginning of the 20th century, the Progressive Era’s choices about how to manage the economic and social impact of the Second Industrial Revolution defined the shape of capitalism, not only in the United States but in the entire world, for the following 100 years. We are now faced with an equally daunting task: to manage the economic and social impact of the Digital Revolution. One hundred years from now, our descendants will look back at this moment in history as the pivotal one, when capitalism and democracy were either saved or lost. The responsibility is ours. It is not a small or easy task. But if the extraordinary set of people gathered at this conference cannot start drawing the path forward, who will?

For more on antitrust and digital platforms, check out the following episode of the Capitalisn’t podcast:

The ProMarket blog is dedicated to discussing how competition tends to be subverted by special interests. The posts represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty. For more information, please visit the ProMarket Blog Policy.