In an interview with ProMarket, Francis Fukuyama discusses the political threat posed by digital platforms and why he believes a “middleware” solution would be superior to antitrust remedies.
The economic and political power of digital platforms has been the subject of numerous governmental and non-governmental reports in recent years, as dozens of expert panels around the world (including the Stigler Center's) have attempted to find solutions to the threat that tech giants pose to competition and to liberal democracies.
The latest to join this growing list is a report by the Working Group on Platform Scale, an initiative of the Program on Democracy and the Internet at Stanford University, led by Francis Fukuyama. The report, which was published last month, is primarily concerned with the political harms to society posed by the major tech platforms, particularly threats to democratic discourse and choice. “The growing political power exercised by the digital platforms is like a loaded weapon sitting on the table in front of us,” the authors warn.
While many reports have proposed antitrust remedies to some of these harms, the Stanford report advocates for a “middleware” solution, in which platforms open their APIs to third-party companies that would offer users ways to choose how information on the platforms is filtered for them. It also advocates for the creation of a digital agency with the power to oversee these markets.
In order to learn more about the report’s views and proposed solutions, we recently interviewed Fukuyama, a Senior Fellow at Stanford University’s Freeman Spogli Institute for International Studies and one of the most influential political scientists of the past 30 years. The following conversation has been edited for length and clarity:
Q: Your career has been mostly focused on theorizing and defending the virtues of liberal democracy, and the report focuses on the threat that the political power of digital platforms poses for liberal democracies. What is the nature of this threat?
I think that democratic countries have always seen concentrated private economic and political power as potentially very threatening to democracy. This was one of the motives in the passage of the original Sherman Act in the late 19th century.
Since that time, a lot of antitrust law has focused on the economic consequences, but because of the size and nature of the digital platforms, there’s also a very significant political threat. These platforms control a great deal of the political speech that happens in the United States. They can amplify certain messages; they can suppress others; they can subtly guide people to certain views.
This wouldn’t be a problem if they weren’t so large. But when one private organization gets to be of a scale where it can sway a very large number of citizens in political matters, that should be very concerning to anybody who’s interested in the health of modern democracy.
Q: In the report, you refer to the growing power of digital platforms as a “loaded weapon sitting on the table in front of us,” but the report does not name any companies. Can you name the platforms you are focused on in particular?
Facebook, Google, and Twitter. Obviously, Twitter is smaller in reach, but in a way, it’s much more political.
Q: Arguably, the weapon was used in some capacity during the recent US election. There have been media reports that Facebook was lenient toward, or favorable to, misinformation spread by organizations, groups, and individuals supporting Donald Trump because it viewed his administration as friendlier on regulation.
That’s right. There’s a mixed record on this. Facebook has permitted this, but they’ve also flagged or taken down comments that are unfavorable to Trump. Twitter has been more active than Facebook in this regard. I don’t think Google has been doing much at all.
The point is that many civil society groups and activists have been trying to pressure these platforms to essentially get rid of conspiracy theories and a lot of pro-Trump material. That’s where the loaded-gun metaphor is important: Just because you like the decisions Jack Dorsey is making, and Twitter’s flagging of, say, President Trump’s tweets about the “stolen” election, doesn’t mean that this is a good long-term solution for our democracy. Maybe Jack Dorsey is doing something today that you approve of, but maybe tomorrow Twitter will be bought by a conservative media empire and will start flagging things in the opposite direction.
In a democracy, we don’t rely on the good intentions of powerful private individuals to do the right thing—we want institutional safeguards that make it harder for a bad-intentioned person to really do damage to our democracy.
Q: The report seems skeptical about the ability of increased antitrust enforcement to offset the political problem created by digital platforms. Why?
Antitrust law, as it’s developed, has really focused on one issue above all: consumer welfare. This is a legacy of Robert Bork and the Chicago School.
Obviously, if you could use antitrust law to break Facebook and Google up into 12 different companies, that would solve our problem. But I think that realistically, that’s just not on the table right now. Politically and legally, it would be much too difficult.
And the kinds of remedies being sought for various abuses by the platforms (being both a platform and a competitor, exclusionary conduct, buying startups) would have pretty marginal political impact. They do little to solve the kind of problem we are concerned with, which is essentially the platforms’ ability to be the main locus of public discussion of political issues.
We think that abuses in the economic realm should be looked at by antitrust authorities, but for our main concern, which is the threat to democracy, we just don’t think that any [such remedies] are going to be very helpful.
Q: The report stresses that these firms are a potential threat to democracy and that their economic and political power are connected, yet you oppose expanding the goals of antitrust laws to incorporate this connection, in part because this would risk undermining the “policy coherence” of antitrust. That seems like a bit of a disconnect: Why do you think policy coherence is so important in this area?
There is this so-called New Brandeisian school of antitrust that argues that the Sherman Act did envision action for political goals as well as for purely economic ones. In terms of the legal scholarship, you probably could justify that. As a practical matter, I think it could be fairly dangerous. One of the political goals that early antitrust enforcers set, for example, was to protect small producers against large ones. This is a case where I actually think Robert Bork was right: if you put a court in the middle of a dispute between an efficient large producer and an inefficient small producer, you’re basically asking the court to allocate consumer surplus to one or the other with no obvious principled basis.
But the bigger problem, I think, is what’s been labeled “gangster antitrust,” wherein you start using antitrust to pursue particular political objectives. There’s really no guarantee that it will be used for objectives that we deem pro-democracy. The Trump administration has used antitrust to block certain things that it didn’t like—for example, the agreement of certain auto manufacturers to use California emission standards, or AT&T’s acquisition of Time Warner. These actions may or may not be justified on antitrust grounds, but they appear to have been motivated largely by the Trump administration’s desire to get back at political enemies, or perceived enemies. And this is an abuse that could happen in the future if antitrust is broadened to encompass poorly defined political objectives.
Q: So it’s less a question of principles and more a question of whether this method works.
That’s right. Ultimately, if you were to use antitrust in this political fashion, you run into the same problem I referred to earlier: it depends on trusting the antitrust enforcer to act for what we regard as good reasons rather than anti-democratic ones. And that’s going to be very hard to guarantee in the future, given polarization and everything else, which is why I think competition, and a spreading out of that power, is a better approach.
Q: The report proposes what you call the middleware solution. Can you explain what that is?
Middleware would be a piece of software that sits on top of the existing platforms and provides a different way of accessing the material on the platforms. There’s a variety of ways this can be done, but the basic concept is to have the platforms outsource their current efforts to curate the material that they present, especially political material, to a competitive layer of middleware companies that could tailor their curation to the individual preferences of users.
This would give control back to users over what they actually see. Right now, if you look at your YouTube feed, you’re shown a bunch of videos chosen by some AI program based on your prior clicks and likes, but you actually have no idea how that program is making those decisions. What we’d like to see is the ability to pick a different middleware company that would actually allow you to shape those decisions.
In our view, the point of middleware is not to block bad material—the point is to increase diversity and choice and to give individual users much more control over what it is that they see and hear.
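To make the architecture concrete, here is a minimal sketch in TypeScript of what a middleware ranking interface could look like. Everything in it is hypothetical: the `FeedItem`, `UserPreferences`, and `MiddlewareProvider` types, and the open platform API they presuppose, are illustrations of the concept, not any platform’s actual interface.

```typescript
// Illustrative sketch only: no platform exposes such an API today.
// A middleware provider receives candidate items from a (hypothetical)
// open platform API and returns them ordered according to preferences
// that the user, not the platform, controls.

interface FeedItem {
  id: string;
  author: string;
  text: string;
  postedAt: Date;
}

// Settings the user chooses directly, in place of an opaque ranking algorithm.
interface UserPreferences {
  downrankTopics: string[]; // topics the user wants to see less of
}

interface MiddlewareProvider {
  // The platform supplies the items; the middleware decides the order.
  rank(items: FeedItem[], prefs: UserPreferences): FeedItem[];
}

// A deliberately simple provider: newest items first, with anything that
// matches the user's down-ranked topics moved to the bottom of the feed.
const chronologicalProvider: MiddlewareProvider = {
  rank(items, prefs) {
    const byDate = [...items].sort(
      (a, b) => b.postedAt.getTime() - a.postedAt.getTime(),
    );
    const matchesDownrank = (item: FeedItem) =>
      prefs.downrankTopics.some((topic) =>
        item.text.toLowerCase().includes(topic.toLowerCase()),
      );
    return [
      ...byDate.filter((i) => !matchesDownrank(i)),
      ...byDate.filter(matchesDownrank),
    ];
  },
};
```

The point of the sketch is only to show where control shifts: the ranking rules and their parameters live outside the platform, in a component the user can inspect and swap.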
Q: The report outlines different versions of middleware, from a mild version that’s more like outsourcing the tagging of tweets to one where the middleware can rearrange the results and show content that wouldn’t be shown otherwise. There have been companies, like Power Ventures, that offered some sort of middleware solution before, and platforms like Facebook sued them out of existence. What is your preferred solution to this problem, and how much opposition do you expect from platforms?
We don’t actually have a preferred solution. You’ve accurately defined the two extremes. At one extreme, the middleware could actually be your gateway to the platform’s content. The platform itself would be just a dumb pipe that simply served up information, while the middleware company presented the user interface, the rankings, and so forth. You’re correct that the companies would resist this tooth and nail, because it would undermine their entire business model.
The lightest version of it would be essentially some version of what Twitter does today, where they serve up a Twitter feed, and then they just tag things as “this is disputed.”
The former is politically very difficult to do; the latter may not be sufficiently interventionist. We do think that there are probably some intermediate solutions.
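The two poles Fukuyama describes can also be written down, again purely as an illustration reusing the hypothetical types from the sketch above: at the light end the middleware may only annotate a feed the platform has already ranked, while at the heavy end the platform becomes a “dumb pipe” and the middleware owns ranking outright.

```typescript
// Illustrative sketch of the design spectrum, reusing the hypothetical
// FeedItem and UserPreferences types from the previous example.

// Light end: the platform keeps its own ranking; the middleware can only
// attach labels (e.g., "this claim is disputed") to individual items.
interface LabelingMiddleware {
  kind: "labeling";
  annotate(items: FeedItem[]): { itemId: string; note: string }[];
}

// Heavy end: the platform serves raw items as a "dumb pipe"; the
// middleware owns the user interface, filtering, and ranking.
interface GatewayMiddleware {
  kind: "gateway";
  rank(items: FeedItem[], prefs: UserPreferences): FeedItem[];
}

// Intermediate designs would sit between the two, e.g., re-ranking within
// bounds the platform sets, or labeling combined with user-chosen demotion.
type Middleware = LabelingMiddleware | GatewayMiddleware;
```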
It is true that there have been companies that have tried to do this. I think that if it’s just left up to the market, it’s not going to happen; the incentives aren’t there. People would probably pay for this kind of service, but not enough to make it a viable proposition for many companies.
So I do think you’d have to have regulatory intervention, first of all to require the platforms to open up their APIs so middleware companies could offer meaningful products. Then the more difficult one: there would have to be some form of revenue sharing where the middleware companies would have to get some of the advertising revenue to make this a business proposition that they would want to get into. I don’t think that can really be done without statutory intervention.
Q: Platforms have fought previous attempts to change the results or content they display on First Amendment grounds. Do you envision any First Amendment restrictions on this solution?
I think that this is actually more compatible with the First Amendment than the current situation, because essentially we’re saying that a diversity of middleware companies will provide more choice and more competition, which is what the First Amendment seeks to promote.
Many people who have heard about this idea have told us the opposite: isn’t this just going to reinforce existing filter bubbles? The QAnon people will have a QAnon filter that points them to conspiracy theories and so forth. I think the answer is yes, that is likely. But our objective should not be to stamp out every conspiracy theory that we find objectionable. The spirit of the First Amendment is that people have a right to say and hear what they want in a marketplace of ideas. What we don’t want is for any single actor to have the power to amplify, conceal, or restrict what people can say, and that is precisely the power the current platforms have: they can suppress Alex Jones, or they can amplify another kind of conspiracy theory.
Under our solution, you will have the conspiracy theories, but hopefully, they will be restricted to a very small dark corner of the internet and not get broadcast to tens of millions of people.
Q: One criticism you have of the idea of breaking up the platforms is that it would take too long and cases would be litigated for years. It seems like the middleware solution would also take years: you’d have to construct an entire industry of intermediaries, figure out how to regulate it, get the incentives right, and end up with viable companies.
That’s right. We have only scratched the surface of the work that needs to be done. What we hope to do in the coming months is to flesh this proposal out and come up with some prototypes both of the business model and the technical model for how you would open up the platform APIs.
One thing that gives us a little bit of hope is that the platforms themselves may actually like some version of this. Dick Costolo, the former CEO of Twitter, was on our opening panel, and he said that from Twitter’s standpoint this would be a godsend, because right now they’re put in a very uncomfortable position where they have to make sensitive political decisions about what is and isn’t acceptable speech. Whatever decision they make, they piss off half the country; if somebody else could be responsible for it, they may actually support some version of that. The biggest obstacle to a full-scale antitrust structural remedy is that the platforms are going to contest it like hell. Our model, maybe not so much.
Q: The report begins by describing a very acute immediate threat to liberal democracy, but the alternative that you advocate for would take years to formulate and implement. Is there some dissonance here in terms of urgency?
Self-regulation, with the platforms deciding curation standards on their own, is going to have to do for the time being. In our view, that’s not an adequate long-term solution in a democracy. This has been such a neuralgic issue because of the election we just had on November 3, and we’re not going to have a big national election for another couple of years, and not another presidential one for four. It does seem to me that in two years’ time, if this idea catches on, you could see movement in that direction, and in four years you could actually have something like this begin to fall into place. So I’m not sure that it’s completely outside the time scale of a solution that will do some good in our immediate situation. A structural breakup is more like a generation away, if it were ever to happen, which I really don’t think it will.
Q: The final section of the report defends the idea of a specialized regulatory agency. How do you see the role of this agency?
Many of the groups that have looked at these antitrust issues, including the Stigler Center, have recommended or suggested the need for a new digital agency, because a lot of the issues are really beyond the capacity of the current Justice Department and FTC to investigate. What’s the likelihood that the purchase of a startup will suppress competition in the future? That’s a very difficult empirical question to confront.
In the case of the middleware solution, you’d need a specialized capacity in a couple of respects. You’d need to define the technical parameters of opening up these APIs and what the platforms would need to do to meet the requirements. And then you’d need to make a very difficult set of economic decisions about how you’re going to fund middleware companies, and whether there are revenue-sharing models that will provide sufficient incentive for entrepreneurs to create this kind of company. That’s really not a capacity that any existing federal agency has right now.
Q: The latest agency we created was the CFPB, which has had an underwhelming track record and was quickly debilitated by the Trump administration. How do you see this new agency by comparison?
The big problem with new agency creation is regulatory capture. That’s been the perpetual bane of any effort to regulate anything in the United States. So you need to have this established by an administration and Congress that are fully aware of the dangers of politicization and of the need for regulatory independence and expertise. Had Trump won the election, we would have abandoned this solution, because it just wouldn’t have worked under a second Trump term, or it might have worked in a way that none of us would want: a completely politicized agency that simply did the bidding of the executive branch. But I do think that with the Biden administration, which is skeptical about the power of Silicon Valley, we may have a somewhat better chance of establishing a properly independent agency.
For more on this, listen to Luigi Zingales and Bethany McLean interview Fukuyama in this week’s episode of Capitalisn’t.