Traditional accounts of the growing power of large firms implicate weakened antitrust enforcement or political corruption. But in a recent book, economist James Bessen puts the blame on the way software has changed competition. By transforming how firms use information, software has made it much harder for new competitors to displace incumbents.
This article is adapted from The New Goliaths (Yale University Press, 2022).
The Economist called Clayton Christensen, who died in early 2020, “the most influential management thinker of his time.” Christensen’s seminal work is The Innovator’s Dilemma, published in 1997. In this book, he argues that good managers at leading firms can be “disrupted” by new technologies that they don’t see initially as a serious threat. By focusing on the needs of their most profitable customers, leading companies can miss new technologies that are initially cheap and inferior but that become better over time until firms using these technologies supplant the leaders. This concept of disruption and the belief that the pace of disruption is dramatically increasing have become core tenets of thinking about technology, especially among businesspeople. Yet a strange thing happened shortly after The Innovator’s Dilemma was published: the rate of disruption of market leaders declined sharply. Today the likelihood that a top firm in any industry will be displaced by a rival is less than half of what it was in the late 1990s.
The disruptive economy of the ’90s has been replaced by one where large, incumbent firms dominate—but why? One prominent line of thought argues that competition has declined because antitrust enforcement has been weakened. Another attributes the power of incumbents to “network effects” associated with digital platforms. Other theories implicate low interest rates, the China trade “shock,” and the rising influence of money and lobbying in U.S. politics.
But as I explain in my book, none of these accounts can explain the decline of disruption and its timing. For example, the rate at which dominant firms have acquired other companies has actually declined since the late 1990s, which suggests the decline of disruption starting around 2000 was not due to changing standards for merger review.
Instead, it is best explained by large firms’ investment in proprietary software and the unique form of competition that it allows. This is not just about economies of scale: software is allowing large firms to combine the advantages of scale with the advantages of mass customization in a way that was previously impossible. These firms don’t just produce more goods more cheaply, like the goliaths of old. They use software to offer more features, more variety, and more customization. They compete on complexity.
Competing on Complexity
One of the major achievements of economic analysis since World War II is the understanding that information—what economic actors know and what they don’t—has profound effects on the workings of the economy. Information is understood to play a role in why markets fail. It affects the boundaries and organization of firms, how contracts are made, how institutions are designed to provide the best incentives, and how regulation can be crafted. A casual look at the list of economics Nobel laureates shows that it is dominated by people who have contributed to the economics of information.
This research agenda began with a critique of socialist economic planning, initially by Ludwig von Mises and later by Friedrich Hayek. Socialists were advocating central planning boards that would control the economy by allocating resources to production and distribution. Mises and then Hayek argued that prices were needed to run an economy. In a seminal paper, “The Use of Knowledge in Society,” published in 1945, Hayek posed the question of how to organize an economy, arguing that a critical problem was information or knowledge:
“The peculiar character of the problem of a rational economic order is determined precisely by the fact that the knowledge of the circumstances of which we must make use never exists in concentrated or integrated form, but solely as the dispersed bits of incomplete and frequently contradictory knowledge which all the separate individuals possess. The economic problem of society … is a problem of how to secure the best use of resources known to any of the members of society, for ends whose relative importance only these individuals know. Or, to put it briefly, it is a problem of the utilization of knowledge not given to anyone in its totality.”
Moreover, the challenge was not just obtaining the knowledge but also inducing disparate economic agents to act in the socially optimal way. Socialist central planning might promise a fairer economy, but it would fail because planners could neither know the disparate and changing needs of a complex economy nor effectively guide disparate economic actors to address those needs.
Other economists further developed this notion and extended the analysis of information and knowledge into other areas. Two closely related problems of economic knowledge flow from Hayek’s question. First, much of this knowledge is broadly dispersed, highly local, and frequently changing, making it difficult or impossible for a central planner to acquire or for a single mind to know. Second, many economic actors won’t truthfully reveal their knowledge to planners or to other economic actors. For instance, a manufacturer has information about the quality of his product, but he may be reluctant to reveal that information to prospective customers, especially if the product’s quality is not good. This is known as private or asymmetric information (one party has more relevant information than the other), and it affects a wide range of economic interactions.
Modern computing has transformed the way firms use information, and it’s this shift that explains why large incumbent firms have become so dominant.
For example, the use of information (through software) explains a lot about the rise of chain stores and the subsequent rise of Walmart. Retailers seek to acquire the best quality goods from manufacturers at the best price. But they lack information about the quality of different manufacturers’ offerings. Manufacturers might have different reputations, but retailers basically have to learn the quality of goods through experience. They have to stock them, see how they sell, see how customers like the goods or if they return them. This learning is costly and time-consuming, especially when many new products are becoming available all the time. As a result, an independent retailer has to choose between offering a limited selection or stocking goods of uncertain quality.
Chain stores realize economies of scale through standardization, in addition to other economies, such as those affecting distribution costs. This means that chain stores can provide an assured level of quality and offer a somewhat greater selection of goods. Of course, this centralization of decision-making also has a downside: the chain ignores much local information, has a limited ability to tailor offerings to areas with varying tastes, and is slow to respond to local changes. The chain store model trades the ability to adapt to local information for a reduction in the costs of obtaining information about product quality. Chain stores and other large enterprises introduce an element of central planning into capitalism, and by doing so they suffer from Hayek’s problem of local information known only to dispersed individuals.
The Walmart business model also saves learning costs by sharing information about product quality between outlets, but because it uses information technology to decentralize decision-making, it can adapt better to local information. Store managers can adjust to local demand under the watchful monitoring of central headquarters, stores can share information about new products and changing demand, and the technology handles the complex distribution of a changing mix of goods to different stores quickly and efficiently. These advantages mean that Walmart stores can cost-effectively offer far greater selection, providing the quality products that consumers want.
Information technology is able to adapt to local information because it permits basic economies of scope. Well-designed software is modular; code to handle an additional product or an additional feature can be developed largely independently at relatively low cost compared to physical systems. With these scope economies, information technology breaks the deadlock between local information needs and the cost savings that come with standardization. This change to the economics of information fundamentally alters the economic order.
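To make the economies-of-scope point concrete, here is a minimal, hypothetical sketch (the category names, margins, and function names are illustrative, not drawn from the book or any real retail system): each product category registers its own pricing rule with a shared registry, so supporting an additional category is a small, largely independent addition rather than a change to existing code.

```python
# Hypothetical sketch of a modular product catalog: each category plugs a
# pricing rule into a shared registry, so new categories are added without
# modifying existing code. All names and margins are illustrative.
from typing import Callable, Dict

CATEGORY_PRICING: Dict[str, Callable[[float], float]] = {}

def register_category(name: str):
    """Register a category's pricing rule in the shared registry."""
    def decorator(fn: Callable[[float], float]) -> Callable[[float], float]:
        CATEGORY_PRICING[name] = fn
        return fn
    return decorator

@register_category("groceries")
def price_groceries(wholesale: float) -> float:
    return wholesale * 1.15  # thin margin, high turnover

@register_category("electronics")
def price_electronics(wholesale: float) -> float:
    return wholesale * 1.30  # higher margin, slower turnover

# Supporting another category later is an independent, low-cost addition:
@register_category("garden")
def price_garden(wholesale: float) -> float:
    return wholesale * 1.25

if __name__ == "__main__":
    for category, rule in sorted(CATEGORY_PRICING.items()):
        print(f"{category}: price for $10 wholesale = ${rule(10.0):.2f}")
```

The structure, not the arithmetic, is the point: the registry isolates each category's logic, which is the software analogue of the economies of scope described above.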
Walmart is not an isolated example. Other firms in other industries are using information technology to adapt to individual or local information. These capabilities allow firms to overcome the Hayekian dilemma, allowing them to address widely disparate needs and demands. In industries where firms differentiate on quality, those firms that are best able to manage complexity become dominant; other firms offer lower quality, fewer features, or less variety and remain smaller.
Of course, not all industries have these properties, and not all industries make the corresponding large investments in developing their own software. We can gain a rough sense of the size of this phenomenon by classifying industries according to their employment of software developers. Using the 2012 Economic Census, if we exclude industries where software is a major part of the product and count those that employ more than 5,000 software developers or where software developers comprise over 2 percent of the workforce, then 45 percent of industries are software intensive. But these industries tend to be large, accounting for 71 percent of revenues. Software-intensive industries comprise a very sizable chunk of the economy.
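As a rough illustration of that classification rule, the following hypothetical sketch applies the same thresholds in code; the data structure and the example figures are made up for illustration and are not the actual Economic Census data.

```python
# Hypothetical sketch of the classification rule described above. The field
# names and example figures are illustrative, not actual Census microdata.
from dataclasses import dataclass

@dataclass
class Industry:
    name: str
    total_employees: int
    software_developers: int
    software_is_the_product: bool  # e.g., software publishing itself

def is_software_intensive(ind: Industry) -> bool:
    """Exclude industries whose product is software, then flag those with more
    than 5,000 developers or where developers exceed 2 percent of employment."""
    if ind.software_is_the_product:
        return False
    developer_share = ind.software_developers / ind.total_employees
    return ind.software_developers > 5_000 or developer_share > 0.02

# Illustrative, made-up example:
retail = Industry("general merchandise stores", 3_000_000, 12_000, False)
print(is_software_intensive(retail))  # True, via the 5,000-developer threshold
```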
Reviving Disruption
The decline in disruption is not an inherent outcome of the new technologies of mass customization any more than the dominance of U.S. Steel at the beginning of the twentieth century was an inevitable result of scale economies in steel production. Rather, large firm dominance in both cases arose from specific policies and firm choices.
The central challenge for policymakers is to encourage the unbundling of these proprietary systems—dominant firms need to be discouraged from hoarding the code, data, and organizational capabilities that allow them to manage complexity. Antitrust authorities and courts have long used compulsory licensing of patents as a remedy for anticompetitive conduct, as a condition for a merger or acquisition, or where there is a pressing public need, such as the under-supply of a needed vaccine. The same approach could be applied to proprietary software systems, licensing the code and associated intellectual property, possibly under an open-source license.
In addition, orders and consent decrees could place key data in the public domain. In other cases, courts or antitrust regulators could order unbundling and the creation of an open API. Policymakers also need to preserve open standards for technology and encourage worker mobility since it allows technology skills to diffuse between firms.
A sustainable information economy involves more than a large share of people working with information; it is also one in which new knowledge is actively developed and widely shared. One can imagine an economic order populated by firms big and small: large platform companies alongside innovative small companies. To get there, we need policies that encourage open access to technology, even if that sometimes comes at the expense of some proprietary control.
For more from James Bessen on his new book, check out a recent episode of the Capitalisn’t podcast.