Conflicts of interest are a serious problem in scholarship. Transparency and discounting, while necessary, are insufficient to protect the marketplace of ideas. Why? Founder effects and the dilution of expertise, explain Maurice E. Stucke and R. Alexander Bentley. To protect the integrity of academia, we must also encourage the injection and consideration of new, contradictory, and unconflicted ideas.


We agree with United States Assistant Attorney General Jonathan Kanter that the industry capture of academia blurs the line between expertise and advocacy, which can compromise expert opinions and erode public trust across fields—in antitrust law, Federal Trade Commission (FTC) and Department of Justice (DOJ) actions, economic policy, and more broadly. As biased research proliferates, it can overshadow objective studies, fostering a race to the bottom where courts and regulators struggle to distinguish credible scholarship from industry-backed opinions. To mitigate this, John Barrios et al., in their 2024 working paper The Conflict-of-Interest Discount in the Marketplace of Ideas, recommend discounting conflicted research using a conflict-of-interest (CoI) metric.

We see the logic in this metric, but its assumptions could use a rethink. One of the central parameters in the metric is the expected frequency of conflicted studies, ranging from 0 to 100%. As Barrios et al. state, “The higher the frequency of conflicted papers, the more heavily discounted each individual conflicted paper becomes, compounding the overall loss of credibility in the field.”

This makes sense in principle, but how do we estimate the expected fraction of conflicted studies? In one study referenced by Barrios et al., the frequency of conflicted studies, 0.6, simply reflects 60% of authors on studies of monetary policy being affiliated with a central bank. The CoI might be direct, as in favoring conclusions that validate the bank’s actions, or indirect, in that the experts and their peers appointed by the bank already believed in policy interventions, as opposed to, say, their policy-critical colleagues who took jobs in academia. Either way, the bias of a determined minority can lead the majority in the wrong direction. We’ve all observed this as “siloing” in media, academic niches, and politics. Bias and CoI are self-reinforcing, becoming entrenched within organizations and research communities.
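To see why this parameter matters, consider a toy calculation of our own (a simple illustration, not Barrios et al.’s actual metric). If readers cannot verify, paper by paper, which studies are conflicted, a rational reader ends up discounting every paper in proportion to the share of the literature expected to be conflicted. The function name and variance numbers below are invented for illustration.

```python
# Toy sketch (ours, not the Barrios et al. metric): how the expected share of
# conflicted papers translates into a per-paper discount when readers cannot
# verify conflicts study by study.

def paper_weight(conflicted_share, noise_var=1.0, bias_var=4.0):
    """Weight a rational reader places on any one paper's finding.

    Expected error = sampling noise + expected bias, where the expected bias
    grows with the share of the literature assumed to be conflicted. The
    variance values are illustrative, not estimates.
    """
    return 1.0 / (noise_var + conflicted_share * bias_var)

for share in (0.0, 0.2, 0.6, 0.9):
    print(f"expected share conflicted = {share:.0%} -> per-paper weight = {paper_weight(share):.2f}")
```

Under these assumptions, raising the expected share of conflicted papers from 20% to 60% roughly halves the weight a reader gives any single study, which is the compounding loss of credibility the quotation describes.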

But beneath each silo lies a deeper, temporal dimension. Bias in the marketplace of ideas is not only shaped by present influences but also by the ways ideas evolve over time. Ideally, research and policy progress by selecting and refining the best ideas, building on past knowledge—a process often likened to “standing on the shoulders of giants.” While this suggests cumulative evolution, it also includes random deviations that can compound over generations. Thus, besides the frequency of conflicted papers, one must also consider how founder effects and the dilution of expertise can increase the harm from conflicted, biased papers.

Underneath the Silos

A founder effect occurs when an initial idea or influential work disproportionately shapes a scholarly field’s development, causing subsequent research to focus on a narrow subset of the original concepts and limiting the exploration of alternative perspectives.

Legal and technical expertise, science, and public policy are subject to founder effects because they operate on precedent. Precedent can resemble the game of “telephone,” where a message changes as it’s passed along, often losing important details and retaining only attention-grabbing elements. Social psychology experiments, known as transmission chains, show how this kind of message distortion occurs. Mistakes and biases can become embedded as the message is transmitted, evolving into something that is a product of both the original and its transmission process. In every run of telephone, details of the original are winnowed out, but what gets lost is unique to each run.

Through this founder effect, the message becomes a skeleton of the rich story that it began with. Each founder effect is unique, so a second transmission chain would produce a different simplification of the original, retaining a sub-sample of details. The Origin of Species, for example, is often distilled by different groups into a few buzzwords, with each group selecting its own set to reflect its particular view of evolution. This process filters out the full theory as well as its context—evolution is attributed to Charles Darwin, even though Alfred Wallace and others were developing the same ideas in 1858.

We can see founder effects in academic research when seminal papers shape subsequent research. Early adopters cite the disruptive paper, later studies build on those early adopters, and still more studies build on them in turn. Imagine a layered network, in which each “generation” (layer) of research draws from the previous generation. The result of this branching, path-dependent growth of the literature is a narrower set of ideas representing only a fraction of those in the seminal paper—a founder effect.
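A stylized simulation of our own (hypothetical numbers, not data from any particular literature) makes the winnowing concrete. Suppose a seminal paper contains twenty distinct ideas, each paper in a new layer cites a few papers from the layer before it, and each carries forward only a sample of the ideas it finds there:

```python
import random

random.seed(1)
SEMINAL_IDEAS = list(range(20))  # suppose the seminal paper contains 20 distinct ideas

def next_layer(previous_layer, n_papers=10, cites_per_paper=3, ideas_per_paper=6):
    """Each new paper cites a few papers from the previous layer and carries
    forward only a small sample of the ideas found in those citations."""
    layer = []
    for _ in range(n_papers):
        cited = random.sample(previous_layer, k=min(cites_per_paper, len(previous_layer)))
        pool = sorted(set().union(*cited))
        layer.append(set(random.sample(pool, k=min(ideas_per_paper, len(pool)))))
    return layer

# Generation 1: early adopters each pick up 6 of the seminal paper's 20 ideas.
layer = [set(random.sample(SEMINAL_IDEAS, 6)) for _ in range(10)]
for generation in range(1, 7):
    surviving = set().union(*layer)
    print(f"generation {generation}: {len(surviving)} of {len(SEMINAL_IDEAS)} ideas still in circulation")
    layer = next_layer(layer)
```

In this model the set of surviving ideas can only shrink from one generation to the next, and which ideas survive differs from run to run: each branch of descendants carries its own narrower subset of the original, which is the founder effect described above.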

In several case studies, we observed these founder effects by tracking both topics (determined by word frequencies) and citation statistics of an academic paradigm that traces back to a highly cited seminal work. In each case, the topics of the seminal paper became peripheral to the subsequent publications within the paradigm. There was no conflict of interest. Some topics were rather technical or abstract. But one sees the cascading of variable samples of the original paper. Early adopter papers selected different subsets of ideas and references from the seminal paper, and the generations that followed them branched off into different subgenres. As part of this branching process, the seminal paper is left behind and can appear quite different from the perspective of its descendant niche.

In theory, every scholar within that niche might have no conflict of interest—in terms of career incentives in working for a central bank, for example—and yet the bias can be 100% because the branch is founded on a small, peripheral yet heavily biased portion of the original paradigm.

The Implications of Founder Effects on Bias

Founder effects will be significant in fields that rely heavily on proprietary data, including empirical economics. The entity that controls the data can influence the innovation and research paths. The entity grants limited access to its proprietary data to researchers with kindred interests, whose original, yet heavily biased, studies thereafter shape entire branches of research through founder effects. 

Unaware of the bias in the original source, successive generations of research may continue in the biased direction set by the original paper. Furthermore, as research layers build up, later papers tend to cite the most recent work and neglect the original, so the biases in the original work are no longer scrutinized and discounted. So, even though readers might have discounted the original source had they been notified of the conflict, they won’t have this opportunity when they review the later works. Nor will readers likely have the time or inclination to peel back the layers of scholarship to arrive at the original biased source.

In law, initial biases in legal interpretations have permeated layers of case law, often cited indirectly by courts with no reference to the original, controversial source. Founder effects can exist in law because courts regularly cite earlier legal decisions when extracting subsets of legal principles from the prior case law. Before 1979, for example, the lower courts steadfastly refused “to heed arguments premised solely on the theory of consumer welfare.” That changed in 1979 when the U.S. Supreme Court first mentioned in Reiter v. Sonotone Corporation that the federal antitrust laws were a “consumer welfare prescription.” The Court derived this not from the federal antitrust laws themselves, but from Robert H. Bork’s 1978 book, The Antitrust Paradox. Even as scholars pointed out the flaws in Bork’s ideological concept of consumer welfare, over 400 court decisions thereafter cited antitrust’s consumer welfare standard—but only 16 cases cited Bork’s Antitrust Paradox as the source. Whereas antitrust scholars steeped in antitrust history might know the original source and its flaws, other scholars likely would not.

So, bias coupled with founder effects can have several adverse effects.

First, the scholars in that field can propagate the biased paper without the degree of skepticism warranted by the source’s conflict of interest. Indeed, Barrios et al. found that economic experts were less skeptical of a colleague’s conflict of interest than were scholars in other fields or the public.

Second, courts rely on these first-line scholars in the field to exercise that skepticism and weed out (or at least appropriately discount) the bias in the research. Under the Frye evidentiary standard for admitting expert testimony, which some jurisdictions, like New York, employ, the court considers whether the technique or theory has been generally accepted by scholars in that field. Thus, the immediate scientific community serves as the primary gatekeeper in assessing the novel theory or insight. Even in other jurisdictions that rely on the multi-factor Daubert test, the immediate scientific community plays an important (but not determinative) role in assessing the novel theory.

Third, by the time the biased scholarship ripples to other fields, the scholars in these other fields will not be able to easily identify and appropriately discount the bias in the original paper. Instead, by then, that source paper will be many layers removed. So, in effect, bias can infect the research of subsequent unbiased scholars.

The Dilution of Expertise

Given founder effects, it is not feasible to peel back the layers to get to the source and assess its biases. This is especially true when the branching process leads to explosive growth in publications and an overload of information, which further fragments the field and obscures key insights.

But even for unconflicted, original works, the nuances of the original idea, including the fruits and insights, can be lost. This is what we call the Dilution of Expertise. Dilution of expertise occurs as an original, innovative idea becomes oversimplified as it is copied over time. For popular ideas, the legions of copiers overwhelm the limited supply of experts who invented them. The supply of fresh, expert input is diluted, like replenishing a pitcher of lemonade with water until the tartness is lost.

Dilution of expertise follows a characteristic pattern. A small community of knowledgeable, often passionate, experts develops a new paradigm or innovation. In some instances, their seminal idea is copied and recopied, resulting in an exponential increase in the number of copies. But when the number of copiers dwarfs the original supply of subject experts, any slight variations on the formula become lost in the complexity, and as the product space is overwhelmed, the ensuing ideas become a shadow of the original.

One example of the dilution of expertise involves Atari 2600 games, which, some of you might recall, boomed in popularity and sales in the early 1980s. Each exciting Atari game, like Pitfall or Combat, was sold as a physical cartridge that contained kilobytes of code—less than a typical formatted email today. As the formulaic code was copied and lightly modified by new sellers getting in on the gaming boom, the lexical diversity (a measure of how varied a text’s distinct elements are) of the game code decreased. This market saturation resulted in the crash of 1983. By 1985, revenue for all home video games had collapsed. It’s a general pattern that can be measured in other text-based products that boomed in the last decade, including Reddit posts and cryptocurrencies.
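Lexical diversity can be measured in several ways; the sketch below uses a simple type-token ratio (the share of distinct tokens among all tokens) on two invented code snippets, purely as an illustration rather than the measure used in the Atari analysis.

```python
def type_token_ratio(text: str) -> float:
    """Lexical diversity as the share of distinct tokens among all tokens."""
    tokens = text.split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

# Invented snippets: an "original" with many distinct routines versus a
# "knockoff" that repeats the same few instructions.
original = "LDA #$00 STA $02 JSR draw_player JSR swing_vine JSR scroll_jungle BNE game_loop"
knockoff = "LDA #$00 STA $02 LDA #$00 STA $02 LDA #$00 STA $02 JSR game_loop"

print(f"original: {type_token_ratio(original):.2f}")  # higher diversity
print(f"knockoff: {type_token_ratio(knockoff):.2f}")  # lower diversity
```

The knockoff scores lower because it reuses the same handful of tokens; at the scale of a whole market, a falling average score is one way to see the copiers crowding out the original experts.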

Barrios et al. use generative pre-trained transformer (GPT) models as an alternative benchmark for their results, on the assumption that GPT models can act as rational economic agents. Unfortunately, AI is also susceptible to this dilution of expertise. This is already being seen in large language models and chatbots, which quickly produce gibberish when trained on AI-generated content. As more garbage flows in, more garbage flows out. Using ChatGPT for discourse and qualitative ideas often yields biased results, while using ChatGPT to write computer code can be phenomenally efficient. The training data are of low quality in the first example and highly vetted and robust in the second. Maintainers of programming libraries now insist that AI-generated code be double-checked by a person.

The Implications of the Dilution of Expertise on Bias

In a classic social psychology experiment on conformity by Solomon Asch, individuals were shown a reference line and three comparison lines and asked which of the three matched the reference line in length. When asked independently, nearly all participants gave the correct response. But when three to five confederates in the study were asked first and deliberately gave the wrong answer, many participants conformed to the flawed answer. Indeed, some of us might have experienced this at a conference, where, unbeknownst to us, a dominant firm funded the event and the research of the other speakers. Here the biased speakers can pressure the original speaker to conform. Even if that fails, the biased speakers can influence the scholars, enforcers, and judges in the audience.

Conversely, the good news is that re-injecting a small amount of expertise into the evolution of ideas may have a substantial beneficial effect. The conformity that misguides people can be dissipated by one knowledgeable individual confidently stating the obvious. As Asch found, when one confederate went against the majority, conformity by the test subjects dropped by as much as 85% (the flip side of this is known as “collective non-compliance”).

More generally, a complement to the CoI discounting approach is re-injecting expertise into the marketplace of ideas. Numerous experiments show that when individuals copy those around them, the group can be directed towards a rational goal even if only a few of its members have a clear direction. Even when each fish is merely swimming in the same direction as the other fish around it, the introduction of one robotic fish, swimming unerringly towards a target at the other end of the pool, can bring the whole school with it.
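A minimal simulation of our own (hypothetical numbers, loosely in the spirit of collective-motion models rather than any specific experiment) shows the mechanism: naive agents simply copy the group’s average heading, while one informed agent keeps pointing at the target.

```python
import math
import random

random.seed(0)

def circular_mean(angles):
    """Average heading of a set of angles on the circle."""
    return math.atan2(sum(math.sin(a) for a in angles),
                      sum(math.cos(a) for a in angles))

N_NAIVE = 20
TARGET_HEADING = 0.0  # the one "robotic fish" always points this way
headings = [random.uniform(-math.pi, math.pi) for _ in range(N_NAIVE)]  # start disordered

for _ in range(200):
    group = circular_mean(headings + [TARGET_HEADING])
    # naive agents copy the group's average heading, with a little noise
    headings = [group + random.gauss(0, 0.05) for _ in range(N_NAIVE)]

print(f"final group heading: {circular_mean(headings):+.2f} rad "
      f"(target = {TARGET_HEADING:+.2f} rad)")
```

Under these assumptions, a single consistently oriented agent is enough to pull the group’s average heading close to the target, much as a single dissenting confederate broke conformity in Asch’s experiment.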

Conclusion

Founder effects and diluted expertise create a reinforcing cycle that worsens bias, requiring proactive efforts to counterbalance these effects. Continually re-injecting unconflicted expertise into the marketplace of ideas should help, but this is easier said than done. It demands proactive measures from academic institutions, market participants, and legal bodies to safeguard the integrity of the marketplace of ideas. Absent such efforts, founder effects and the dilution of expertise can make bias far worse for courts and agencies. They may think they are relying on actual objective expertise—as that expert seemingly is unconflicted—but the reality is otherwise.

Authors’ Disclosures: The authors report no conflicts of interest. You can read our disclosure policy here.

Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.