In Chapter 6 of our Digital Platforms and Concentration conference ebook, a research psychologist who, with an associate, discovered the search engine manipulation effect (SEME) in 2013—one of the largest behavioral effects ever identified—argues that Google’s search engine may have determined the outcomes of a quarter of the world’s elections in recent years.

Editors’ note: See also our 2017 conference volume Is There a Concentration Problem in America?

In recent years, my associates and I have quantified the extent to which online digital platforms can shift opinions and votes without people knowing this is occurring and without leaving a paper trail. Randomized, controlled experiments conducted with more than 10,000 people from 39 countries suggest that one company alone—Google LLC, which controls about 90 percent of online search in most countries—has likely been determining the outcomes of upwards of 25 percent of the national elections in the world for several years now, with increasing impact each year as Internet penetration has grown.

The Search Engine Manipulation Effect (SEME)

In a study published in the Proceedings of the National Academy of Sciences USA (PNAS) in 2015, we reported the discovery of what we called the search engine manipulation effect (SEME), which is one of the largest behavioral effects ever identified. The study showed that when undecided voters conduct online searches in which one candidate is favored in search rankings—that is, when high-ranking search results link to web pages that make that candidate look better than his or her opponent—the preferences of those voters shift dramatically toward the favored candidate after just one search—by up to 80 percent in some demographic groups.

This shift occurs because of the enormous level of trust people have in Google’s search results, which people believe are entirely impartial, unlike what they see on television or read in newspapers. Our research also demonstrates that this belief is reinforced by a daily regimen of operant conditioning in which routine searches for simple facts invariably generate the correct result in the highest-ranking search position. The strong trust in high-ranking search results impacts what happens when people conduct a search on a complex issue on which they are trying to formulate an opinion or make a decision: where to holiday, what kind of car to purchase, or even whom to vote for. When conducting an online search for information about such matters, people put inordinate trust in material that is ranked high in search results; indeed, 50 percent of all clicks go to the top two search results. We have also demonstrated that the shift in opinions and voting preferences increases when people are exposed repeatedly to differing search results favoring one viewpoint.

We have now demonstrated the power of search rankings to shift votes and opinions in the context of four national elections: the 2010 federal election in Australia, the 2014 Lok Sabha election in India, the 2015 general election in the United Kingdom, and the 2016 election for US president. One disturbing finding of such research is that people show little or no awareness that they are viewing biased search rankings—even when those rankings are strongly biased. In the Lok Sabha experiment, conducted with more than 2,000 undecided voters throughout India during the voting process, 99.5 percent of the participants in the study showed no awareness that they were seeing biased rankings. SEME’s virtual invisibility makes it an especially disturbing and dangerous form of manipulation, because when people are unaware that they are being influenced, they tend to believe that they are making up their own minds. Because search rankings are ephemeral and, more and more, customized to the tastes of the individual, they also leave no paper trail, making them nearly impossible for authorities to trace. Perhaps even more disturbing, we now know that the few people who can detect bias in search results shift even farther in the direction of the bias—possibly because they see that bias as a form of social proof.

Evidence for Favoritism in Search Results

Is there any evidence that Google’s search rankings are actually biased toward one candidate or another? Early in 2016 my team and I developed and deployed a system for tracking ephemeral search results on Google, Bing, and Yahoo, and we used this system to track election-related searches for nearly six months before the November election. Using this new system, we were able to preserve the results of 13,207 election-related searches, along with the 98,044 web pages to which the search results linked. From this archive we learned, among other things, that pro-Clinton bias was especially evident in Google’s search results, that bias appeared in all ten search positions on the first page of search results, and that pro-Clinton bias was greater for some demographic groups than for others.

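To make the archiving idea concrete, here is a minimal sketch in Python of how ephemeral search results might be captured and preserved for later analysis. It is not the system we actually deployed: the query list, user-agent string, and database path are illustrative assumptions, and a production crawler would also need to parse results, fetch linked pages, and respect rate limits.

```python
# A minimal sketch of an ephemeral-search-results archiver. The raw HTML of
# each results page is stored with a timestamp so that rankings can be
# reconstructed and audited later. Queries, engines, and paths are illustrative.

import sqlite3
import time

import requests

DB_PATH = "serp_archive.db"             # illustrative path
QUERIES = ["candidate a policies",      # illustrative election-related queries
           "candidate b policies"]
ENGINES = {
    "google": "https://www.google.com/search",
    "bing": "https://www.bing.com/search",
}


def init_db(path: str) -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS snapshots (
               captured_at REAL, engine TEXT, query TEXT, html TEXT)"""
    )
    return conn


def capture(conn: sqlite3.Connection) -> None:
    """Fetch each query on each engine and archive the raw results page."""
    for engine, url in ENGINES.items():
        for query in QUERIES:
            resp = requests.get(
                url,
                params={"q": query},
                headers={"User-Agent": "research-archiver/0.1"},
                timeout=30,
            )
            conn.execute(
                "INSERT INTO snapshots VALUES (?, ?, ?, ?)",
                (time.time(), engine, query, resp.text),
            )
    conn.commit()


if __name__ == "__main__":
    capture(init_db(DB_PATH))
```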

That Google sometimes favors one cause, candidate, or company in its search results is also indicated by a multiyear investigation by the European Commission. In June 2017, the Commission concluded that Google had systematically favored its comparison shopping service in its search results and that such favoritism did great damage to competing services. As a result, the Commission levied a $2.7 billion fine against Google, which Google has since paid. Both Russia and India have also levied fines against Google for displaying search results that favor Google’s products and services over those of its competitors. US courts, guided in part by Section 230 of the Communications Decency Act, have meanwhile given Google carte blanche to rank search results any way it pleases—even to demote or remove competing companies from its search results. Some courts have ruled that Google is simply exercising its “free speech” rights by doing so.

The Search Suggestion Effect (SSE) and Other Sources of Online Influence

In addition to continuing our research on SEME (which has now been replicated by at least two other research groups), we are investigating four similar effects—all of which, like SEME, shift opinions dramatically, invisibly, and without leaving a paper trail.

We will soon (in late April 2018) present the results of a new series of experiments demonstrating the power of what we are calling the “search suggestion effect” (SSE). The experiments show the power that search engines have to begin shifting opinions from the very first character people type into a search bar. They show, specifically, that search suggestion manipulations can shift a 50/50 split among people who are undecided on an issue to an astounding 90/10 split after just one search—again, with no one being aware that he or she has been manipulated. They also explain, among other things, why Google was apparently suppressing negative search suggestions for Hillary Clinton during the summer of 2016. Our experiments show that a single negative (“low valence”) search suggestion can attract 10 to 15 times as many clicks as a neutral or positive suggestion—yet another example of what is known in several academic fields as “negativity bias.” Differentially suppressing negative search suggestions for one candidate (or one cause, or one company) is, it turns out, an easy way of directing millions of people toward positive information about the candidate you support and toward negative information about the opposing candidate.
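
A toy calculation helps illustrate the mechanism. Assuming, purely for illustration, that a negative suggestion draws about ten times the clicks of a neutral or positive one (a weight chosen only to mirror the 10-to-15x figure above, not data from our experiments), suppressing that single suggestion eliminates the largest channel through which searchers reach negative material:

```python
# Toy model of "negativity bias" in search suggestions. Assumption: a negative
# ("low valence") suggestion draws roughly 10x the clicks of a neutral or
# positive one; the weights are illustrative, not experimental data.

NEGATIVE_WEIGHT = 10.0   # assumed relative click weight of a negative suggestion
OTHER_WEIGHT = 1.0       # assumed relative click weight of any other suggestion


def negative_click_share(suggestions: list[str]) -> float:
    """Expected fraction of suggestion clicks that land on negative items."""
    weights = [NEGATIVE_WEIGHT if s == "negative" else OTHER_WEIGHT
               for s in suggestions]
    negative = sum(w for s, w in zip(suggestions, weights) if s == "negative")
    return negative / sum(weights)


# Four suggestions shown for a candidate, one of them negative:
unsuppressed = ["negative", "neutral", "neutral", "neutral"]
# The negative suggestion has been suppressed and replaced with a neutral one:
suppressed = ["neutral", "neutral", "neutral", "neutral"]

print(f"clicks reaching negative content, suggestion shown:      "
      f"{negative_click_share(unsuppressed):.0%}")   # ~77%
print(f"clicks reaching negative content, suggestion suppressed: "
      f"{negative_click_share(suppressed):.0%}")     # 0%
```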

Protecting Users from High-Tech Manipulation

In late 2017, my associates and I published a study showing how alerts and warnings can be used to suppress SEME to some extent. We do not believe, however, that alerts, warnings, or education of any sort can suppress SEME and similar manipulations completely. We also do not believe that laws, regulations, or antitrust actions will be able to protect users adequately from such manipulations. Legal apparatuses move too slowly, in our view. Driven by recent revelations about the dissemination of fake news stories and Russian-placed ads on digital platforms before the 2016 election, some authorities are now turning their attention toward the corporate policies and algorithms that allowed such things to occur.

But technology moves so quickly, in our view, that regulators and lawmakers will always be years behind the curve. Just as we are learning the full extent of the power Google has to control opinions with its search engine, the company is moving beyond the search engine as its model of surveillance and control toward more powerful technology: encouraging people to place “Home” devices in every room of their domiciles. These devices listen around the clock and give people simple answers to their questions.

We are now in the process of quantifying the impact of giving people those simple answers—an effect we call the “answer bot effect” (ABE). Because Google now supplies the content that Siri delivers to Apple customers, Google’s ability to shift opinions, purchases, and voting preferences will continue to expand in the coming months and years, even if its search engine becomes regulated to some extent.

We believe that the only effective way of protecting people from the extraordinary manipulations that new technologies are making possible is by establishing a worldwide network of passive monitoring systems—in other words, of scaling up the type of tracking system my team and I developed in 2016. The European Commission recently awarded €10 million to two consulting firms to develop a system for monitoring Google’s search results, specifically to track compliance with Commission orders. I am now working with colleagues from Princeton University, UCLA, MIT, King’s College London, and elsewhere to implement large-scale systems that will monitor a wide range of online ephemeral stimuli, not just search results. With systems like this in place, it will be possible to detect online threats swiftly, with reports issued as appropriate to journalists, legislators, regulators, law enforcement agencies, and antitrust investigators.
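
As an illustration of the kind of automated check such a monitoring network might run, the sketch below scores an archived results page using labels that human raters (or a classifier) would assign to each result. The position weights, labels, and alert threshold are all assumptions for the example, not parameters of any deployed system:

```python
# Illustrative bias check over an archived first page of results. Each of the
# ten results carries a label ("favors_a", "favors_b", or "neutral"); a
# position-weighted score flags pages whose top results lean heavily one way.

POSITION_WEIGHTS = [0.30, 0.20, 0.12, 0.10, 0.08,
                    0.06, 0.05, 0.04, 0.03, 0.02]   # assumed click share by rank


def bias_score(labels: list[str]) -> float:
    """Positive values lean toward side A, negative values toward side B."""
    score = 0.0
    for weight, label in zip(POSITION_WEIGHTS, labels):
        if label == "favors_a":
            score += weight
        elif label == "favors_b":
            score -= weight
    return score


def flag_for_review(labels: list[str], threshold: float = 0.25) -> bool:
    """Flag a snapshot for human review when the lean exceeds the threshold."""
    return abs(bias_score(labels)) > threshold


snapshot = ["favors_a", "favors_a", "neutral", "favors_a", "neutral",
            "favors_b", "neutral", "favors_a", "neutral", "neutral"]
print(round(bias_score(snapshot), 2), flag_for_review(snapshot))   # 0.58 True
```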

Such systems, I believe, will force online monopolies to be accountable to the general public and, in so doing, will protect human freedom and the democratic system of government. Without such systems in place, I fear that both democracy and human freedom will become little more than illusions. As economist Kenneth E. Boulding warned in the 1950s, “A world of unseen dictatorship is conceivable, still using the forms of democratic government.” Are we already living in such a world?

Robert Epstein is a Senior Research Psychologist at the American Institute for Behavioral Research and Technology.

Disclaimer: The ProMarket blog is dedicated to discussing how competition tends to be subverted by special interests. The posts represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty. For more information, please visit ProMarket Blog Policy.