Online influencers aren’t just in the business of promoting products anymore. New research finds that micro-influencers are increasingly being used to spread political messages and disinformation, often without disclosing who paid for them.
The 2020 election has been a busy time for Samuel Woolley, an assistant professor of journalism and project director for propaganda research at the Center for Media Engagement (CME) at the University of Texas at Austin. His research focuses on propaganda, emerging media, and the ways in which political groups leverage digital tools to try to manipulate public opinion. After spending months tracking down fake news and disinformation campaigns, he is worried that “we have outdated ways of understanding propaganda and disinformation.”
“The thing that concerns me the most about our current battle against disinformation is that we’re much too focused on the machinations of nation-states like China and Russia and not concerned enough about, say, right-wing extremism, or white nationalism, or the individual use of social media platforms for the spreading of defamation, harassment, and hateful speech against protected groups,” he said.
In a recent paper, Woolley and his co-authors explored the role that paid micro-influencers—those with 25,000 followers or fewer—play in spreading disinformation and political messages. Both Democrats and Republicans have worked with influencers during the 2020 campaigns, and politically aligned influencers have moved together into so-called hype houses, like Conservative Hype House and Republican Hype House, to collaborate on content for their hundreds of thousands of followers. When former New York Mayor Michael Bloomberg launched his ill-fated campaign for president, his campaign offered influencers $150 to post about why they’d vote for him. Unlike some other posts, influencers working with Bloomberg’s campaign disclosed that their posts were paid for by adding a disclaimer like “#sponsored by @mikebloomberg” to their captions. Undisclosed posts in support of candidates and certain issues could help manufacture popularity, similar to the way that label-funded payments to radio DJs in the payola scandal helped certain “hit” songs climb the charts.
Concerns about transparency, lack of disclosure, and the spread of disinformation have spurred calls for regulations from experts like Woolley. Even as this practice comes under scrutiny, marketing experts believe that influencers are underutilized when it comes to political campaigns and expect the trend to grow in future elections.
ProMarket recently discussed the rise of micro-influencers, the dangers of disinformation, and potential regulations with Woolley, whose new book The Reality Game: How the Next Wave of Technology Will Break the Truth was released earlier this month.
Editor’s note: This interview was conducted before President Donald Trump was banned by Facebook, Twitter, and YouTube, and before Apple and Google took action to remove Parler from their app stores.
[The following conversation has been edited for length and clarity.]
Q: When people think of disinformation, the first couple of things that come to mind are Facebook and Twitter. But it’s not really limited to just that. What have you seen on other platforms?
There’s a common assumption that most of the disinformation and political propaganda that society experiences online comes from Facebook or YouTube or Twitter. In fact, disinformation is spread across the internet. We see disinformation on smaller social media sites, from Parler to Gab, and even Pinterest or LinkedIn. Oftentimes, some of the smaller platforms, including Pinterest, are particularly good at moderating this disinformation on their platforms. But many of the other smaller platforms on which disinformation and propaganda circulate have a very, very difficult time actually moderating this kind of content. And while we’re shining our spotlight upon Facebook, Twitter, and YouTube, and while those companies are actually doing, relatively speaking, quite a lot to try to combat these problems, many other social media sites simply don’t have the resources or the time or the know-how to do this.
Q: Why is Pinterest better at moderating disinformation?
I think Pinterest is better because it’s a very particular type of social media. There are big differences across social media, from Facebook to Twitter to Pinterest. Pinterest is primarily an image database. The majority of users on Pinterest use it for lifestyle reasons—for design, for art. They curate content related to things that they think are cool, basically. That being said, Pinterest can also be used by people for political purposes—for instance, for posting political memes, for posting disinformative pictures or things like that. Partly because Pinterest hasn’t had a lot of attention, but also because it is a particular kind of platform, it has been able to be very clear and decisive in its content moderation choices. Pinterest has very strongly moderated anti-vaccine content. It has also strongly regulated and removed QAnon content. Other platforms, especially some of the big ones, have for a long time waffled on this content. And potentially part of the reason they have waffled is that they are afraid of criticism from the American public, not to mention lawmakers who hold free speech as a primary ideal.
Q: Do you think that one of the reasons that platforms are tackling disinformation more quickly is because they’re worried that if they don’t do it themselves, the government might step in and impose certain regulations on them?
I think that social media companies have been fearful of government regulation since at least 2016. My own team has been sharing research into the problems of disinformation and propaganda on social media platforms with these companies since at least 2013. So they’ve had good knowledge of the problem. Indeed, there has been research into this phenomenon from the computer sciences and other areas of academia since 2010. And so ignorance is no excuse for these companies. They’re some of the most powerful companies in the world, and they should have known that this was going on. But yes, in some ways, they’re trying to do some of this now, because they do fear government regulation.
At the same time, you also have Facebook publishing full-page ads in The Economist, saying things like: we’re open to government regulation. And so I think it’s gotten to a point now where what we’re seeing, i.e., the regulation of Donald Trump’s speech, is actually happening more because of the content—he is spreading misinformation about the integrity of the voting system—rather than it being about just censoring Donald Trump specifically. Donald Trump has spread many, many lies on Twitter, and this just happens to be the first time that [many] of those lies are actually being, in some way, shielded or censored by Twitter.
Q: Your research finds that there is now an overlap between lifestyle influencers who promote products like candles or yoga clothes and political influencers. Can you tell me a little bit about that?
We’ve seen a progression away from more impersonal methods of manipulation online, such as bots or sock puppet profiles, and toward a more authentic, what marketers call “relationally organized,” mode of reaching out to voters. Influencers are key to that strategy. The influencer apparatus, obviously, has existed for several years and has been used to sell many, many commercial products. And now it is being leveraged by political groups on both the right and the left in the United States in attempts to sell politics.
So, in the case of our research, we’re looking into the ways in which influencers—particularly micro- or nano-influencers, who have followings of under, say, 25,000 or 10,000 on platforms like Instagram—are being paid by super PACs or by companies employed by political organizations in order to spread particular types of messaging, giving it the illusion of more authenticity and a more personal nature. And the idea that the political campaigns have is that these people already have a built-in audience that works. So unlike bots, which can sometimes speak into the ether, these people are actually speaking to captive audiences, oftentimes in particular states where politicians have a very big interest in getting more votes.
Q: Can you give me an example? Say my neighbor is an aspiring influencer and she has a following of 10,000 people. All of a sudden she starts talking about politics. Is it like that?
A great example of this is the healthy living community. There are many influencers who are part of this community who advocate for different diets, things like Whole30 or juice cleanses, or different methods of exercising. And what we’ve seen in that community in particular has been a sort of scope creep, where many of these people suddenly seem to be speaking about medical issues. In particular, they seem to be speaking out against mask wearing or against vaccines, things like that. And in some circumstances, we’ve been able to trace these kinds of behaviors back to paid campaigns.
Another example is influencers who are oriented toward nature and hiking and being outdoors and are now talking a lot more about global warming and their concerns about the climate. We’ve been able to track that back to firms like Main Street One and other organizations working on the left that actually pay those people to spread that kind of content.
Q: Whenever an influencer on Instagram is trying to convince me to buy something, they have to disclose that it’s an ad. If they’re getting paid by someone to spread a certain message, do they have to disclose that too? What are the regulations around this?
The terms of service of these social media platforms clearly state an expectation that people who advocate for corporate products and are paid to do so—whether it’s money or kickbacks in some other form, like free products—disclose that. The Federal Trade Commission has looked into this as well.
That being said, many of the paid political influencers that we see operating on [these platforms] do not disclose the fact that they’re working for political campaigns. And that’s because the regulation on this is very thin. Since Citizens United was decided, there have been a huge number of question marks about what is permissible in the political campaigning space, particularly when it comes to spending and communication. And so a lot of these folks, simply put, are not saying that they’re being paid. Some are. And there is messaging from the companies that employ them saying that they want them to disclose that they’re being paid. But in other circumstances, we’ve run across instances where the companies paying them kind of don’t want them to say that they’re being paid, because it makes what they’re saying seem less authentic.
Q: Do you think this is going to become a bigger trend in the future?
I think we’re going to see a huge problem with influencers who are being paid off-platform and then spreading their paid messaging on the platforms. This is going to be the newest, most pressing issue that the social media platforms and other online companies are going to have to deal with. And much like what we saw with bots and sock puppets, these tactics grow out of the commercial sphere: they begin as marketing tactics within the commercial sphere and then spread to the political sphere. And when they spread to the political sphere, they get much more dicey because of questions of free speech.
Q: Have any of these paid campaigns been linked to disinformation?
We haven’t been able to track an influencer campaign that’s spreading disinformation from start to finish back to a particular paid entity. That being said, we have seen the hallmarks of coordinated disinformation campaigns amongst influencers who are spreading conspiracy theories and other similar sorts of content. Those hallmarks are things like repetition of the same phrases, or the exact same kinds of ideas, across multiple accounts; particular timing of posts; or posting within a particular network. One of the areas that we’ve been looking into, for instance, is the rise of social media influencer pods, which are engagement groups organized so that members like one another’s content, retweet one another’s content, and share one another’s content, in order to give it the illusion of more popularity. And many of those pods, and the hype houses tied to them, are affiliated with particular political causes. And many of those folks are spreading very questionable concepts.
It’s a brave new world of this stuff. It’s the next frontier of propaganda. I thought it was bad when bots were doing it. This is worse.
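To make those hallmarks concrete, here is a minimal sketch (an editorial illustration, not a tool from Woolley’s research) of how two of the signals he describes, identical phrasing repeated across accounts and tightly clustered post timing, could be flagged in a batch of posts. The Post structure, field names, and thresholds are all hypothetical.

```python
# Minimal sketch of flagging two coordination hallmarks described above:
# the same phrase posted by multiple distinct accounts within a narrow
# time window. All names and thresholds here are hypothetical.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Post:
    account: str
    text: str
    timestamp: float  # seconds since epoch

def flag_coordination(posts, min_accounts=3, window_seconds=600):
    """Return (phrase, accounts) pairs where an identical phrase was
    posted by several distinct accounts within a short time window."""
    by_text = defaultdict(list)
    for post in posts:
        # Normalize whitespace and case so trivially reformatted copies match.
        key = " ".join(post.text.lower().split())
        by_text[key].append(post)

    flagged = []
    for text, group in by_text.items():
        accounts = {p.account for p in group}
        if len(accounts) < min_accounts:
            continue  # one account repeating itself is not cross-account coordination
        times = sorted(p.timestamp for p in group)
        if times[-1] - times[0] <= window_seconds:
            flagged.append((text, sorted(accounts)))
    return flagged
```

Heuristics like this only surface candidates: exact-match text is easy to evade with light paraphrasing, which is why researchers pair such signals with posting-network analysis and manual review.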
Q: Most people think of influencers as 20-year-olds moving together into houses in Los Angeles and getting in trouble. They don’t really think of them as potential tools to spread disinformation about health or politics, but that’s basically what you are saying is happening.
Exactly. This is a cause for serious concern, in part because some of these influencers don’t fit the traditional profile of what we think of as an influencer. What we’re actually talking about in these circumstances are people of all ages and from all sorts of backgrounds. At the same time, though, many of those 20-something influencers are also being paid by political organizations. And that’s even more worrying in some ways, because many of them have openly told us that they particularly target their political influence campaigns at people under the age of 18, and those are people who are forming their political identities and who are particularly susceptible to false information. Those are people we should be protecting. And that’s why regulation has to exist.