NewsGuard is a new startup that relies on reporting instead of artificial intelligence to rate the trustworthiness of news sites: “It became obvious to us that the platforms’ fumbling around with artificial intelligence cannot solve the problem.”
With less than two months to go until the US midterm elections, social media companies want you to know they’re doing everything they can to stem the proliferation of false news stories and propaganda on their platforms.
In a Washington Post op-ed and a subsequent 3,200-word Facebook post, Mark Zuckerberg promised that Facebook is “better prepared” to fight the spread of misinformation than it was in 2016. One of its main initiatives to combat the spread of false news, however—a fact-checking partnership with five news outlets: four nonpartisan organizations and the conservative magazine The Weekly Standard—came under fire this week, after the progressive news site ThinkProgress accused Facebook of allowing The Weekly Standard to effectively censor one of its stories by labeling it “false,” a rating that costs a story 80 percent of its future traffic.
Separating true from false within an endless stream of content is a daunting, virtually impossible task, particularly when it involves fact-checking individual memes. It also raises numerous ethical questions—especially since each platform has its own opaque algorithms for determining, based on unspecified criteria, what constitutes “fake news.” In recent months, the opacity of these mechanisms has led to repeated charges of bias, particularly from conservatives.
Meanwhile, multiple reports show that for all of the platforms’ efforts, fake news stories are still rampant on social media, leaving users increasingly vulnerable to manipulation. Earlier this year, a survey by Gallup and the Knight Foundation showed that Americans feel the news ecosystem is “becoming harder to navigate.”
Enter NewsGuard, a new startup that aims to combat fake news using a previously underutilized tool: journalism. Launched earlier this year by veteran journalists and entrepreneurs Steven Brill and Gordon Crovitz, NewsGuard rates the trustworthiness of roughly 7,500 news sites—from major national news outlets to blogs—that, according to the company, account for 98 percent of the news content consumed and shared by Americans online. It does so by employing dozens of reporters and analysts who investigate the practices and content of each site and then sort them into two classes: a “green” icon for sources that generally uphold basic standards of transparency and accuracy in reporting, and a “red” icon for sites that routinely don’t.
These icons are accompanied by what the company calls “nutrition labels”: reviews, written by its reporters, of each site’s track record and policies based on a set of nine criteria that, taken together, indicate a site’s trustworthiness. Among these criteria are an outlet’s corrections policy, disclosure of ownership and funding, the truthfulness of its headlines, and whether it correctly labels advertising. Each site’s overall rating is then calculated according to a points system, which the company details in full on its site. Each “nutrition label” includes a detailed explanation of why the site received a certain rating, comments from the rated site (if any), a full list of sources, and links to the bios of the analysts who rated it. The New York Times, for instance, is rated “green,” as are HuffPost, The Intercept, Fox News, and the Daily Caller. Breitbart, RT, Infowars, and Sean Hannity’s website are rated “red.”
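To make the points system concrete, here is a minimal sketch of how a criteria-based rating could be computed. The criterion names, point weights, and passing threshold below are illustrative assumptions made for this sketch, not NewsGuard’s published rubric.

```python
# Illustrative sketch of a criteria-based points system that sorts a news
# site into a "green" or "red" class. The criteria, weights, and threshold
# are hypothetical stand-ins, not NewsGuard's actual rubric.

CRITERIA_POINTS = {
    "does_not_repeatedly_publish_false_content": 22,
    "gathers_and_presents_information_responsibly": 18,
    "regularly_corrects_errors": 12,
    "handles_news_vs_opinion_responsibly": 12,
    "avoids_deceptive_headlines": 10,
    "discloses_ownership_and_financing": 7,
    "labels_advertising": 7,
    "reveals_who_is_in_charge": 6,
    "provides_author_information": 6,
}  # weights sum to 100

GREEN_THRESHOLD = 60  # assumed cutoff between "green" and "red"


def rate_site(criteria_met: set) -> tuple:
    """Sum the points for every criterion the site satisfies and map
    the total onto a binary icon."""
    score = sum(points for name, points in CRITERIA_POINTS.items()
                if name in criteria_met)
    icon = "green" if score >= GREEN_THRESHOLD else "red"
    return score, icon


# A site that meets every criterion except two still rates "green":
met = set(CRITERIA_POINTS) - {"labels_advertising", "provides_author_information"}
print(rate_site(met))  # -> (87, 'green')
```

One consequence of collapsing a weighted score into a binary icon is that a site near the threshold can change its class by satisfying a single additional criterion, which is exactly the kind of “gaming” Brill welcomes later in the piece.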
“It’s true that the advertising-based business model created this problem, but I think at this point the platforms recognize that they have a real economic incentive to create a safer, better informed environment.”
Brill is the founder of The American Lawyer magazine, Court TV, and the Yale Journalism Initiative and the author of 2015’s “America’s Bitter Pill.” Crovitz is an entrepreneur and the former publisher of the Wall Street Journal. In 2009, the two founded Journalism Online, whose Press Plus tool helped news sites set up paywalls (they sold the company in 2011).
NewsGuard, they said in a recent conversation with ProMarket, is the result of their deep skepticism toward the methods Silicon Valley companies use to combat fake news, which rely mostly on artificial intelligence that can be easily gamed.
“It became obvious to us that the platforms’ fumbling around with artificial intelligence cannot solve the problem of unreliable or false news because the goal of fake news is to look and feel and act like real news,” said Brill.
Case in point: The Denver Guardian, a fake news site that claimed to be “Denver’s oldest news source,” even though it was set up shortly before the 2016 presidential election solely for the purpose of spreading a single false story alleging that an FBI agent investigating Hillary Clinton had been murdered. The story was completely bogus but went viral, at one point reportedly gaining 100 shares per minute.
“If you decide to come up with a new website called The Denver Guardian, you want to make it look like The Denver Post, which is a real newspaper. To a machine they look identical, so the AI is challenged in this area,” said Crovitz.
These days, with fake news sites like The Denver Guardian popping up on a near-daily basis, NewsGuard aims to identify and rate these sites in real time. “We have what we call a ‘SWAT team’ of analysts who will get alerts to stories that are trending on social media and come from a website we haven’t rated yet,” said Crovitz.
Rating the general trustworthiness of news outlets, argued Brill, is much more feasible than fact-checking individual articles. “You’re covering a lot more ground in a lot more reasonable context if you do sites,” he said. “The downside is you don’t get the Judith Miller writing in the [New York] Times that Saddam had weapons of mass destruction.”
Backed by venture capital from investors such as Publicis Groupe and the Knight Foundation, NewsGuard is currently rushing to complete its ratings in time for the midterms. With a staff that currently includes 50 journalists (25 full-time and 25 freelance, working from offices in New York and Chicago), the company launched its first product—a browser extension for Microsoft Edge, Google Chrome, and Firefox—in August. The launch was accompanied by the announcement of a partnership with Microsoft for a news literacy program in public libraries. So far, the ratings are limited to English-language sites, but NewsGuard’s plans for after the midterms include expanding into Spanish-language news outlets and into other countries as well.
The extension is available to users at no charge. NewsGuard’s business model instead relies on licensing its ratings and nutrition labels to large tech companies, which would then incorporate them into their platforms (apart from its partnership with Microsoft, the company is currently negotiating with other tech platforms). While their advertising-based business model means that platforms like Google and Facebook have benefited enormously from the spread of fake news and other forms of divisive content, Brill and Crovitz believe the platforms now see misinformation as a serious threat. “The platforms, I think, have recognized that they have a trust issue and there are metrics that indicate less engagement and trust with those platforms,” said Crovitz. “It’s true that the advertising-based business model created this problem, but I think at this point the platforms recognize that they have a real economic incentive to create a safer, better informed environment.”
The platforms’ reluctance to become “arbiters of truth,” the two believe, provides a company like NewsGuard with a unique business opportunity. Facing growing backlash from both the right and the left whenever they make editorial choices, platforms could partner with a service like NewsGuard and get out of the news business altogether. “Instead of hiring thousands of people, they can hire us. They can’t hire real journalists, and they don’t want to. They don’t want to be editors. They’re even leery about dealing with Holocaust-denier sites. Their approach seems to be ‘should we allow it, or take it down?’” said Brill. “Or maybe they’ll secretly downgrade it, so that it doesn’t get shared as much. What does that even achieve? You spread the lie less efficiently? It’s ultimately untenable. It’s also untenable to tell newspapers and websites ‘We’re rating you for how trustworthy you are but can’t tell you what your rating is.’ Everything about us is totally transparent.”
He added: “Our approach is: Don’t take anything down—instead, explain and allow people to decide for themselves.”
Platforms have already outsourced some of their fact-checking to third parties. Aside from the aforementioned Facebook fact-checking partnership, Google has been using Wikipedia articles to discredit certain conspiracy theories. “To their credit, some of the platforms have at least used Wikipedia entries to give their readers more information. The problem with Wikipedia is that anyone can edit it and that only about 23 percent of the sites have Wikipedia entries,” said Crovitz.
“We want people to game our system. We want people to look at the criteria and say, ‘Gee, if only I can check that box I can get a higher grade.’ That’s good.”
Many outlets, trustworthy or not, don’t necessarily make public some of the information that NewsGuard relies on in its ratings. This is where the reporting comes in, and according to NewsGuard, it is already changing behavior. “In many cases, we’ve actually found when one of our analysts called to ask about one of the criteria, a website would say ‘Actually, we were thinking of improving our corrections policy, we’ll do it now,’” said Crovitz.
“We want people to game our system,” Brill chimed in. “We want people to look at the criteria and say, ‘Gee, if only I can check that box I can get a higher grade.’ That’s good. If Facebook has a system where they rank the reliability of news operations or how much they’re trustworthy, but they don’t tell you anything [about how the decision was made], how are you supposed to get better?”
Brill and Crovitz are particularly encouraged by the recent Gallup/Knight Foundation survey (as previously mentioned, the Knight Foundation is one of NewsGuard’s investors), which tested the effectiveness of NewsGuard’s rating system by asking participants to rate the accuracy of twelve news headlines—six real, six fake. The survey showed that the system is working: across the political spectrum, users perceived news sources as more accurate when they had a green icon next to them. “Even the most partisan people, when they see a red icon, they have less confidence in the [outlet]. They don’t want people to think of them as suckers for false information,” said Crovitz.
Nevertheless, any endeavor that rates the trustworthiness of online news sources these days risks being viewed as politically motivated. “Look, in a world in which 15 percent of the country thinks Barack Obama was born in Kenya and another 15 percent think that 9/11 was an inside job by the Bush administration, we’re not going to be accepted with ticker tape parades by 100 percent of the country. We’re ready for it,” said Brill. The way NewsGuard means to blunt attempts to paint it as politicized, he added, is to be transparent, readily admit when mistakes have been made, and constantly check for potential biases. “It doesn’t mean we’re going to be perfect. It doesn’t mean we’re not going to make mistakes. But certainly it means we’re going to be more credible than an algorithm that comes up with these decisions and can’t tell you how or why.”
Disclaimer: The ProMarket blog is dedicated to discussing how competition tends to be subverted by special interests. The posts represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty. For more information, please visit ProMarket Blog Policy.