In the two years since fake news on the Internet became a full-blown crisis, Facebook has taken numerous steps to curb the flow of misinformation on its site. Under intense political pressure, it’s had to put up a fight: At the peak in late 2016, Facebook users shared, liked, or commented on stories from fake news sites an estimated 200 million times in a single month.
Now, in one of the first studies of its kind, Stanford economist Matthew Gentzkow is shedding light on a key question: Are Facebook’s countermeasures making a difference?
It looks like they may be, according to findings detailed in a new working paper by Gentzkow and co-authors Hunt Allcott and Chuan Yu.
From December 2016 to July 2018, Facebook user interactions with content from sites flagged as producers of false stories fell 65 percent. Over the same period, engagement with these same stories on Twitter actually rose, suggesting that the trend did not simply reflect declining interest in such stories or declining production by those sites. And the timing of the drop in engagements coincided with changes by Facebook such as updating its news feed algorithm, moving to block ads that promote deceptive content, and instituting a fact-checking program.
“These patterns suggest Facebook’s efforts to limit the spread of misinformation among its users may be having a meaningful impact,” says Gentzkow, an economics professor and senior fellow at the Stanford Institute for Economic Policy Research (SIEPR).
Gentzkow’s study is especially timely with the midterm elections less than two months away. The role that fake news sites may have played in the 2016 vote is at the heart of a federal investigation, several new state laws, and plenty of public hand-wringing. Earlier this month, executives from Facebook and Twitter testified before Congress for the third time in less than a year as part of the congressional inquiry into election meddling by Russia and other foreign groups.
While there is no proof that fabricated news stories — claims, for instance, that the pope had endorsed Donald Trump for president — changed votes in 2016, Gentzkow and Allcott previously found that many people who read deceptive articles in the run-up to the election believed them.
Gentzkow cautions that there are other potential explanations for the decline in the volume of fake news stories on Facebook through the end of July, and that the numbers may trend back up as the November elections approach.
Also, the sheer volume of fake news on Facebook remains high. “Despite evidence that Facebook’s efforts may be working,” says Gentzkow, “its users engaged with fake news sites 70 million times in July. That’s a very big number and tells me that Facebook continues to play an important role in the spread of misinformation online.”
The world of fake news
For the study, Gentzkow and his collaborators — Allcott of New York University and Yu, a Stanford PhD candidate and former predoctoral research fellow at SIEPR — analyzed 570 websites that were identified as producers of false news on lists posted by PolitiFact, BuzzFeed, and others. They then pulled traffic data from January 2015 to July 2018 and counted how many times Facebook users interacted with deceptive content — for example, by sharing or liking a story.
They found that monthly interactions with the fake news sites rose steadily on Facebook for two years before peaking at 200 million in late 2016 and falling to 70 million this summer.
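To make that tally concrete, here is a minimal Python sketch of the kind of aggregation the article describes: summing monthly Facebook engagements for stories published by sites on a flagged-domain list. The records, domain names, and flagged-site list below are invented for illustration and are not the study’s actual data or pipeline.

```python
# Minimal sketch of the engagement tally described above. The records,
# domain names, and flagged-site list are illustrative assumptions, not
# the study's actual data pipeline.
from collections import defaultdict

# One record per story: (domain, month, total Facebook engagements,
# i.e. shares + likes + comments as reported by a traffic-data provider).
stories = [
    ("fakesite-a.com", "2016-11", 1_200_000),
    ("fakesite-b.com", "2016-11", 800_000),
    ("realnews.com",   "2016-11", 5_000_000),
]

# The study's 570 flagged domains came from lists posted by PolitiFact,
# BuzzFeed, and others; two hypothetical stand-ins are used here.
flagged_domains = {"fakesite-a.com", "fakesite-b.com"}

# Sum engagements per month, counting only stories from flagged sites.
monthly_engagements = defaultdict(int)
for domain, month, engagements in stories:
    if domain in flagged_domains:
        monthly_engagements[month] += engagements

print(dict(monthly_engagements))  # {'2016-11': 2000000}
```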
The study doesn’t look at what exactly Facebook might be getting right in its attempts to combat fake news. But Gentzkow says a series of algorithm changes, such as featuring posts from users’ friends and family more prominently than other public content, may be working.
Gentzkow is hesitant to draw definitive conclusions from the data, and for good reason. It’s possible, he notes, that other factors beyond Facebook’s control contributed to — or even drove — the drop in the volume of misinformation on the platform. For example, once the election was over, demand for false news stories may have fallen as users lost interest in highly partisan stories. It’s possible, too, that the data missed new sources of fake news, including sites that switched to a different domain.
A surprising comparison to Twitter
One way the researchers addressed the study’s potential limitations was to track shares of the same false stories on Twitter from December 2016 to July 2018. If neither Facebook nor Twitter had done anything to counter misinformation, both platforms would be expected to see user interactions with the fake news sites change to roughly the same degree. Similarly, if a creator of false news suddenly changed its domain name to avoid detection, traffic attributed to the original site would decline on both platforms.
Surprisingly, the data showed that while both Facebook and Twitter saw user interactions with false news rise in the lead-up to the 2016 election, their engagement numbers diverged sharply afterward: Facebook’s monthly false news engagements fell by 130 million over the next 18 months, while Twitter’s continued to climb.
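A stylized way to see why this divergence is informative: if supply of and demand for false stories affected both platforms equally, the ratio of Facebook engagements to Twitter interactions for the flagged sites should stay roughly constant, so a falling ratio points to something Facebook-specific. The sketch below illustrates that reasoning; the Facebook figures are the article’s headline numbers, while the Twitter figures are hypothetical.

```python
# Stylized version of the Facebook-vs-Twitter comparison described above.
# A roughly flat FB/Twitter ratio would suggest platform-wide trends in
# supply or demand; a falling ratio suggests a Facebook-specific change.
# Facebook figures are the article's headline numbers; Twitter figures
# are invented for illustration.

fake_news = {
    # month: (facebook_engagements, twitter_interactions)
    "2016-12": (200_000_000, 5_000_000),
    "2018-07": (70_000_000, 6_500_000),
}

for month, (fb, tw) in fake_news.items():
    print(f"{month}: FB/Twitter engagement ratio = {fb / tw:.1f}")

# 2016-12: FB/Twitter engagement ratio = 40.0
# 2018-07: FB/Twitter engagement ratio = 10.8
```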
And when the researchers double-checked for similar patterns in user engagement with other news, business, or culture sites on the two platforms, they found that interactions remained relatively stable over the same period, with no dramatic swings.
“This tells us that something happened on Facebook to slow the diffusion of misinformation,” says Gentzkow. “It’s a necessary first step to a better understanding of the problem of fake news online and how to stop it.”
Next up, he says, is a follow-up study of how users engaged with news on Facebook and Twitter from mid-summer through the end of the year.