If you want to convince people not to trust an inaccurate political post on Facebook, labeling it as satire can help, a new study finds.
Researchers at The Ohio State University found that flagging inaccurate political posts as disputed by fact-checkers or by fellow Facebook users did little to reduce belief in the falsehoods or to stop people from sharing them.
However, labeling inaccurate posts as being humor, parody or a hoax did reduce Facebook users’ belief in the falsehoods and resulted in significantly less willingness to share the posts.
“We thought that fact-checking flags might work pretty well on Facebook, but that’s not what we found,” said R. Kelly Garrett, lead author of the study and professor of communication at Ohio State.
“It only helped to have flags for satirical posts. This raises some really interesting questions about why people are moved to disbelieve a claim when you tell them it is a hoax or satire, but not when journalists or even their peers say there is something wrong with the story.”
Garrett conducted the study with Shannon Poulsen, a doctoral student in communication at Ohio State. The results are published online in the Journal of Computer-Mediated Communication.
The researchers conducted two separate studies.
In the first, involving 218 adults from across the country in early 2018, participants completed a brief questionnaire that included measures of their positions, knowledge and beliefs about several political topics, as well as their political ideology, party affiliation and demographics.
The questionnaire also asked how much they believed two inaccurate claims that had been prevalent on social media: one more likely to be believed by Republicans (“millions of illegal votes were cast in the 2016 presidential election”) and one more likely to be believed by Democrats (“Russia tampered with vote tallies in order to get Donald Trump elected president”).
The researchers also engaged in a subtle deception.
Participants gave consent and then were asked to sign into their Facebook account, granting researchers access to their user profile. But the researchers did not record their login information or access their accounts.
About two weeks later, the participants were contacted again. The researchers told them that they would be presenting them with “real Facebook content” that appeared on their Facebook feeds. But, in reality, the research team created all the posts.
Participants were shown two Facebook posts with the inaccurate statements that they were asked about two weeks earlier (illegal votes cast in the 2016 election and Russian vote tampering).
Each participant saw one of three flags at the bottom of each post: one saying the post was disputed by fact-checkers, one saying it was disputed by other Facebook users like them, or one saying it came from a humor, parody or hoax site. A fourth group of participants saw no flag on their posts.
Results showed that people who received flags saying fact-checkers or their peers disputed the post believed the falsehoods just as much as they had earlier, and they remained just as willing to say they would share the false information.
But the people who received the flag saying the post was meant as humor were more likely to change their minds. They were less likely to believe the falsehood and were less likely than others to say they would share it.
“There’s little reason not to label satire,” co-author Poulsen said.
“The best jokes are still funny even when you know they’re jokes. More importantly, labeling can help people who might otherwise be misled.”
Garrett added that the finding isn’t trivial, given that there is a lot of satire being shared on social media.
The researchers conducted a second study with a larger (610 people) and more demographically diverse sample. Its design was similar, though not identical, and it produced the same general results, Garrett said.
Although it wasn’t the focus of the study, Garrett noted another important result: there was no “backfire” effect from the Facebook flags, a phenomenon in which people become more likely to believe false information after being told it is false. Some early research in the area had suggested such an effect might exist.
“Flagging falsehoods did not always result in more accurate beliefs, but it never resulted in less accuracy among our participants,” he said.
While most fact-checking flags did not work in this study, Garrett said he isn’t ready to conclude they can’t be effective. But they may need to be tweaked.
Garrett said he suspects that one reason the satire flags worked is that they told people why a certain statement was false – because it was meant as humor.
“When you just say a claim is false, you haven’t really given an explanation for why,” he said.
“People respond to stories. They want to know why something is being called inaccurate.”
So saying, “This content has been found to be foreign propaganda” might be more effective than just saying, “This content has been found to be false.”
“If you give people a clear explanation as to why you’re calling a statement ‘false,’ that may be more compelling to people,” Garrett said.
The study was supported in part by grants from the National Science Foundation.