Back in 2016, at the peak of the blowback over how Facebook had been used to distribute false news in the lead-up to the US Presidential Election, Facebook announced a new measure that would flag potentially misleading stories with a warning, noting that the validity of the report had been disputed by third-party fact-checkers.
Facebook partnered with a range of organizations to clarify highly shared, questionable reports, and has since continued to refine its systems as part of its broader effort to stem the flow of damaging reports before they gain full traction.
But has that process worked? Are people now less likely to share questionable posts and updates on Facebook, reducing their impact?
According to a new study, these labels are effective, at least in some capacity.
The study, conducted by the University of California, used a sample of 500 participants with views from across the political spectrum. The researchers sought to determine how participants responded to such labels, and how that affected their sharing behavior.
As per the report:
"We find that the flagging of false news may indeed have an effect on reducing false news sharing intentions by diminishing the credibility of misleading information. [...] This study shows that flagging of false news on social media platforms like Facebook may indeed help the current efforts to combat sharing of deceiving information on social media."
In the experiment, participants were shown fake reports, based on content actually shared on Facebook, along with the associated warning labels.
The researchers then measured participants' likelihood to share such content, based on varying parameters, and found that the flags did have an impact.
"...this study found that the flagging of false news had a significant effect on reducing false news sharing intentions. The study showed that respondents who saw a fabricated Facebook post with a warning label had lower intentions to share that content than those who did not see the flag."
The report's authors also offered a light critique of Facebook for scaling back its use of such flags:
"Amid the public concern over the rise of false news on Facebook during and after the 2016 U.S. presidential election, the social media platform started to flag misleading stories as disputed by fact‐checking organizations. At the end of 2017, Facebook replaced disputed flags with related articles shown below a misleading post. The company argued that related articles were more effective than disputed flags in discouraging users to share false news. However, the platform did not share detailed data about the effectiveness of the warning labels. Beyond that, a 2018 Gallup survey found that more than 60% of U.S. adults said they were less likely to share stories from sites labeled as unreliable. Indeed, an increasing number of initiatives worldwide are providing trustworthiness ratings for online news outlets and showing readers labels that may impact the credibility of news content."
Clearly, the report's authors believe that their findings support the increased use of such labels - but there is also a problem with that process: scale.
In an unrelated report, independent U.K. fact-checking organization Full Fact has published a new overview of its experience working with Facebook as one of its fact-checking partners.
Full Fact published 96 fact-checks in the first 6 months of the program, and was paid $171,800 by Facebook for its work - equating to around $1,790 per fact-check. But Full Fact notes that one of its biggest concerns with the program is scale, and being able to counter enough fake news reports to have any significant impact.
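As a rough back-of-the-envelope check, the per-fact-check figure is a simple average of the reported payment over the reported output (it ignores any fixed costs or unpaid work on Full Fact's side):

```python
# Figures as reported by Full Fact for the first 6 months of the program.
total_paid_usd = 171_800   # Facebook's payment to Full Fact
fact_checks = 96           # fact-checks published in that period

per_check = total_paid_usd / fact_checks
print(f"~${per_check:,.0f} per fact-check")  # ~$1,790 per fact-check
```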
"Facebook’s focus seems to be increasing scale by extending the Third Party Fact Checking program to more languages and countries (it is currently working with fact checkers across 42 languages worldwide). However, there is also a need to scale up the volume of content and speed of response. This, again, is an industry-wide concern relevant to other internet companies too."
With more than 2.4 billion users, that is a key issue for Facebook, and one with no simple answers - even if Facebook could police, say, 10% of the reports shared on the platform each day, the labor required would be huge, and the impact, in broad terms, would likely be minimal.
Facebook is still looking at technological solutions to the problem, and has been working on additional penalties and enforcement measures to stamp out fake news. In combination, all of these efforts are having an impact, but the issue remains significant.
A broader labeling program could be effective, if Facebook can work out a better way to implement it at larger scale.