Facebook has released a new blog post outlining how its efforts to stop the spread of fake and misleading news on its platform are working. Ironically, given the various data-sharing scandals and metrics errors The Social Network has also reported of late, many will no doubt see this report, itself, as fake news.
According to Facebook, three independent analysis reports - conducted by researchers from Stanford University/New York University, the University of Michigan, and French newspaper Le Monde - have all come to the same conclusion: Facebook's efforts to limit the spread of fake news are working.
"Using different methodologies and definitions of false news, all three reports find that the overall volume of false news on Facebook is trending downward, and as [one of the reports] notes, “efforts by Facebook following the 2016 election to limit the diffusion of misinformation may have had a meaningful impact.”
Facebook says that it did not fund or provide data for this research - the analyses were conducted on publicly available datasets. That's an important clarification, though you'd think such data would still be more limited in scope than what Facebook itself could actually provide.
Either way, the reports show that fake news is getting less reach across The Social Network. One finds that "Facebook now has 50% less “Iffy Quotient content” than Twitter, and has returned to its early 2016 levels", while another says that "Facebook engagement with “unreliable or dubious sites” has halved in France since 2015."
That's good, right? That's a good news story - if Facebook's winning the war against fake news and misinformation, then we're getting closer to eliminating it as a societal concern exacerbated by social media. Combine that with the ongoing crackdown on questionable sources which have been sharing such content on social for political gain, and new research from Pew showing that consumers are growing increasingly wary of the reliability of news items on social media, and it paints a pretty positive picture for the future of online content sharing, and its capacity to inform society.
Maybe we're not as misinformed, and questionable political movements are not being fueled, at the rate we'd previously suspected.
Except, that's not the full story.
As noted, Facebook says that this research has been conducted on public data sets, which means that it excludes a huge amount of non-public sources.
For example, the big four messaging apps now have more users than the big four social platforms.

Over time, we've seen more social media interaction shifting away from public forums like social networks, and into messaging apps and private groups - a shift Facebook is now actively encouraging in order to tap into those usage trends.
That means that a heap of the discussion Facebook's referring to was not visible to the researchers at all - and with more of those conversations moving into private spaces, largely to avoid external, public scrutiny and debate, it's pretty much impossible to say whether Facebook's efforts have helped, at least to the level the company is suggesting.
Facebook could provide real data here - for example, internal info on user reports of fake news. If there were a chart showing fake news reports over time, with visible spikes and declines, that might give a better picture of real impact, though even then it'd be skewed by the increasing shift to private conversation.
That's not to denigrate Facebook's efforts - absolutely, the platforms themselves should be doing all they can to lessen the impact of fake news, and the research here does provide some indication that their new measures are working, at least to some degree. But the data may not be as transparent, or as indicative, as Facebook suggests. And at a time when the company's integrity is being questioned more than ever, I'm not sure this report will convince people that Facebook is actually eliminating the fake news problem.
I mean, it's not hard to find fake news on Facebook if you go looking - misleading videos are seeing millions of views, and questionable stories are still being widely shared. Of course, Facebook will never entirely remove such content from its network, but it's difficult to measure its actual impact without access to the full internal dataset and insights.
Basically, it's a positive that independent research has found that fake news stories are seemingly seeing less engagement on Facebook. But Facebook itself could provide more valuable, in-depth insight on this, if it chose to.
The fact that it hasn't may lessen the impact of such reports.