Depending on who you listen to, fake news is either the cause of all the world's ills or merely a side-effect of a self-reinforcing media system. But however you see it, fake news is bad, and Facebook, as one of the most relied-upon sources of news and information on the planet, plays a significant part in its dissemination and impact.
Zuckerberg and Co might not necessarily agree, but they do recognize the need to tackle the problem where they can, which they've been doing through new warning labels and indicators to help curb the spread of disputed content.
And now, Facebook has revealed their latest tool in the battle against fake news, with a new set of tips to help users identify false reports, which will be highlighted at the top of people's News Feeds over the next few days.
Notice the slight wording variation here? Facebook's opted to go with 'false' news, as opposed to 'fake'.
As explained by TechCrunch:
"The company tells me this is because "fake news" has taken on a life of its own, and "false news" more accurately communicates that it's talking about intentionally false content that tries to be confused with legitimate news. After all, Donald Trump has begun labeling as "fake news" any opinions or facts with which he disagrees."
When clicked, the prompt will take users to a new resource center with a list of tips to help identify fake - sorry - false reports.
As you can see, the tips include being wary of headlines with all caps and/or exclamation points, those with mimic URLs (like 'cnn-trending.com' as opposed to 'cnn.com') and items using manipulated images.
Some have noted that the advice may not be simple enough for less tech-savvy users to grasp, which could limit its effectiveness. Either way, it provides additional guidance on what fake news looks like, which should offer at least some educational benefit to Facebook users.
Facebook worked with non-profit group First Draft to create the guidelines as part of their ongoing efforts to improve news literacy and ensure the platform is free of misleading reports. The hope is that this, in combination with their other measures, will help raise awareness of such reports, reducing the spread of fake news at the source.
But the bigger problem may be that fake news just works, particularly on a viral sharing medium like Facebook, where headlines are all it takes to generate reach.
For example, this report was published by a prominent Australian newspaper recently.
As you can see, the post generated a heap of engagement - over 1,200 shares, with most of the more than 1,500 comments relating to the dangers of such spiders and people's personal experiences. But that headline isn't true. White-tail spider bites are certainly not pleasant, but they don't generally lead to amputation - a fact which is clarified in the second sentence of the story.
"Doctors are now understood to be considering the more likely cause of Terry Pareja's illness to be a bacteria from Asia."
This is further underlined in the rest of the article.
"But Mr Pareja doesn't remember being bitten by a white-tailed spider, and even if he had been, it is unlikely its venom would have caused the necrosis, an expert says."
In fact, the article overall is about the misattribution of such injuries to spider bites - but had they gone with the headline "Man loses legs to bacterial infection", there's no way it would have generated as much discussion and reach.
In this sense, it's hard to fault news organizations for using this tactic. In fairness, this report was apparently published based on limited information to begin with, which the publisher later clarified - but they could have updated the Facebook post to reflect the newly discovered facts. Why would they, though? As noted, the original headline will get far more reach, and more clicks as a result.
Given that the system incentivizes viral sharing, you can see why fake reports proliferate. Hopefully Facebook's latest measures will continue to provide disincentives for publishers to follow this path.