To say that Facebook has been under fire over the spread of fake news would be an understatement. Even US President Barack Obama has weighed in, adding to the huge amount of pressure being put on The Social Network to come up with a solution - even if fake news itself is not the key problem (echo chambers are, but that's another story).
Despite initially playing down the influence of fake news, Facebook has been forced to act. Today, The Social Network has revealed four key measures it's implementing to limit the reach of misinformation and fake content - and there could be some side effects.
Here's what's been announced.
1. Easier reporting
Users have been able to flag stories on Facebook as false since January last year, when Facebook first released an option to tag posts as 'false news' (which shows that they've been aware of the problem for some time).
The problem is that with all the various reporting options that have come up since, it's become harder to even find the right box to tick when reporting hoax content. Facebook's fixing this by making "it's a fake news story" a more prominent reporting option, which should lead to more reports of such content.
Obviously this is not a solution, particularly given the option has existed for some time, but making it easier to report fake content, and making sure more users are aware they can do so, is a positive step. A small one, granted, but it adds to the wider process.
2. Flagging stories as disputed
This one could be a lot more significant - Facebook's also introducing a new third-party fact-checking process to flag disputed news content.
As explained by News Feed VP Adam Mosseri:
"We've started a program to work with third-party fact checking organizations that are signatories of Poynter's International Fact Checking Code of Principles. We'll use the reports from our community, along with other signals, to send stories to these organizations. If the fact checking organizations identify a story as fake, it will get flagged as disputed and there will be a link to the corresponding article explaining why. Stories that have been disputed may also appear lower in News Feed."
But more than that, if a user goes to share one of these stories, they'll now get a prompt which highlights that the accuracy of the report has been questioned.
This could be a significant step - no one wants to share content that makes them look stupid, which fake news obviously does. The process will no doubt raise the ire of some, who'll want to dispute the dispute itself, but the majority of shares of false news likely come from people who have no other reference point, no other way of knowing whether the item is right or wrong.
This process - if effective - could greatly improve that, giving users more context and deterring them from sharing blatantly fake items.
"Once a story is flagged, it can't be made into an ad and promoted, either."
It's impossible to know how effective it will actually be until it's put into practice, but this seems like a great move, and one which could have a significant impact.
3. Informed sharing
This measure could have side effects, so it's important Facebook marketers take note.
As explained by Mosseri:
"We've found that if reading an article makes people significantly less likely to share it, that may be a sign that a story has misled people in some way. We're going to test incorporating this signal into ranking, specifically for articles that are outliers, where people who read the article are significantly less likely to share it."
So if your posts aren't generating shares, that could see their reach reduced. Technically, this already happens within the algorithm process - content is distributed based on engagement, which includes Likes and shares. But it'll be worth keeping tabs on this to see if there are any additional impacts - if your posts are reaching a lot of people but generating no shares, this update could make it even harder to gain traction.
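To make the idea concrete, here's a minimal sketch of how a signal like this could work in principle - this is purely a hypothetical illustration (the article names, numbers, and z-score threshold are all invented), not Facebook's actual ranking logic, which isn't public. It flags articles whose share-after-read rate is an unusually low outlier relative to the rest:

```python
# Hypothetical sketch of an "informed sharing" signal: articles that are
# read often but rarely shared get flagged as outliers. Illustration only -
# Facebook's real ranking implementation is not public.
from statistics import mean, stdev

def share_after_read_rate(reads, shares):
    """Fraction of readers who went on to share the article."""
    return shares / reads if reads else 0.0

def flag_outliers(articles, z_threshold=-1.0):
    """Return IDs of articles whose share-after-read rate is an
    unusually low outlier (z-score below the threshold)."""
    rates = {a["id"]: share_after_read_rate(a["reads"], a["shares"])
             for a in articles}
    mu = mean(rates.values())
    sigma = stdev(rates.values())
    if sigma == 0:
        return []
    return [aid for aid, r in rates.items() if (r - mu) / sigma < z_threshold]

articles = [
    {"id": "story-a", "reads": 1000, "shares": 120},
    {"id": "story-b", "reads": 1000, "shares": 110},
    {"id": "story-c", "reads": 1000, "shares": 130},
    {"id": "story-d", "reads": 1000, "shares": 5},   # read a lot, barely shared
]
print(flag_outliers(articles))  # → ['story-d']
```

In Mosseri's framing, a flagged outlier like "story-d" above - widely read but rarely shared - would then be ranked lower in News Feed, which is why low-share Pages may want to watch their reach closely.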
4. Disrupting financial incentives for spammers
The last of Facebook's new anti-hoax measures takes aim at the originators of such content.
Facebook says they're working to eliminate 'spoof domains' - sites and Pages which mimic well-known news organizations in an effort to dupe readers. This is largely how teens in Macedonia were able to make money from fake news during the US Presidential Election, sharing content from domains that sounded somewhat official.
In addition, Facebook's also "analyzing publisher sites to detect where policy enforcement actions might be necessary". In other words, Facebook's also going to take manual action against Pages which are peddling spam links purely for clicks.
These are positive measures for Facebook, and while they won't completely eliminate fake news - and no doubt plenty of others will have ideas and suggestions on how they can improve it - if they work as intended, these measures could see a significant reduction in the spread of fake content, while also enabling people to continue sharing as normal, without Facebook over-stepping the bounds of intervention or censorship.
As noted, there are other concerns to address - a recent paper published on pbs.org highlighted that "the algorithmic bias toward engagement over truth reinforces our social and cognitive biases", which is a greater problem in the scheme of things: we share content which supports our own view and block out dissenting opinion, blinding us to contrary facts. That leads to much more siloed news consumption, but correcting it would run counter to the way in which social networks generate engagement, and would move more into the territory of censorship, essentially making Facebook a media company - a tag it's actively trying to avoid.
But these updates show that Facebook is responding to the call for action and taking steps to eliminate fake and misleading news from the conversation, which can only be a positive for wider political discourse and debate.