After a flood of reports following the US Presidential Election, and despite initially playing down the potential impact, Facebook CEO Mark Zuckerberg has outlined the platform's plan to tackle the issue of fake news and how it proliferates across The Social Network.
In the 500+ word post, Zuckerberg reiterates that while the percentage of misinformation on the platform is "relatively small", it is an issue that they're taking seriously, and have been working to combat for some time.
Zuckerberg also outlined their seven-step plan for combating the spread of fake news, each element of which could have wider implications.
Here's Facebook's seven-point strategy, along with some of the potential impacts.
1. Stronger detection
The first step, says Zuckerberg, is to improve their ability to classify misinformation.
"This means better technical systems to detect what people will flag as false before they do it themselves."
This measure could be problematic - you may recall that Facebook once tried out a satire tag to help users who had trouble distinguishing between what was a real news story and what was a parody - you can see it in the "Related Articles" listing in this image.
(Image via Ars Technica)
They then expanded that with a "hoax" tag which relies on user reporting and adds a notification to stories which have been flagged by a number of users.
But that doesn't necessarily help if the story isn't satire (which, Facebook says, tends not to be reported by users) and users are unable to ascertain its accuracy - if few users report it as a hoax, it doesn't get flagged. And even when a story is flagged, that small notification can easily be ignored - many of the reports about Donald Trump, for example, were dismissed by his supporters, true or not, because the Trump camp worked to frame them as an attempt by the powers that be to silence his voice.
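Mechanically, that kind of report-threshold system can be sketched as follows - this is a hypothetical illustration only; the threshold value, function names, and data shapes are assumptions, not Facebook's actual implementation:

```python
# Hypothetical sketch of a report-threshold flagging system.
# The threshold value is an illustrative assumption.
from collections import Counter

HOAX_THRESHOLD = 50  # assumed: reports needed before a story gets a label

report_counts = Counter()

def report_story(story_id: str) -> None:
    """Record one user report against a story."""
    report_counts[story_id] += 1

def is_flagged(story_id: str) -> bool:
    """A story only shows a hoax warning once enough users report it -
    which is the weakness described above: a fake story that few users
    recognize (and therefore few report) passes through unflagged."""
    return report_counts[story_id] >= HOAX_THRESHOLD
```

The sketch makes the failure mode concrete: the system's accuracy is capped by the accuracy of the crowd doing the reporting.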
In order to counter this, Facebook will likely be looking to identify regular publishers of false news and restrict the reach of their content, which is bad news for those peddlers of spam designed purely to attract clicks and generate AdSense dollars.
Such practice is more prevalent than you might think - a report released earlier this week outlined how fake news writer Paul Horner makes $10,000 per month publishing content which he knows people will click on as it supports their pre-existing beliefs or otherwise triggers their emotions.
There were also reports of Macedonian teenagers making money during the election campaign by posting false, mostly pro-Trump content in order to generate clicks.
As outlined by BuzzFeed:
"The young Macedonians who run these sites say they don't care about Donald Trump. They are responding to straightforward economic incentives: As Facebook regularly reveals in earnings reports, a US Facebook user is worth about four times a user outside the US. The fraction-of-a-penny-per-click of US display advertising - a declining market for American publishers - goes a long way in Veles. Several teens and young men who run these sites told BuzzFeed News that they learned the best way to generate traffic is to get their politics stories to spread on Facebook - and the best way to generate shares on Facebook is to publish sensationalist and often false content that caters to Trump supporters."
Given this, Facebook will likely focus their efforts on cracking down on such schemes, similar to how Google has had to continually refine their algorithms to combat black hat SEO tactics. How, exactly, they'll go about this is difficult to say, but Facebook employs some of the smartest engineers in the world - now that they have a definitive target, they'll no doubt establish some solution to reduce the reach and impact of such content.
2. Easier reporting
Zuckerberg also says Facebook will look to simplify their reporting process - similar to the one outlined for hoax stories - which will enable them to catch fake reports before they gain traction.
As noted above, there are systems in place for this purpose already, so it's difficult to say whether improving this process will be effective.
Putting more emphasis on user reporting could also have negative side effects - people could report real providers as part of a political tactic to suppress stories, for example.
User reporting is difficult to rely on, especially in the case of people not being able to distinguish between real and fake. I don't expect this measure will end up having a significant impact.
3. Third party verification
As noted by Zuckerberg:
"There are many respected fact checking organizations and, while we have reached out to some, we plan to learn from many more."
This sounds similar to a suggestion offered by TechCrunch writer Josh Constine:
Rather than be the truth police case by case, Facebook will seek outside truth scoring. From my article I had scheduled to publish tomorrow: pic.twitter.com/bmq1BnDBp9 - Josh Constine (@JoshConstine) November 19, 2016
Essentially, Facebook could add an additional layer of quality control over highly shared news content, or even providers, which could then de-emphasize their reach in the News Feed or, as noted by Constine, flag that content to boost reader awareness.
This is a positive move by Facebook, incorporating those organizations that have been actively fighting to reduce the spread of fake news, as it expands Facebook's knowledge pool on the issue and will no doubt lead to better detection and alert systems.
4. Warnings
Zuckerberg says that they're "exploring labeling stories that have been flagged as false by third parties or our community, and showing warnings when people read or share them."
Again, this sounds similar to their hoax labels - relying on audience reporting has clearly not been effective thus far, and I do have concerns with them boosting the emphasis on such reports to flag content. But the incorporation of third-party verification could be a more effective approach.
For example, if something has been flagged by users, a trusted third party could then be called upon to verify, improving accuracy and minimizing the impact of people reporting true news content.
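That escalation flow - crowd signal first, external verdict second - can be sketched in a few lines. Again, this is purely hypothetical; the threshold, the verdict values, and the label names are assumptions for illustration, not anything Facebook has announced:

```python
# Hypothetical sketch: user flags escalate a story to third-party review,
# and only a verified-false verdict triggers the reader-facing warning.
from typing import Optional

REVIEW_THRESHOLD = 10  # assumed number of user flags before escalation

def needs_review(user_flags: int) -> bool:
    """Enough crowd reports send a story to fact checkers."""
    return user_flags >= REVIEW_THRESHOLD

def label_story(user_flags: int, fact_check_verdict: Optional[str]) -> str:
    """Combine the crowd signal with an external verdict.
    verdict is 'false', 'true', or None (not yet reviewed)."""
    if not needs_review(user_flags):
        return "no_label"
    if fact_check_verdict == "false":
        return "disputed"       # warning shown to readers
    if fact_check_verdict == "true":
        return "verified"       # true story protected from mass-reporting
    return "pending_review"     # flagged, but awaiting fact checkers
```

The design point is that a coordinated mass-reporting campaign can raise a story for review, but can't, on its own, get a true story labeled as false.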
5. Related articles quality
This one is a big concern and something that definitely needs to be addressed.
As noted by various reports, one of the key issues with Facebook is the echo chamber - once Facebook has tagged you as supporting a certain side of politics, for example, the algorithm will feed you more and more content that's also been liked by people who lean that way.
For example, this:
@mathewi @natts don't know if this counts as full proof evidence but it's what I experienced today. pic.twitter.com/RKW1frajgJ - Buzz Bishop (@buzzbishop) November 14, 2016
You can see how the algorithm - whether by design or not - can reinforce political preference by showing you more content read or liked by people of the same view. In this case, the original article would have been shared by a lot of Trump supporters, which can then lead the reader down a rabbit hole of one-sided, and often fake, reports.
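The reinforcement dynamic can be reduced to a toy scoring rule - a hypothetical sketch, not Facebook's actual ranking algorithm; the data and the overlap heuristic are illustrative assumptions:

```python
# Hypothetical sketch of engagement-based feed reinforcement.
# The overlap heuristic is an illustrative assumption.

def score_article(article_likers: set, my_affinity_group: set) -> float:
    """Rank an article higher the more its audience overlaps with the
    users the reader already engages with - the mechanism by which a
    feed can drift toward one-sided, self-reinforcing content."""
    if not article_likers:
        return 0.0
    return len(article_likers & my_affinity_group) / len(article_likers)
```

Under a rule like this, every click narrows the affinity group further, so the ranking drifts toward content from one side without any explicit political tagging.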
This, as I've noted before, is a bigger concern than fake news itself - it's how the algorithm reinforces and fuels such beliefs that creates the real issue.
This is a difficult problem for Facebook to address, as they make their money by having people spend more time on site. They do this by showing users more of the type of content they want to read, so it's not in Facebook's interests to show them a post that might counter their opinion (and that they'll never click on).
In this sense, working out how "Related Articles" are selected and distributed could be a small step towards creating more balance in the News Feed overall. But that's a big "could". We'll need to wait for more information on this front.
6. Disrupting fake news economics
As noted above, another big problem is that fake news sells - and when clicks are currency, it makes perfect sense that some operators will publish false, misleading material purely in order to get more attention.
The online business model incentivizes clicks (as does Facebook's algorithm) and in this scenario, what does it matter if what you're publishing is true or not? Businesses simply need to get people clicking, and as we've seen time and time again, hoaxes and fake reports drive clicks, especially those that are within the realm of believability.
I mean, really, even before the advent of the online space, Hollywood gossip magazines took the monetization of untruths to a whole new level, quoting "anonymous sources" and "a close family friend" in stories that proclaimed break-ups, hook-ups and various other 'scandals' that were never anywhere close to reality. And as readership numbers for such content show, the truth really doesn't matter, so long as the alternative is engaging enough, so long as it provides something people want to read.
In the past week, both Facebook and Google have announced that they're going to stop noted fake news distribution sites from using their ad services, a move which could have a significant impact on the entire online ecosystem.
For example, what will that mean for providers like Taboola and Outbrain which many publishers partner with to supplement their income?
The two generate close to half a billion dollars in revenue by providing these 'Recommended' links at the bottom of partner site posts, with many of those stories coming from questionable or non-reputable sources. Could Google look to de-emphasize sites using such services as part of their crackdown?
Could Facebook engagement reduce if the platform stops publishing engaging, but false, content?
And what becomes of those publishers that are banished? We've seen reports along similar lines this week, with social network Gab gaining momentum as banned Twitter users migrate across to its platform. Could we see a new underground news network form that subverts the mainstream, but remains popular either way?
It all depends on the extent to which the platforms go about it, but the potential is there for the moves to reduce fake and misleading content to actually fuel more division, as opposed to enabling better understanding.
7. Listening
Zuckerberg's last point of focus is 'listening':
"We'll continue to work with journalists and others in the news industry to get their input, in particular, to better understand their fact checking systems and learn from them."
This is pretty much the stock-standard qualifier, but Facebook has shown in the past that they're willing to make the effort to understand all perspectives and work with such groups.
We still have a long way to go on this, and the potential impacts could be significant. It's important for social media marketers to pay attention to what's happening in the space and remain aware of what any changes in policy or News Feed distribution might mean for their campaigns and on-platform performance.
As always, we'll work to keep you updated with all the latest news and information.