Facebook has unveiled its latest measure to help stop the spread of fake and misleading content on its platform, announcing that Pages that repeatedly share disputed content will be banned from advertising on Facebook outright.
Last December, after Facebook saw a wave of criticism for its role in spreading fake news in the lead-up to the US Presidential Election, The Social Network announced one of its first new measures to curb the spread of such content, partnering with third-party fact-checkers to flag questionable stories being shared on the platform.
"If the fact checking organizations identify a story as fake, it will get flagged as disputed and there will be a link to the corresponding article explaining why. Stories that have been disputed may also appear lower in News Feed."
Importantly, and as part of Facebook's effort to eliminate the financial incentives which fuel the fake content machine, any story which is flagged through this process is also immediately made ineligible for promotion.
That helps stop the spread of each specific story, but what Facebook has found since is that some Pages are still using Facebook ads to build their audience, which then enables them to keep distributing fake news via their Page, even if individual stories are ineligible for promotion.
This new measure takes those efforts a step further - now, if a Page is found to be repeatedly sharing disputed content, it won't be able to run any promotions, with its Facebook ad access revoked entirely.
What, exactly, 'repeatedly' means in this context, Facebook won't say. And that makes sense - if they put a specific number on it, scammers will still try to game the system.
That also means that if a Page were to somehow unwittingly share a disputed post, it would still be fine - as Facebook notes, "it's not a single instance, it's a repeated pattern of misinformation". Of course, you should always question the content you're sharing, especially on your Page/s - if something seems dubious, it's best avoided, and if you get a warning about an article's validity based on Facebook's fact-checking process, sharing it is definitely off the table, regardless of your personal take.
As noted, this is the latest in a range of updates Facebook has made to reduce the spread of fake and misleading content on their platform.
Among their other additions, they've:
- Updated the News Feed algorithm to reduce the reach of posts which people read but don't share at comparable rates
- Tested out a new process which displays related articles from alternate sources on questionable links
- Made it easier to report 'false news' directly from News Feed
More recently, Facebook announced that it's adding publisher logos to links in Trending and Search, a move more geared towards helping publishers build brand recognition on the platform, but one which may also help users distinguish between reputable and not-so-reliable updates.
Eliminating the financial motivations for people to produce misleading content is key. As various outlets reported following the US Election, because Facebook's algorithm incentivizes engagement, and amplifies content based on it, publishers can benefit from creating divisive, sensationalized content - that's what gets clicks, and that's what sparks discussion.
This was best exemplified by reports of groups of Macedonian teenagers generating significant income by creating false news outlets - if people are more inclined to share sensationalized news, and that sharing further amplifies its spread, driving more clicks to the distributing website, there's money to be made in fake news.
Eliminating fake news completely will be difficult - Facebook's almighty algorithm is, to some degree, reliant on this very engagement process - but The Social Network's combined efforts should help reduce the reach and effectiveness of such content, with each measure adding more weight to the push.