Following on from its recent announcement that its image recognition technology can now detect text within photos, Facebook has announced that it's expanding its fact-checking program, using new systems to pass suspicious content within images and videos on to fact-checkers.
Facebook first implemented its third-party fact-checking process in 2016, but thus far, that initiative has focused only on articles with misleading content. The problem is that misinformation is also increasingly being shared in images and video - and with many users forming opinions based on headlines, and indeed, on images in isolation, that can be just as damaging.
Underlining this, Facebook has provided these examples of the types of misinformation being shared via image content:

No doubt you've seen other variations of the same - here's one that was shared on Reddit recently (not on Facebook, but the same kinds of images are often shared on The Social Network).

People share such content because of its emotional pull, often without checking the detail. I mean, it looks right, the image seemingly presents a plausible case - but that misrepresentation can be damaging, solidifying conspiracy theories and incorrect assumptions which can help fuel harmful movements.
This is particularly relevant on Facebook because the News Feed algorithm rewards engagement, and people are more likely to comment on a controversial post than they are to share a positive one. That inclination towards voicing strong opinions is helping to fuel societal divides - the above example, for instance, seems harmless, but falsely presenting an image of a girl and her soldier father would no doubt solidify patriotism and support for the armed forces. If a user were to see this, then subsequently see a post about, say, Colin Kaepernick refusing to stand for the national anthem, no doubt their response would be stronger.
Even in the seemingly smaller details, inaccuracy can skew broader perspectives, and while such images are easily debunked by a simple reverse image search (as highlighted in Facebook's video at the top of this post), most people won't do that. If it looks plausible, and it supports their established perspective, some users will just believe it. Because why wouldn't they? And older generations, in particular, don't have the same skepticism of what they see online that younger users do.
And that's not an ageist sentiment - research shows that users in older age brackets are simply not as aware of social media processes and practices - for example, how Facebook's algorithm works to show you more of what you like and agree with, and less of what you don't.

If you didn't realize this, it makes sense that you would think the information you're seeing on Facebook is representative of the truth, and with 68% of Americans getting at least some of their news coverage on the platform, you can see how such misunderstanding could further fuel division.
One argument is that older generations were raised to trust what they're shown by the media - what's presented in the news has, traditionally, had to be true, had to be based in fact, and that's what we've come to believe. The advent of digital platforms has changed this somewhat, but when the belief that you should trust what you read from seemingly reputable outlets is embedded within you, it can be hard to shake.
Again, that's further solidified by images and videos - it's not just the words, there are visuals to support the case. So it must be at least somewhat true. Right?

You can see how important this new initiative from Facebook is, and how much more critical it's likely to become as advanced systems enable people to create, for example, fake statements from celebrities which people will misconstrue as genuine.
As such, this is an important step for Facebook.
In working to detect such content, Facebook says it will use signals like feedback from the community, comments on each post (which may include phrases that indicate readers don’t believe the content is true) and whether the Pages sharing such content have a history of sharing things which have been rated false by fact-checkers.
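To make that a little more concrete, here's a minimal sketch of how signals like these might be combined into a single score for deciding whether a post gets routed to fact-checkers. This is purely illustrative - Facebook hasn't published how it actually weights these signals, so the phrase list, weights and threshold below are all invented assumptions.

```python
# Hypothetical illustration only - not Facebook's actual system.
# Combines the three signals described above (community feedback, disbelief
# in comments, and the Page's fact-check history) into a rough 0-1 score.

from dataclasses import dataclass

# Example phrases that might indicate readers don't believe the content is true.
DISBELIEF_PHRASES = ("fake", "not true", "false", "photoshopped", "debunked")


@dataclass
class Post:
    user_reports: int          # "feedback from the community"
    comments: list[str]        # comment text on the post
    page_false_ratings: int    # prior "false" ratings for the sharing Page


def disbelief_ratio(comments: list[str]) -> float:
    """Fraction of comments containing a disbelief phrase."""
    if not comments:
        return 0.0
    flagged = sum(
        any(phrase in c.lower() for phrase in DISBELIEF_PHRASES) for c in comments
    )
    return flagged / len(comments)


def suspicion_score(post: Post) -> float:
    """Weighted blend of the three signals (weights are invented)."""
    report_signal = min(post.user_reports / 50, 1.0)        # saturate at 50 reports
    comment_signal = disbelief_ratio(post.comments)
    history_signal = min(post.page_false_ratings / 5, 1.0)  # saturate at 5 ratings
    return 0.4 * report_signal + 0.3 * comment_signal + 0.3 * history_signal


def should_route_to_fact_checkers(post: Post, threshold: float = 0.5) -> bool:
    return suspicion_score(post) >= threshold


if __name__ == "__main__":
    post = Post(
        user_reports=30,
        comments=["this is fake", "wow, amazing", "clearly photoshopped"],
        page_false_ratings=2,
    )
    print(should_route_to_fact_checkers(post))  # True for this example
```

In a setup like this, no single signal decides the outcome - a post from a Page with a clean history would need far more user reports and sceptical comments before being flagged, which is presumably the point of blending signals rather than relying on any one of them.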
Given this, the process is still heavily reliant on user reports - so if you see something that's fake, report it, and save others from potentially falling for it.
The announcement comes as Facebook CEO Mark Zuckerberg has also outlined his thoughts on the evolution of Facebook as a source of information, and what the company is doing to protect it, particularly in the case of elections.
"One of the important lessons I’ve learned is that when you build services that connect billions of people across countries and cultures, you're going to see all of the good humanity is capable of, and you're also going to see people try to abuse those services in every way possible."
Facebook now knows it can no longer assume good intentions in its users. Measures like these will hopefully help to reverse some of the damage already done, and facilitate societal connection rather than division.