As part of its ongoing efforts to stop its platform from being used to manipulate voters, Facebook has announced that it has expanded its third-party fact-checking program in Europe, adding five new local fact-checking partners ahead of the coming EU elections.
As explained by Facebook:
"Our fact-checking partners are all accredited by the International Fact-Checking Network (IFCN), which applies standards such as non-partisanship and transparency of sources. These partners are also part of a collaborative effort led by IFCN to fact-check content related to the European Parliament elections, called FactCheckEU."
Facebook's approved fact-checking partners review politically relevant content posted on the platform, check the stated facts in each post, and rate its relative accuracy. If a fact-checker rates content as 'false', it'll appear lower in News Feeds, while Pages and websites that repeatedly share false news will see their on-platform distribution reduced. In addition, such Pages can also lose the ability to monetize and advertise, and be denied the option to be listed as a 'News' Page, lowering their credibility.
How effective such efforts have been thus far is unclear - according to various reports, misinformation is still rife on Facebook, with many of the Pages and organizations that have come under scrutiny on this front instead shifting to private groups, enabling more of the same, but essentially 'behind closed doors'. More recently, Facebook has committed to better policing misinformation within those private group spaces, but given the platform is now used by 2.38 billion people, many of whom also get at least some of their news content there, the scale of the problem is significant. It's hard to know whether Facebook - or anyone - has the capability to stop such discussion outright.
Of course, that's not necessarily what Facebook's trying to do - the aim of these new fact-checkers is to stop election interference specifically, which is a smaller goal within the broader scope of misinformation. That doesn't necessarily mean it'll be any easier, but it is worth noting the actual size of the problem when considering the potential 'solutions'.
A bigger part of the problem may actually be digital literacy, and understanding the changing news landscape. According to a recent study conducted by Princeton and New York University, older users are significantly more likely to share false reports on Facebook than users aged under 30.
From the report:
"Conservatives were more likely to share articles from fake news domains, which in 2016 were largely pro-Trump in orientation, than liberals or moderates. We also find a strong age effect, which persists after controlling for partisanship and ideology: On average, users over 65 shared nearly seven times as many articles from fake news domains as the youngest age group."
Nearly seven times. That's a significant concern, and is likely contributing more to the proliferation of fake reports than the mere creation of such content in itself.
The numbers show that older internet users are not as attuned to detecting false reports as those who've grown up in an age of web hoaxes and the like. Younger users are more likely to be skeptical of claims stated online, and to check them when they don't sound right, whereas older users, who were raised with a more confined, controlled set of news inputs, have learned to trust information as it's presented. If it looks reputable, it carries a level of implied credibility - because they can't just print anything, right? They can't just say whatever they want.
The data suggests that a big part of addressing the misinformation issue lies simply in instilling that sense of skepticism in more web users. Part of this, too, lies in confirmation bias - many people will simply trust information which aligns with their established perspective. But the more that can be done to educate users on how to check such claims, the better. And clearly, there's still a way to go on this front.