Facebook is taking a new approach to keeping users informed within the app, sending specific notifications to people who've engaged with posts that were later identified as containing misinformation, with an initial focus on COVID-19 updates.
As reported by Fast Company:
"[Facebook] will now send notifications to anyone who has liked, commented, or shared a piece of misinformation that’s been taken down for violating the platform’s terms of service. It will then connect users with trustworthy sources in effort to correct the record."
As you can see in these example screenshots, the new notifications will include more specific wording to help users understand the purpose of the notification:
“We removed a post you liked that had false, potentially harmful information about COVID-19.”
The notifications also include details on the removal, and an explanation of why the content was removed.
The new approach comes after recent research found that Facebook's current process for labeling misinformation isn't always working as intended.
As reported by Platformer:
"For the interview study, eight of 15 participants said that platforms have a responsibility to label misinformation and were glad to see it. The remaining seven took a hostile attitude towards labeling, viewing the practice as 'judgemental, paternalistic and against the platform ethos.'"
Indeed, one study participant noted that:
"I thought the manipulated media label meant to tell me the media is manipulating me."
Questions around censorship and manipulation by the media in general have been fueled by US President Donald Trump, who has repeatedly labeled anything critical of his administration as 'fake news', and branded reporters as 'lamestream media' peddling lies for their own benefit. That narrative has prompted more people to question all news stories they see, which is partly why Facebook's labels are seen by many as another exercise in control, rather than as informational updates.
The new labels likely won't counter this, but they will provide more specific context to each user, which could get more people clicking through and re-thinking their sharing habits. Previous research has shown that flagging false news does have an impact on subsequent distribution.
But then again, it could also have unwanted side-effects.
Another study released earlier this year found that when only some news posts are labeled as fact-checked and disputed, users tend to believe that any other stories that are not marked are accurate, even if they're completely false.
As per the report:
"In Study 1, we find that while warnings do lead to a modest reduction in perceived accuracy of false headlines relative to a control condition (particularly for politically concordant headlines), we also observed the hypothesized implied truth effect: the presence of warnings caused untagged headlines to be seen as more accurate than in the control. In Study 2, we find the same effects in the context of decisions about which headlines to consider sharing on social media."
So the labels do reduce shares of reports that have been fact-checked and identified as untrue, but they can also, unwittingly, increase the credibility of other false reports that haven't been checked. Now, Facebook will be looking to provide more specific, targeted notifications along the same lines, which could, in theory, create a similar false sense of security, in that users may assume Facebook will notify them of all fake reports, and be less inclined to research for themselves.
It's a difficult balance, but with vaccines now being rolled out around the world, Facebook is doing what it can to quell misinformation about the virus, and ensure optimal take-up of mitigation efforts.
Certainly, more targeted, specific prompts could be valuable, but wide-scale research will be needed to demonstrate the full impact.