After another major incident of racist abuse online, targeted at members of the English football team, Facebook has provided a new overview of how it's working to address such attacks, and stop people from experiencing race-based abuse across its platforms.
Following England's loss in the European Championship final on Sunday, social media trolls posted hundreds of racist remarks attacking the three Black players on England's team, many of which used emojis as a vehicle for abuse. Instagram's systems didn't initially flag these as a concern, but on review, Facebook has updated its systems, and teams, to ensure that it addresses similar incidents moving forward, while also looking to improve its processes overall.
As explained by Facebook:
"We are appalled at the abhorrent racist abuse some members of England’s football team experienced after the Euro 2020 final last weekend. This is an incredibly serious issue that we’ve been working on for years, which has included working directly with football organizations and law enforcement."
Indeed, Facebook has already faced serious challenges on this front this year, again tied to UK football fans.
Back in February, Instagram became the vehicle for various incidents of race-based attacks against players from Manchester United, Chelsea and Liverpool, among others, who were targeted via Instagram Direct. Manchester United, in a joint statement with Everton, Liverpool and Manchester City, condemned the incidents, and called on Instagram's parent company Facebook to do more to protect users from such abuse. That led to Instagram implementing tougher penalties for those found to be sending abuse via DM, and a new option for personal accounts to switch off DMs from people that they don’t follow.
Clearly, however, there are still more issues to address on this front. And while racism is a societal issue, not confined to social platforms as such, Facebook needs to ensure that it doesn't amplify such abuse, in order to do its part to reduce its impacts.
Facebook clearly states that:
"We don’t allow attacks on people based on their protected characteristics, which include race, religion, nationality or sexual orientation. If we’re made aware of any words or emojis being used to attack people based on their race, we remove them because they violate our policies. We publish our hate speech policies in our Community Standards and Instagram’s Community Guidelines."
The problem in this latest incident, as noted, was that the use of emojis as a racist marker wasn't initially identified, which Instagram chief Adam Mosseri has acknowledged.
We have technology to try and prioritize reports, and we were mistakenly marking some of these as benign comments, which they are absolutely not. The issue has since been addressed, and the publication has all of this context.— Adam Mosseri (@mosseri) July 14, 2021
Facebook explains that it identifies hate speech by using a combination of artificial intelligence and human review.
"AI helps us prioritize reports for our reviewers and take automated actions where appropriate. Between January and March of 2021, we removed more than 25 million pieces of hate speech content from Facebook - nearly 97% before someone reported it to us. And on Instagram, we took action on 6.3 million pieces of content, 93% before someone reported it to us."
That's a strong result rate, but as this latest incident shows, there are still times when these systems won't be able to catch everything. Which, as Mosseri further notes, is also a challenge of scale.
Because of our scale. We handle millions of reports a day. If we make a mistake on one percent of them, that’s tens of thousands of mistakes. We need to, and will continue to do, better, but there will always be some mistakes.— Adam Mosseri (@mosseri) July 15, 2021
Realistically, there's no way to stamp out such abuse entirely, but Facebook is working to update its systems in real time as new cases are detected, in order to combat abuse faster and limit exposure.
"People have been rightly frustrated when they’ve reported posts and have been told incorrectly that hateful comments with certain emojis don’t break our rules. That’s because our AI didn’t understand the context - and that’s a mistake. We’ve moved quickly to correct this through recent improvements to our technology. We will continue to work on this so we can remove violating emojis from our platform quicker."
It's a very difficult balance, and no one has all the answers. Various regulatory groups, for example, have proposed tougher penalties for social platforms that fail to address such content in a timely manner. The problem is that what's considered 'timely' will differ on an almost case-by-case basis.
Should Instagram have picked up the misuse of emojis sooner? Yes - but its automated systems didn't identify this as a problem, because it wasn't a problem until it was, and no one could necessarily have anticipated it until it was too late.
One possible way to address this could be to impose tougher ID requirements on individuals when they sign up for social media accounts, which would make it easier to identify offenders. If there were real-world legal recourse for your online actions, that could be enough of a deterrent to make users reconsider those actions.
But Facebook says this also isn't necessarily the way forward:
"There are risks with ID verification, primarily the exclusion of groups - particularly disadvantaged groups - who don’t have easy access to official forms of identification. The most recent modeling from the Electoral Commission estimates that 11 million people in the UK do not have a driving license or passport, and that this group were more likely to be from disadvantaged backgrounds."
Which is a valid point, and one that further underlines the scope of the challenge facing Facebook, and all online platforms, in this respect.
It's a tough but important challenge, and one that Facebook is taking very seriously. Ideally, a solution can be found that addresses these key issues, but realistically, there's always going to be a level of misuse, no matter what measures are implemented.
And again, at Facebook's scale, even a small margin for error can mean large impact.
Hopefully, as automated systems evolve, more tools can be enacted to maintain a safe environment for all.