While Facebook is still in the midst of an advertiser boycott over its perceived inaction on hate speech, Twitter is now facing a similar protest, with a group of celebrity users in the UK going silent on the platform for 48 hours over its perceived failure to act on recent anti-Semitic comments.
On Friday, UK rapper Wiley posted a series of tweets which referenced conspiracy theories about Jewish people. The offensive comments have since been removed, and Wiley has been suspended from the platform - but with close to 500,000 followers, much of the damage had already been done, and critics have said that Twitter failed to act on the posts in a timely manner, particularly given their highly public nature.
That's subsequently led to the 48-hour protest action, with a range of big names announcing their support for the push.
"Join us, anti-racism campaigners and other public figures on a mass walkout from Twitter for 48 hours starting tomorrow at 9:00am. #NoSafeSpaceForJewHate pic.twitter.com/SM9Y1wg8C6

We're in, are you? Retweet if you are." — Jewish News (@JewishNewsUK) July 26, 2020
The UK Government has also voiced its concern - British Home Secretary Priti Patel said that "social media companies must act much faster to remove such appalling hatred from their platforms", while UK Prime Minister Boris Johnson has echoed those remarks.
Various governments have been calling on social platforms to do more to address racism and hate in recent years. Last year, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron led a push for the tech sector to adopt the Christchurch Call, a commitment by governments and tech companies to eliminate terrorist and violent extremist content online. That came on the back of the Christchurch massacre, in which a terrorist attacked two mosques, killing and injuring over 100 people, while streaming his acts on Facebook Live.
Facebook, Twitter, Microsoft, Google and Amazon all agreed to take more action as a result, but while new regulations have been implemented to address such content, gaps clearly still exist.
Facebook has been under pressure in recent months to address hate speech, with comments posted by US President Donald Trump a particular focus. Last month, Facebook CEO Mark Zuckerberg announced that the platform would re-assess its policies in response to the rising angst, but thus far, the steps outlined by The Social Network have been limited.
It's a difficult line for social platforms to walk - on one hand, they want to facilitate as much engagement as possible, yet on the other, the public's growing reliance on social networks for news and information assigns them greater responsibility to restrict what can be shared, in order to limit harm. Who decides what's harmful is where the real challenge lies - and while President Trump has repeatedly accused social networks of conservative bias (despite also acknowledging that they helped to get him elected), the weight of evidence suggests that the platforms have worked to be as inclusive as possible, while limiting direct threats and hate speech, in most cases at least.
Twitter, it is worth noting, has taken action on Trump's tweets, where he has posted the same remarks as he has on Facebook. Twitter, in fact, has shown that it's more willing than Facebook to address hate speech in all forms, but even so, the size and reach of the platform, and the scale of moderation it's dealing with, will likely always leave it susceptible to incidents like this. And there may not be a lot it can do about it.
Indeed, various governments have also sought to impose more regulations on social platforms to ensure that they respond to such incidents in a timely manner - but 'timely' is a relative concept in this context, which makes it difficult to impose legal penalties.
For example, you may think that Twitter should be able to remove any such comments within, say, 30 minutes, but that depends on the reporting systems in place, the staff available to act on reports (especially during COVID-19) and the approval processes involved. Twitter doesn't have the capacity to monitor every tweet manually, and if the wording in an offending tweet isn't picked up by its automated detection systems, it's largely reliant on user reports to take action.
Basically, there are steps in the process that complicate enforcement, and while Twitter may act on the majority of violations quickly, it only takes one comment to slip through the cracks for an incident like this to occur.
Which leads to the current protest.
Again, in Twitter's defense, it has improved its automated detection systems, and it is taking more action. The question then becomes one about social media more broadly - even with the best systems in place, some offensive remarks will always slip through. So what then? What can Twitter, or indeed any social platform, do to address such concerns?
The answer appears to point to tougher action on hate speech, including more censorship, but that also won't please everyone.
It's a difficult problem, but it is good to see the issues being pushed to the forefront through these new protests.