Facebook, Twitter, Google and Microsoft will now recognize a range of white supremacist groups and far-right militias as official terrorist organizations, as part of an expansion of their efforts to combat extremism and dangerous hate speech. It's the first time that their collective efforts have been broadened to include domestic groups.
As part of an update to the Global Internet Forum to Counter Terrorism (GIFCT) database, which catalogs content from dangerous organizations for member platforms to act on, various domestic groups will now come under increased scrutiny.
As reported by Reuters:
"Until now, the GIFCT database has focused on videos and images from terrorist groups on a United Nations list [...] Over the next few months, the group will add attacker manifestos - often shared by sympathizers after white supremacist violence - and other publications and links flagged by UN initiative Tech Against Terrorism. It will use lists from intelligence-sharing group Five Eyes, adding URLs and PDFs from more groups, including the Proud Boys, the Three Percenters and neo-Nazis."
It's worth noting that many of these organizations have already been banned or restricted by the major platforms, with Twitter, Facebook and YouTube all taking steps to limit the reach of various US-based organizations over the past two years.
That focus intensified earlier this year, in the wake of the Capitol riot, but even before that, the major platforms had recognized the potential threat posed by local groups like the Proud Boys, and how such groups can use social networks to recruit members and amplify their agenda.
But the Capitol riot was the final straw, posing an imminent threat of large-scale, politically motivated violence. Facebook, in particular, has also been accused of sparking similar uprisings in other regions, with varying degrees of accountability and response. But localized incidents will logically get more focus, and with Facebook having been identified as a key facilitator of far-right extremism over the past few years, it clearly needs to do more on this front.
So will this help to improve the situation?
It's impossible to say, of course. Various harmful, dangerous movements gain traction on social platforms precisely because of their controversial nature, which sparks more user response and discussion, and in turn amplifies the same content to even more users. That is partly a failure of social platform algorithms, which are designed to boost content that sparks engagement and keeps users in each app, but it's also a human nature problem: more shocking, more sensational, more emotionally charged stories and posts will always attract more attention.
People love the endorphin rush of Likes and comments on their posts, and the easiest way to get them is by pushing the boundaries. Bland updates won't get much attention, but taking a controversial stance can amplify your voice, and with everyone looking to be heard amid the expanding sea of social media voices, it's no surprise that more extreme viewpoints have been able to gain traction online.
Stamping out the broader organizations behind such movements should have some impact - and they certainly shouldn't be allowed to proliferate, as we've now seen where that can lead. But online extremism more broadly remains a key issue that will require further, ongoing examination to address.