The Christchurch terror attacks, in which 51 people were killed, and 50 injured, highlighted a new concern in digital media circles - that digital platforms, and social networks in particular, are fueling major societal divisions and underground movements, which can lead to tragic consequences.
Much of the attention on extremist content in times past has focused on radicalization by established terror groups, but the Christchurch attacks underlined the importance of also noting the rise of localized hate speech, which can lead to another form of radicalization entirely. That shift can at least partly be attributed to social media, and to sharing algorithms which show users more of what they like - and what they're likely to agree with - and less of what they won't, leading to a more skewed, imbalanced perspective.
Add to that the fact the Christchurch attacker live-streamed his actions on Facebook, and the connection between social media and such movements is increasingly clear.
So what can be done to tackle this issue?
This week, representatives from Facebook, Twitter, Microsoft, Google and Amazon attended a meeting in Paris to discuss the next steps they can take to curb the spread of terrorism and extremism online. The result of the meeting, which was hosted by French President Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern, is 'The Christchurch Call', a nine-point strategy which "sets out concrete steps the industry will take to address the abuse of technology to spread terrorist content."
As per the official website:
"The Christchurch Call is a commitment by Governments and tech companies to eliminate terrorist and violent extremist content online. It rests on the conviction that a free, open and secure internet offers extraordinary benefits to society. Respect for freedom of expression is fundamental. However, no one has the right to create and share terrorist and violent extremist content online."
The elements of the Christchurch Call, agreed to by all in attendance, are the following:
- Improved Reporting Processes for Extremist Content - The signees committed to establishing new methods within their platforms and services for users to report or flag inappropriate content. The companies will also seek to ensure that the reporting mechanisms are clear and easy to use, and provide enough categorical detail to help their teams prioritize and act promptly on relevant concerns.
- Improved Technology - The signees have committed to continue to invest in advanced technology to detect and remove extremist content, which includes the continued development of visual recognition tools. Facebook committed an additional $7.5 million to such research earlier this week.
- Dedicated Focus on the Risks of Live-streaming - As noted, the Christchurch attacker used Facebook Live to stream his actions online, highlighting a significant concern with live-streaming capacity in particular. The signees have committed to implementing "enhanced vetting measures (such as streamer ratings or scores, account activity, or validation processes) and moderation of certain live-streaming events where appropriate". Given the nature of live content, this is a difficult area to police, and more action will be required to address potential concerns.
- Transparency Reports - The signees have also committed to publishing regular updates regarding the detection and removal of terrorist or violent extremist content on their platforms and services.
In addition to these platform-specific measures, the following four points will be applied more broadly across the companies.
- Shared Technology Development - The signees have committed to working collaboratively across industry, government, educational institutions, and NGOs "to develop a shared understanding of the contexts in which terrorist and violent extremist content is published and to improve technology to detect and remove terrorist and violent extremist content more effectively and efficiently". By working in collaboration, the companies will be able to advance their tools and processes even faster, boosting safety measures.
- Crisis Protocols - The signees will also work collaboratively to create a protocol for responding to emerging or active events on an urgent basis, to ensure that relevant information can be shared quickly, and acted upon by all stakeholders with minimal delay.
- Education - The signees will also work collaboratively across industry, government, educational institutions, and NGOs "to help understand and educate the public about terrorist and violent extremist content online". The process will include educating and reminding users about how to report - or otherwise avoid contributing to - the spread of extremist content.
- Combatting Hate and Bigotry - Lastly, the signees have committed to working collaboratively to provide greater support for relevant research in order to detect and address the root causes of extremism and hate. This is a much larger goal, obviously, but with the combined resources of the tech giants, significant advances are possible, which could be a huge step.
There's no doubt that there's been a rise in online hate, and a decline in civic discourse in recent times - which, as noted, has to be at least partly attributed to the sharing algorithms used by social platforms to boost engagement, which also facilitate filter bubbles, leading to increasingly skewed perspectives. That's the element that needs to be most urgently addressed - how do social platforms change their sharing systems to reduce the focus on content which spurs division?
For example, Facebook rewards posts which see more engagement (i.e. comments, shares, Reactions) with increased reach, boosting their distribution and exposure among the platform's 2.38 billion users.
Which post is going to inspire more comments - an article with a headline that reads 'Experts Agree that Urgent Action is Required on Climate Change' or one which says 'Climate Change is a Myth Designed to Make Money for Global Corporations'?
Facebook's algorithm itself incentivizes sensationalism - publishers who play Facebook's game win out by publishing more divisive, extreme viewpoints. Add to that the fact that once you've shown what content you're interested in, Facebook will show you more of the same, in order to keep you engaged, and it's clear that Facebook's algorithm is at least partly to blame for such divides - and the algorithms implemented on other platforms follow a very similar process.
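To make the dynamic concrete, here is a minimal sketch of engagement-weighted ranking. The weights and post data are entirely hypothetical - this is not Facebook's actual formula - but it illustrates how any system that sorts content purely by comments, shares and reactions will surface the more provocative post first:

```python
# Hypothetical illustration of engagement-based feed ranking.
# The weights below are assumptions for demonstration only,
# not any platform's real scoring model.

def engagement_score(post):
    # Comments weighted most heavily, then shares, then reactions.
    return (post["comments"] * 3
            + post["shares"] * 2
            + post["reactions"] * 1)

def rank_feed(posts):
    # Higher-engagement posts are shown first, regardless of
    # accuracy or tone - the core of the filter-bubble concern.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"title": "Experts Agree that Urgent Action is Required on Climate Change",
     "comments": 40, "shares": 25, "reactions": 300},
    {"title": "Climate Change is a Myth Designed to Make Money for Global Corporations",
     "comments": 900, "shares": 400, "reactions": 1200},
]

for post in rank_feed(posts):
    print(post["title"])
```

Run as written, the divisive headline ranks first - not because it is true or useful, but simply because it provokes more interaction.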
This is a key element that needs to be addressed, yet it's not specifically mentioned in the Christchurch Call. The fact that the companies have committed to taking action is a positive step, but the very systems that have made social platforms so engaging are also part of the cause of these concerns.