The Christchurch attacks have sent a new wave of terror through the social media community, most notably because the attacker used Facebook Live to broadcast his crimes, with the footage then spreading across various networks.
The attacker's aim was clearly to achieve a level of notoriety, of fame, and the lingering concern is that the incident may inspire further, similar attacks, which casts live-streaming options, in particular, in a negative light.
By its nature, live content cannot be reviewed before broadcast - it's real-time, in the moment - which makes it impossible for platforms to maintain full control over what's transmitted.
And this is far from the first incident of concern. Back in 2015, a former employee of WDBJ7 gunned down a former colleague during a live cross, before uploading footage of the attack online; in 2016, a woman used Periscope to live-stream her own suicide; and last year, a Louisiana woman was murdered by her boyfriend while broadcasting on Facebook Live. Live-streaming has become a hugely popular social media function - but the question needs to be asked: is its value and contribution to our interactive landscape worth the potential risk of misuse, and of exposure to such concerning material?
This week, Facebook has been meeting with government officials in New Zealand to discuss its response to the Christchurch attacks. Among the measures under discussion is a new proposal which wouldn't remove live-streaming as an option entirely, but would restrict its use by certain people.
As explained by Facebook COO Sheryl Sandberg:
"We are exploring restrictions on who can go Live depending on factors such as prior Community Standard violations."
That would mean that some users who have previously been reported for concerning behavior would no longer have the ability to go live.
The proposal suggests two things. First, Facebook is able to implement restrictions on live-streaming based on specific factors, which would enable the platform to remove or roll back the option to some degree. That could be where Facebook ends up - ideally, the company wouldn't want to remove live-streaming as a function, as doing so would also reduce engagement potential, but it is notable that Facebook is exploring ways to take it away from certain users.
Whether that would have helped in the case of Christchurch is unclear - the attacker did have a long history of activity in concerning online communities, but there's no indication that he had previously been reported to Facebook for such behavior.
Second, the proposal shows that Facebook does recognize the impact of live-streaming in such incidents, and the damage it can cause. Facebook is often seen as largely blind to such harm - or at least, willing to turn a blind eye. The fact that Facebook is looking at restricting the option is a positive, but really, the only way to guarantee Facebook Live is never used for such purposes is to take it away from all users. Again, Facebook would not want to do that, but given the growing track record of incidents, it could still be where we end up.
In addition, Facebook is looking to improve its AI identification tools to help it detect and remove such content faster. Facebook has previously reported that within the first 24 hours after the Christchurch attack, it removed around 1.5 million videos of the incident, with more than 1.2 million of those blocked at upload, meaning no one saw them. That's an impressive result, but Sandberg says the company is looking to do better:
"While the original New Zealand attack video was shared Live, we know that this video spread mainly through people re-sharing it and re-editing it to make it harder for our systems to block it; we have identified more than 900 different videos showing portions of those horrifying 17 minutes. People with bad intentions will always try to get around our security measures. That’s why we must work to continually stay ahead. In the past week, we have also made changes to our review process to help us improve our response time to videos like this in the future."
Facebook has also recently announced that white supremacist content will be banned from its platforms, another measure designed to stamp out racial hatred and stop its network being used to spread such material. The company has recently taken the same stance on anti-vax content, showing that it's now looking to draw a more definitive line in the sand with regard to concerning movements.
For a long time, Facebook has taken a 'hands-off' approach to such content, preferring to be merely the host of the party, not the arbiter of what's discussed. Facebook has long held that it's not a media company and that it doesn't make editorial judgments, but the sheer size and influence of its network now leaves it with little choice but to take action.
Given the perpetrator's social media usage, the Christchurch attacks serve as the most significant example thus far of how social platforms can play a part in spreading dangerous content.
The fact that that sentence comes with a 'thus far' qualifier only underlines the level of concern here. Hopefully, the lessons are actioned before it's too late.