After the recent Christchurch terror attacks, in which 50 people were gunned down in mosques, attention quickly turned to Facebook and the role the platform has played in spreading messages of race hate and sowing community division.
A key focal point has been that the shooter broadcast his acts on Facebook Live. According to Facebook, the original broadcast of the attacks was viewed around 4,000 times before being removed, while Facebook has subsequently removed around 1.5 million videos of the incident, with more than 1.2 million of those blocked at upload, thereby preventing them from being seen on the network.
But authorities believe Facebook can, and should, do more - in recent days, Facebook has been meeting with government officials in New Zealand and Australia to discuss what it can do to stop people using Facebook Live for such purposes, and what it can do, more broadly, to minimize race hate.
That's led to Facebook's latest announcement - a new ban on "praise, support and representation of white nationalism and separatism on Facebook and Instagram", which Facebook will begin enforcing from next week.
As per Facebook:
"Our policies have long prohibited hateful treatment of people based on characteristics such as race, ethnicity or religion – and that has always included white supremacy. We didn’t originally apply the same rationale to expressions of white nationalism and separatism because we were thinking about broader concepts of nationalism and separatism – things like American pride and Basque separatism, which are an important part of people’s identity. But over the past three months, our conversations with members of civil society, and academics who are experts in race relations around the world, have confirmed that white nationalism and separatism cannot be meaningfully separated from white supremacy and organized hate groups."
The announcement will no doubt be criticized by far-right groups, many of which now use Facebook as their primary tool for building their support base.
Indeed, there's no shortage of far-right content on the platform - a simple search can uncover a range of borderline memes and posts which are now likely to come under increased scrutiny.

The problem is, this type of content elicits an emotional response, and as various reports have shown, sparking an emotional reaction is key to driving engagement. If you can prompt more people to tap that reaction button, Facebook's algorithm will reward your content with increased reach and exposure. This is the same principle that's driven news outlets to become more sensationalist in their coverage - impartiality is not as compelling, in an emotion-driving sense, as putting forward a definitive viewpoint, which then sparks a reaction from the reader.
That's part of the broader concern about Facebook - through its News Feed algorithm, Facebook has essentially changed the way news is covered, funneling outlets towards different sides of the spectrum in order to maximize audience response, and thus reach. It's the algorithms themselves that have driven this, at least to some degree, so it's interesting now to see Facebook taking a more proactive stance against that same content, drawing a bigger line in the sand on what's acceptable and what isn't.
Many will also no doubt claim this to be editorial interference - a tag that Facebook has long avoided - but in the wake of Christchurch, few can dispute Facebook's potential for sparking radicalization. The challenge now is working out how to stop it.
In addition to this, Facebook has also announced a new initiative to connect people who go looking for white supremacist content on its platform with helpful resources, instead of that material.
"As part of today’s announcement, we’ll also start connecting people who search for terms associated with white supremacy to resources focused on helping people leave behind hate groups. People searching for these terms will be directed to Life After Hate, an organization founded by former violent extremists that provides crisis intervention, education, support groups and outreach."

It's hard, at this stage, to predict just how beneficial these tools will be - many private Facebook groups related to white supremacist and other controversial movements have already been established. Will those come under scrutiny? Will Facebook seek to remove content from these enclosed spaces, where members have chosen to take part in those discussions?
This also doesn't address the larger concern that government officials have raised - that Facebook enabled a murderer to broadcast his acts in real-time, unfiltered and viewable by anyone online at the time. Facebook has resisted making any change to its live-streaming product - which, by its nature, cannot be reviewed ahead of airing - but it has made the above concessions on white nationalist content.
Is that a real, substantive move, or an effort to appease officials with a level of compromise in the wake of the Christchurch attacks?
Given this, we'll have to wait and see how Facebook enforces these new rules, and what impacts they could have. This is definitely an area that Facebook, and all social networks, should address, but to what level that's possible within their current frameworks - and in line with their sharing algorithms - is difficult to say.
Facebook could take more substantive action - it could remove the algorithm altogether, meaning that whoever and whatever you follow on the platform right now is the content you would see, as opposed to giving increased distribution to highly engaged-with posts. But Facebook has derived such significant engagement benefit from the algorithm that it wouldn't want to do that, while doing so would also re-open the platform to old methods of gaming the system, like clickbait.
But then again, those schemes are likely less damaging than the divisive content that gets the most engagement now - it would be better for Facebook to build systems to reduce the distribution of annoying listicles than of extremist views.
Essentially, this is the call that Facebook needs to make - can it balance its business interests with the broader societal good, and how effective would each approach be in reducing such harms?
This new announcement will definitely help, but I suspect we haven't heard the last of this ongoing debate.