Facebook has repeatedly stated that it’s not a media company, and has sought to maintain freedom of expression for users on the platform by taking a “hands off” approach to what’s posted, where possible. And while The Social Network has always maintained community guidelines, outlining limitations on what will be tolerated, as the platform has continued to grow, more and more content has fallen into the gray area in between.
Some content may not necessarily be against community guidelines, but it can be hugely damaging for the people involved. In such cases, is Facebook obligated to step in?
This is the situation in Myanmar, where ethnic tensions have led to horrific incidents of violence, with Muslim Rohingya residents subject to what the UN has described as ‘genocidal intent’ by the Myanmar military. People have been calling on Facebook, in particular, to take action on related content for years, with the Myanmar military using the platform as its main channel of public communication. And while Facebook has sought to ramp up local language content reviewers, and take down the accounts of individuals and groups that generate hate speech, its perceived inaction has been widely criticized.
But this week, Facebook has taken action – in a blog post, the company says that it has removed 18 Facebook accounts, one Instagram account and 52 Facebook Pages, collectively followed by almost 12 million people, which have been used to share misinformation related to the Rohingya genocide.
“Specifically, we are banning 20 individuals and organizations from Facebook in Myanmar — including Senior General Min Aung Hlaing, commander-in-chief of the armed forces, and the military’s Myawady television network. International experts, most recently in a report by the UN Human Rights Council-authorized Fact-Finding Mission on Myanmar, have found evidence that many of these individuals and organizations committed or enabled serious human rights abuses in the country. And we want to prevent them from using our service to further inflame ethnic and religious tensions.”
As alluded to by Facebook, the move comes after United Nations investigators detailed their findings that the Myanmar military carried out mass killings and gang rapes of Muslim Rohingya with “genocidal intent”.
And while Facebook is now taking action, the move once again underlines the negative impacts the platform can have. While western audiences are less exposed to the incidents in Myanmar, as noted, Facebook has been under pressure for years to take action on such content, and it has failed to act. That hesitancy has seemingly been due to the potential crossover into broader content censorship – even in incidents where direct links between the platform and such actions have been present, Facebook has not been willing to push ahead, at least not without related authority from a body like the UN.
“While we were too slow to act, we’re now making progress – with better technology to identify hate speech, improved reporting tools, and more people to review content.”
It’s undoubtedly a positive to see Facebook taking action on this front, but it once again raises questions about the significance of the role the platform plays in modern communications, and the responsibilities it has to monitor, and indeed censor, hate speech – and where it draws the line on such content. Facebook is putting more effort into this front, and it is making progress, but the incidents in Myanmar, and the role The Social Network has played in them, raise even more questions about whether the platform should come under alternate, and broader, regulation.
Can a private company be trusted when it’s in such a position of power? It’s a new reality that we have to examine and analyze in order to understand how the platform can be utilized to best effect.