Facebook has certainly changed its tune on broader web content regulation in recent times.
Back in 2017, in the wake of the Cambridge Analytica scandal and various other concerns, Facebook, Google, Amazon and Apple all increased their spending on lobbyists in order to combat proposals from US senators regarding increased internet regulation, preferring instead to maintain their own regulatory systems and govern their own processes as they saw fit.
Facebook CEO Mark Zuckerberg softened on this in 2018, telling CNN that: "I actually am not sure we shouldn't be regulated", in response to a query about transparency. Then last year, Zuckerberg published an opinion piece in The Washington Post, in which he called for "new rules" for the internet, and outlined a specific case for more definitive government regulation.
As per Zuckerberg:
"I believe we need a more active role for governments and regulators. By updating the rules for the Internet, we can preserve what’s best about it - the freedom for people to express themselves and for entrepreneurs to build new things - while also protecting society from broader harms."
So, Facebook went from: "we don't want any more government interference" to: "we need government oversight to implement universal safety controls, and ensure there's a level playing field for all platforms working to police web content".
It's a significant shift, as noted. But given the various legal challenges and official policies being put in place around the world to hold Facebook, and other online providers, more accountable for the content that they host - and the additional costs that such rules are adding to Facebook's bottom line - it makes sense that the company would want to establish a more clearly defined baseline. It also puts the onus on a third party to dictate what's acceptable and what's not, shifting the blame for such decisions away from its team.
Now Facebook's taken the next step on this front - this week, The Social Network has published a new whitepaper which outlines the key questions that need to be addressed in order to implement universal content regulation for web entities, and build a strategic framework for such rules moving forward.
As noted in the paper:
"Since the invention of the printing press, developments in communications technology have always been met with calls for state action. In the case of internet companies, however, it is uniquely challenging to develop regulation to ensure accountability."
The whitepaper aims to address four key questions which Facebook says underpin the broader debate:
- How can content regulation best achieve the goal of reducing harmful speech while preserving free expression? By requiring systems such as user-friendly channels for reporting content or external oversight of policies or enforcement decisions, and by requiring procedures such as periodic public reporting of enforcement data, regulation could provide governments and individuals the information they need to accurately judge social media companies’ efforts.
- How can regulations enhance the accountability of internet platforms? Regulators could consider certain requirements for companies, such as publishing their content standards, consulting with stakeholders when making significant changes to standards, or creating a channel for users to appeal a company’s content removal or non-removal decision.
- Should regulation require internet companies to meet certain performance targets? Companies could be incentivized to meet specific targets such as keeping the prevalence of violating content below some agreed threshold.
- Should regulation define which “harmful content” should be prohibited on the internet? Laws restricting speech are generally implemented by law enforcement officials and the courts. Internet content moderation is fundamentally different. Governments should create rules to address this complexity — that recognize user preferences and the variation among internet services, can be enforced at scale, and allow for flexibility across language, trends and context.
Essentially, Facebook's saying that content regulations are already in place for all other forms of media, and that similar rules should apply to web entities. That would lessen the burden on Facebook, and other platforms, to decide for themselves what is and is not acceptable, while also instituting a baseline measure across all social networks and entities.
This is important, because Facebook knows that it runs the risk of losing users to other platforms if it goes too far with its content regulations. If, for example, Facebook were to implement tougher rules on hate speech, that might prompt users to switch to another platform with more lax regulation - but if the rules were not defined by Facebook itself, and all platforms needed to abide by the same requirements, that concern is diminished.
Free speech has long been a key tenet of social platforms, with Twitter and Reddit, in particular, reiterating their commitments to free expression over time. But as we've witnessed more recently, the capacity for social platforms to become hosting grounds for more controversial content, and the movements and challenges that such content can fuel, can be particularly problematic. This is especially true at Facebook's scale - with 2.5 billion monthly active users, the capability for such messages to be amplified is significant. It makes sense for Facebook to want to establish more universal rules around what is and is not acceptable.
We're still a long way from the next level of web content regulation - and Facebook notes that it plans to publish "similar papers on elections and privacy in the coming months". But it's an important discussion to have, and it will be an important debate to keep tabs on as discussions continue around the potential of more universal web content rules, in various forms.
You can read Facebook's full "Charting a Way Forward" whitepaper here.