Facebook has published its latest Community Standards Enforcement Report, which covers all the content removals and enforcement actions the platform enacted throughout the first quarter of 2021. It's also shared some new updates on the development of its enforcement systems and processes, and unveiled a new Transparency Center, designed to better clarify its various actions.
First off, on enforcement actions - Facebook has shared a complete overview of its various enforcement efforts, which details its progress on detecting and removing rule-breaking posts, including nudity, hate speech and violent content.
As you can see here, Facebook reports that the prevalence of nudity on both Facebook and Instagram was 0.03-0.04%, which is in line with its last report, while instances of violent and graphic content are down slightly from the previous quarter.
What exactly 'prevalence' represents in this context is not entirely clear. Facebook can only enforce against the content that it finds. Therefore, while it may say that there's only been marginal exposure for users, that's based on what its systems discover, not on what they miss. So the numbers could, theoretically, be higher than this - but based on what Facebook's systems have found, it is getting better at combating these key elements.
Facebook also says that it's getting better at detecting hate speech, a key concern for the platform.
Online hate speech came into sharp focus earlier this year when supporters of former US President Donald Trump attempted to lead a coup, of sorts, by storming the Capitol Building. Since then, Facebook has been working to reassess its approach to such content, which has seen it put more focus on divisive speech and groups.
But there is another side to these figures.
While, as Facebook notes, the hate speech that it's detected is on the decline, there is still a concern with this given Facebook's scale.
According to Facebook:
"In Q1, [the prevalence of hate speech] was 0.05-0.06%, or 5 to 6 views per 10,000 views."
Which is good, and down on the 8 views per 10k that Facebook reported back in February. But Facebook has 1.9 billion daily active users. Let's say that each of those people views 10 posts per day, which would be a low estimate. Even at 0.05% exposure, that would still mean that the platform is facilitating millions of views of hate speech, every single day.
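That estimate is simple enough to check with quick arithmetic. Here's a back-of-envelope calculation using the figures above (the 10-views-per-day number is, as noted, a low assumption, not a Facebook-reported figure):

```python
# Rough estimate of daily hate speech views on Facebook,
# based on reported prevalence and an assumed viewing rate.
daily_active_users = 1_900_000_000   # 1.9 billion DAU (Facebook-reported)
views_per_user_per_day = 10          # assumption - likely a low estimate
prevalence = 0.0005                  # 0.05%, i.e. 5 views per 10,000

total_daily_views = daily_active_users * views_per_user_per_day
hate_speech_views = total_daily_views * prevalence

print(f"{hate_speech_views:,.0f} hate speech views per day")
# 9,500,000 hate speech views per day
```

Even at the lower bound of Facebook's reported prevalence range, that's around 9.5 million views per day - and the figure scales linearly with the per-user viewing assumption.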
Facebook, of course, can't realistically expect to eradicate all instances of such - but the scope of the potential issue is worth noting. Even if Facebook does really well at detecting and removing these offending posts, it's still facilitating significant distribution of such - and those estimates don't include unreported content in private groups, messages or WhatsApp.
In addition to this, Facebook also notes that, in Q1, it took action on 8.8 million pieces of bullying and harassment content on Facebook, and 5.5 million instances of bullying and harassment on Instagram.
This is a key area of focus for Instagram, and it's good to see those actions increasing as Facebook's detection and enforcement systems continue to improve, and provide more protection for users.
But still, there are some lingering questions about Facebook's metrics.
For example, Facebook also notes that fake accounts still make up approximately 5% of its worldwide monthly active users.
Which is the same figure that it reported in its last update, and the one before that, and the one before that as well. In fact, Facebook has been repeating this same 5% fake profiles number for years, despite the fact that, as the accompanying graph shows, it's been taking more action on fake accounts over time.
Which suggests that maybe Facebook doesn't really know how many fake accounts are on its platforms, and it's really just taking a guess. Which then casts doubt on all its other figures also.
We don't have any way of double-checking these numbers, as it's all internal research, but it does seem a little strange that, despite its detection systems improving and removing more fake profiles, this reported number has remained static.
But again, we can only go on what Facebook shares, and based on the reported results, it is getting better on various fronts. Probably.
Also worth noting:
"During the last six months of 2020, government requests for user data increased 10% from 173,592 to 191,013. Of the total volume, the US continues to submit the largest number of requests, followed by India, Germany, France, Brazil and the UK."
As governments around the world come to realize the significance of social platforms, in terms of data gathering and dissemination, more of them are clearly also looking to utilize such for varying purposes. A trend worth watching in future reports.
In addition to this, Facebook has also launched an updated Transparency Center, which provides access to a range of guides that explain how Facebook tackles these key areas of concern.
It will provide more insight into Facebook's various policies, for people who go looking, while Facebook has also provided specific updates on its actions to address counterfeits (a key element of focus as it moves into eCommerce) and data scraping.
This is important insight to have, but as noted, there are some more complex queries around how this data is assessed, and what the full scope of these numbers actually represents. Overall, it's good to see Facebook taking more action on more of these content violations, and looking to provide more transparency into such, but it's difficult to assess the overall impacts without having absolute knowledge of the comparative data.
Which no one has - so for now, these updates are the best measure we have of Facebook's enforcement efforts.
You can check out Facebook's full Community Standards Enforcement Report Q1 2021 here.