Facebook has published its latest Community Standards Enforcement Report, which covers all the content removals and enforcement actions that the platform enacted throughout the second quarter of 2021.
The report includes some interesting notes about key trends, as well as advancements in Facebook's detection systems. First off, addressing the key need of the moment, Facebook says that it removed more than 20 million pieces of content from across Facebook and Instagram for violating its policies on COVID-19-related misinformation.
"We have removed over 3,000 accounts, pages, and groups for repeatedly violating our rules against spreading COVID-19 and vaccine misinformation. We also displayed warnings on more than 190 million pieces of COVID-related content on Facebook that our third-party fact-checking partners rated as false, partly false, altered or missing context."
As the vaccine roll-out continues, countering this misinformation is key to maximizing take-up, and given its massive reach, this is an important area for Facebook, specifically, to focus on. Of course, Facebook has also been widely criticized for providing a platform for health misinformation in the first place, but the numbers here indicate that the company is working to counter these elements, which, ideally, will limit their impact.
In terms of other key trends, Facebook says that its efforts to tackle hate speech continue to yield positive results:
"Prevalence of hate speech on Facebook continued to decrease for the third quarter in a row. In Q2, it was 0.05%, or 5 views per 10,000 views, down from 0.05-0.06%, or 5 to 6 views per 10,000 views in Q1."

At Facebook's scale, five views per 10,000 would still mean that a significant amount of hate speech is making it through to users, but again, Facebook's systems are improving, which should limit that impact moving forward.
Though, at the same time, it is worth noting that Instagram has seen a surge in hate speech removals.

More enforcement action is a positive sign, but it may also indicate that this type of content is increasingly shifting to Instagram, a trend also reflected in the gradual growth in detections of dangerous organizations in the app.

As you can see, Instagram is actioning more of these groups over time, which, again, is good from an enforcement standpoint, but it may also point to shifting trends in platform usage, which could be a broader concern for IG going forward.
Another concerning development is the sharp rise in suicide and self-injury content actioned.

Facebook says that this spike is largely due to a technical fix, which enabled its moderators to 'go back and catch violating content we missed'. But it's still a concerning trend to watch, and it will be worth seeing whether these numbers continue to rise in future reports.
Also, fake accounts on Facebook are still at 5% of overall profile numbers - the same rate that Facebook has been reporting for years.
Facebook says that it took action against 1.7 billion fake profiles in the period.

Basically, there are still a lot of fake accounts on Facebook, despite its advancing efforts to detect them. In fact, as noted, the relative rate of fake accounts never seems to change, even as its tools evolve, so there will seemingly always be many millions of fake profiles on Facebook at any given time.
That seems like something that could be addressed, but evidently that's not the case.
Still, overall, Facebook says that its automated detection processes are improving:
"Our proactive rate (the percentage of content we took action on that we found before a user reported it to us) is over 90% for 12 out of 13 policy areas on Facebook and nine out of 11 on Instagram."
So, for violating, offensive content, Facebook is limiting exposure by catching most of it before users report it, even if it hasn't improved its proactive detection rates on every front. It's difficult to tell exactly what that means in a practical sense, because, for the most part, Facebook's raw enforcement numbers have stayed largely stable across most areas, even as its usage figures steadily increase.
You would expect, then, that total enforcement stats would rise too, but outside of the elements highlighted above, most have stayed steady, even with the improvements in detection.
Does that mean Facebook is getting better at detecting violations, or worse - or staying the same? It's a little difficult to say, but Facebook is taking action on a lot of content, and detecting many violations before anyone sees them.
Which seems good, but there's nothing significant in these figures to clearly indicate major improvement this quarter.
You can read Facebook's full Community Standards Enforcement Report for Q2 2021 here.