While debate rages over Facebook's stance on political advertising - and its decision not to remove ads which include false claims - The Social Network has this week released its latest Community Standards Enforcement Report, which outlines the content removals and other actions it's taken against posted material that breaks its established rules.
And there are some notable points - first off, on content relating to drugs and firearms, Facebook says that it has significantly improved its detection methods, enabling it to remove more content than ever.

It's also improving its capacity to detect content which violates its rules around child nudity and the exploitation of children:
"In Q3 2019, we removed about 11.6 million pieces of content, up from Q1 2019 when we removed about 5.8 million. Over the last four quarters, we proactively detected over 99% of the content we remove for violating this policy."
This, in particular, is a key area for Facebook, and it's good to see ongoing improvement in its automated detection systems - which, ideally, will also spare human moderators from having to view such content.
Facebook's also provided an update on its efforts to remove content related to suicide and self-harm:

Instagram has made this a key area of focus of late, which makes sense, given Instagram's popularity among younger, more impressionable users. Instagram announced new measures to ban images of self-harm back in February, then expanded that ban to include graphic images and memes last month.
Facebook has also detailed its efforts on addressing terrorist propaganda, saying that it's still working to better detect terror-related content.
"While the rate at which we detect and remove content associated with Al Qaeda, ISIS and their affiliates on Facebook has remained above 99%, the rate at which we proactively detect content affiliated with any terrorist organization on Facebook is 98.5% and on Instagram is 92.2%. We will continue to invest in automated techniques to combat terrorist content and iterate on our tactics because we know bad actors will continue to change theirs."
The report also shows that government requests for user data increased by 16% in the first half of 2019.
Also, while its systems for detecting fake accounts have improved, attempts to create fake accounts have also ramped up in recent months. Facebook says it's removed 5.4 billion fake accounts thus far in 2019, up from 3.8 billion in total last year. Facebook also estimates that around 5% of its active user base is still made up of fake profiles.

Facebook says these numbers reflect improving detection, as it notes in the report:
"Because we are blocking more attempts to create fake, abusive accounts before they are even created, there are fewer for us to disable and, thus, accounts actioned has declined since Q1 2019. Of the accounts we actioned, the majority were caught within minutes of registration, before they became a part of our monthly active user (MAU) population."
That means that Facebook doesn't believe that its 2.5 billion MAU count is impacted by fakes beyond the noted 5% - but that still equates to some 125m active fake profiles on the platform. With more activist groups looking to use the platform to spread their messaging, this remains a key area of concern.
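For context, that figure is simply Facebook's own ~5% fake-profile estimate applied to its stated 2.5 billion MAU count:

$$0.05 \times 2.5\,\text{billion} = 125\,\text{million}$$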
In addition to the report, Facebook has also launched a new, simplified overview of what's allowed on the platform, which includes this interesting clarification:

That clarification seems largely aligned with the debate around political ads.
This statement is also interesting:
"The answer to misinformation can’t be less information – but more context."
Various research reports have actually shown that more information, in the case of social media specifically, is not helpful, as the rise of false information enables people to reinforce their incorrect beliefs, while simultaneously blocking out dissenting opinions. It's the ultimate driver of cognitive dissonance - people now have access to more information than ever, at any time, yet we're arguably seeing more debate over established facts than in times past.
Would that suggest that context is the solution?
Regardless, this is Facebook's stance, which, as noted, seems to align with its perspective on political ads.
Is that a good thing? No doubt we'll hear a lot more on this front over the next year, as we head into the 2020 US Presidential Election.
You can read Facebook's full Community Standards Enforcement Report here.