Facebook has released a new report which highlights the work they’re doing to remove fake accounts and offensive material from their platform.
Among the key numbers, Facebook says that, within the first three months of 2018 alone, they’ve removed:
- 837 million spam posts
- 583 million fake accounts
- 2.5 million hate speech posts
- 1.9 million terrorist propaganda posts
Those are impressive numbers, and certainly drive home Facebook’s point – they’re dealing with a lot of questionable content, and they are doing a lot to stop it (Facebook’s also hiring ‘thousands’ more moderators to deal with such issues).
But it’s worth drawing your attention to one of the other figures included here.
See that note in the illustration above at the far right? Facebook says that fake profiles on the platform now account for around 4%, at most, of their MAU count. That means that of the 2.2 billion active accounts on the platform, around 88 million are fake profiles.
That’s important to note – back in 2012, Facebook said that their fake profile count was more like 9%, which, at that time, equated to 83 million accounts. They updated this number to 11% in 2013 (138 million fake accounts), and the assumption was that the figure had likely increased in line with overall growth (based on these trends, we estimated that there were around 270 million fake Facebook profiles in circulation late last year).
According to this data, that’s incorrect – it seems that while Facebook has continued to expand, they’ve also improved their detection and removal efforts relating to fake profiles.
Indeed, according to Facebook’s new post:
“The key to fighting spam is taking down the fake accounts that spread it. In Q1, we disabled about 583 million fake accounts — most of which were disabled within minutes of registration. This is in addition to the millions of fake account attempts we prevent daily from ever registering with Facebook.”
That’s a positive step – detecting and disabling so many accounts is a massive undertaking, and Facebook should be applauded for their efforts. But then again, such measures evidently didn't stop the spread of fake news and propaganda in the lead-up to the 2016 US Presidential Election.
So what should we take from this? Have Facebook’s detection and removal efforts on fake profiles significantly improved since 2016? It’s hard to say – while Facebook does report the estimated numbers of duplicate and ‘false’ accounts on the platform in their financial reports, there seem to be some conflicts within the figures.
For example, as noted, in their 10-K form filed in 2013, Facebook estimated that up to 11% of accounts on its platform were fake – yet in their official performance report for 2014, that number sits at just 2%. The discrepancy comes down to how the figure is measured: Facebook divides fakes into “duplicate accounts” (one that a user maintains in addition to their principal account) and “false accounts” (user-misclassified accounts, e.g. a personal profile created for a business, or undesirables, i.e. spammers). Combining the two categories for 2014 gives 7%, closer to the rough estimate on overall fakes Facebook provided (between 5.5% and 11.2%).
Going on this, and combining the two figures from their subsequent performance reports, Facebook fake profile levels were at:
- 7% in 2014
- 7% in 2015
- 7% in 2016
- 14% in 2017
Quite the jump. Of course, this could come down to detection – it’s likely that in the wake of the 2016 election controversy, Facebook upped their activity in detecting fake profiles, leading to the increase, and it should be noted that the split in those 2017 numbers is 10% duplicate accounts and 3-4% fakes, aligning with the platform’s latest estimates.
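The arithmetic behind these estimates is straightforward – a stated fake-profile percentage applied to the MAU count for the corresponding period. As a minimal sketch (the 2.2 billion MAU figure and 4% ceiling come from the report discussed above; treating the percentage as exact is an assumption, since Facebook presents these as estimates):

```python
def estimated_fakes(mau: int, pct: float) -> int:
    """Estimate the number of fake accounts from a MAU count
    and a reported fake-profile percentage."""
    return round(mau * pct / 100)

# 2018: Facebook's stated ceiling of roughly 4% of 2.2 billion MAU
print(estimated_fakes(2_200_000_000, 4))  # 88000000, i.e. ~88 million
```

The same calculation with the combined 14% figure for 2017 is what makes that year's jump look so large relative to the earlier 7% estimates.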
But either way, that’s double the amount of fakes Facebook has reported previously – so while Facebook’s latest activity report is a good step, and it does highlight the scope of the issue they’re dealing with, it also raises questions about the accuracy of their reporting (for their part, Facebook does note in their reports that these are estimates).
Fake profiles are a massive problem in social media marketing. With algorithms putting more emphasis on follower counts and activity, and such figures providing a measure of social proof, it’s important that this data is accurate and represents the actual presence of a brand or business. Fake profiles, and the capacity to buy your following, skew this. As social becomes more ingrained in common business practice, we’re evolving beyond these base metrics, but such elements still lessen the validity of social platform numbers, dilute your figures and reduce the value of social insights.
The more platforms can do to remove fake profiles, the better, and certainly, Facebook’s dealing with these issues on a larger scale than anybody else. But it’s also important that Facebook remains transparent about such figures, so we can get a clearer estimate on the actual, potential impact, and judge our numbers accordingly.
To some, social media metrics will always be seen as unreliable and inaccurate – as vanity metrics which mean nothing in real-world terms. Fake profiles add to this, so whatever can be done to remove them helps boost the standing of the medium.
It’d be impossible to eradicate all the fake profiles from every platform, but acknowledging the problem – and reporting accurate numbers, warts and all – is a good start.