It seems that Facebook has been forced into a corner on yet another issue, this time sparked by an investigation into The Social Network’s practices around content moderation, and the protection of minors specifically.
As part of a report which aired on Channel 4 in the UK earlier this week, a journalist in Ireland went undercover, posing as a new employee of CPL Resources, a company to which Facebook outsources content for moderation.
In the training process, the reporter was told to ignore users who appeared under Facebook’s age threshold of 13, with the CPL trainer explaining that:
“We have to have an admission that the person is underage. If not, we just like pretend that we are blind and that we don’t know what underage looks like.”
This was one of various concerning elements stemming from the investigation, to which Facebook has responded with a detailed blog post explaining that CPL had failed to meet their standards.
But on the age threshold specifically, Facebook has vowed to take action.
As explained by Facebook:
“We do not allow people under 13 to have a Facebook account. If someone is reported to us as being under 13, the reviewer will look at the content on their profile (text and photos) to try to ascertain their age. If they believe the person is under 13, the account will be put on a hold and the person will not be able to use Facebook until they provide proof of their age. Since the [Channel 4] program, we have been working to update the guidance for reviewers to put a hold on any account they encounter if they have a strong indication it is underage, even if the report was for something else.”
Given the concerning nature of the claims, TechCrunch has reported that Facebook will now ramp up its action to detect and remove accounts created by those under the age of 13.
That doesn’t mean Facebook will proactively seek out suspect profiles, but they will ensure that all such accounts they become aware of are reviewed and actioned accordingly, as opposed to merely being put on hold – or, as the CPL training showed, ignored altogether.
It’s important for Facebook to take the lead here. Social platforms can expose children to all sorts of inappropriate content and unsavory characters, and any effort that can restrict such exposure should be undertaken. It is, again, somewhat concerning that it’s taken an undercover investigation to prompt the change, but whatever prompts action – social platforms need to work to protect society’s most vulnerable, which, to their credit, Facebook has committed to doing since becoming aware of these claims.
But the broader concern, as with Cambridge Analytica, is that Facebook did not proactively take action against such behavior. These practices have been going on inside Facebook’s walls, without anyone on the outside having any oversight, and only once alerted has Facebook taken the necessary action.
But how many other questionable practices are in place that we don’t know about? What other nefarious activities could be linked to Facebook which we can’t see, because they’ve not been exposed yet?
From incorrect ad metrics, to data-sharing concerns, to these latest claims, all of these issues have come to light because of external voices, not because Facebook detected them itself. They highlight significant flaws in Facebook’s internal governance, which is especially concerning when you consider that Facebook is the largest, and arguably most influential, platform in existence.
As noted, Facebook is now working to improve its detection of underage users and is putting new plans into effect – and it’s worth noting, too, that Facebook’s new Messenger Kids app was recently made available in more regions.
But each of these issues adds another element to Facebook’s ongoing privacy concerns.
Will these eventually lead to Facebook, and other social platforms by extension, facing more stringent, government-defined regulation?