Given Facebook's recent record with privacy and permissions loopholes, Messenger Kids, which The Social Network launched in late 2017, has always seemed to occupy a somewhat precarious position.
Now the child-friendly variant of the company's flagship messaging tool has suffered its first major blow. As reported by The Verge, over the past week or so, Facebook has been informing parents of children who use Messenger Kids of a flaw in its permissions process, which could have exposed their children to content from unapproved users.
As explained by The Verge:
"The bug arose from the way Messenger Kids’ unique permissions were applied in group chats. In a standard one-on-one chat, children can only initiate conversations with users who have been approved by the child’s parents. But those permissions became more complex when applied to a group chat because of the multiple users involved. Whoever launched the group could invite any user who was authorized to chat with them, even if that user wasn’t authorized to chat with the other children in the group. As a result, thousands of children were left in chats with unauthorized users, a violation of the core promise of Messenger Kids."
That undermines the central premise of Messenger Kids, which is designed to give parents full control over who their children interact with - and the reassurance that comes with that.
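To illustrate the logic gap The Verge describes, here's a minimal, hypothetical sketch (with invented names, not Facebook's actual implementation) of the difference between checking only the group creator's approved contacts and checking every member's:

```python
# Illustrative sketch only - hypothetical data and function names.
# Models why checking only the group creator's approved-contact list
# can let unapproved users reach other children in the same group chat.

approved_contacts = {
    "child_a": {"friend_1", "friend_2"},   # contacts approved by child_a's parents
    "child_b": {"friend_2"},               # child_b's parents never approved friend_1
}

def can_invite_flawed(creator, invitee):
    # The reported flaw: only the group creator's permissions are checked.
    return invitee in approved_contacts[creator]

def can_invite_fixed(members, invitee):
    # The expected behaviour: the invitee must be approved for every child in the chat.
    return all(invitee in approved_contacts[m] for m in members)

group = ["child_a", "child_b"]
print(can_invite_flawed("child_a", "friend_1"))   # True - friend_1 gets in, despite child_b's parents never approving them
print(can_invite_fixed(group, "friend_1"))        # False - blocked once every member's approvals are checked
```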

Facebook has confirmed the issue, explaining that:
"We recently notified some parents of Messenger Kids account users about a technical error that we detected affecting a small number of group chats. We turned off the affected chats and provided parents with additional resources on Messenger Kids and online safety.”
Of course, 'small number' is a relative term - at Facebook's scale, it likely means thousands of users at a minimum.
And regardless of the scale in question, the problem is that it again highlights a significant flaw in Facebook's processes, and in the company's capacity to manage information and user privacy. That's especially relevant in this case, not only because young, vulnerable users are affected, but also because contact controls are the core functionality of the app - unapproved contact is the one thing this specialized version of Messenger is specifically designed to prevent.
If Facebook can't even get this right when it makes it a specific focus, what chance is there that it can protect user information in its other applications?
That's the key issue of concern here. We already know that Facebook has a dubious track record on how it uses and protects your data, but the question now, in the modern social media landscape, is whether Facebook can learn from its past mistakes, and fix its systems to ensure the same never happens again. It may only be a lesser access issue, in terms of scale, in this instance, but the underlying fault does not look good, and doesn't bode well for Facebook's broader capability.
As noted, Messenger Kids seems like a questionable proposition in the first place, but Facebook has gone to great lengths to assure users that all is fine - just as it has with its in-home smart speaker device, and just as it will with its upcoming AR glasses and VR hardware. 'You can trust us', Facebook keeps saying, 'we're doing everything we can.'
But over and over again, we see examples of this not playing out in practice. It may be a smaller error this time around, in relative terms, but the cumulative reputational damage continues to erode faith in The Social Network.