After months of concern over questionable content linked to YouTube channels aimed at kids, the platform this week announced a new crackdown, removing more than 400 channels and their related comments after new examples highlighted potential pedophile activity within the app.
The move comes after the release of a video by YouTuber Matt Watson, which showed how searching for terms like “bikini haul” on the platform can lead to videos of children that feature predatory messages in the comment sections.
Following the video's release, companies including Epic Games, Nestlé and Disney pulled their ads from YouTube, while many more called on the company to investigate as they reconsidered their own ad spend. In response, YouTube has removed large groups of accounts and their related activity, and has vowed to strengthen its detection tools on this front.
As per Adweek, YouTube also sought to reassure its major ad partners through direct meetings, and sent out a memo detailing its planned next steps:
"According to several parties with direct knowledge of the matter, the Alphabet-owned company held a conference call with representatives from all major ad agency holding companies, as well as several unnamed advertisers."

This also comes after the platform's release earlier this week of an updated explanation of how its community guidelines violation processes work, which seeks to clarify how it enforces its rules. YouTube has also been working to remove recommendations for 'borderline' content, which can facilitate the spread of questionable movements.
It's clear that YouTube is being forced to take more action on such content as its business interests take a hit. That's a positive, in that more questionable material will be removed, but also a concern, in that it took advertiser pressure to force the company's hand.
Will this be the way it goes for all platforms? Will Facebook be forced to reconsider its position on similar content, or the role it plays in the spread of fake news, if advertisers threaten to withhold future spend?
Social platforms have long sought to distance themselves from 'editorial' decisions like this, preferring to let their algorithms show people more of what they want, whatever that may be. But disturbing cases like this could be building a broader movement against that 'hands-off' approach, and against sole reliance on algorithms to keep users engaged.
Of course, all platforms were put on notice by the revelations of electoral interference in the lead-up to the 2016 US Presidential Election, and all have stepped up their efforts on that front. It now seems that this action is spreading to other, equally important (if not more important) areas, which is a positive shift, even if it's driven by business impacts.
Either way, this does seem set to become a bigger focus moving forward. Social platforms now play a key role in the spread of questionable content, and while they've been able to distance themselves from it until now, the pressure is rising for more action to be taken.