With misinformation about COVID-19 ramping up, YouTube is taking a tougher stance on videos that include false health claims or information that runs contrary to official health advice.
In an interview with CNN, YouTube CEO Susan Wojcicki has outlined the increased action YouTube is now taking on this front, with the provision of accurate, timely information playing a key role in halting the spread of the virus and protecting communities.
As per Wojcicki:
"Of course, anything that is medically unsubstantiated - so people saying like 'take vitamin C, take turmeric, we’ll cure you' - those are examples of things that would be a violation of our policy. Anything that would go against World Health Organization recommendations would be a violation of our policy."
Wojcicki notes that this is not an expansion of YouTube's existing regulations as such - the platform has always had rules against misinformation in place - but YouTube is now taking a more proactive approach to removing such content, in order to protect the public interest.
Both YouTube and Facebook have been ramping up their efforts to halt the spread of coronavirus misinformation - which, no matter how you look at it, is now a critical area of response to a crisis of this type. A study by Pew Research in 2018 found that 68% of Americans now get at least some of their news content from social platforms, with Facebook and YouTube being the leading social news sources.

Facebook, in fact, is now a bigger facilitator of news and information than newspapers - so while it may not feel like you, personally, are getting your news input from online sources, clearly, a significant number of people are. Which puts both YouTube and Facebook in a position of responsibility in connecting people to accurate, relevant reportage.
But then again, Facebook, in particular, has sought to maintain a more 'hands-off' role in this respect. Facebook's view, at least with respect to claims in political ads, is that it should not be the arbiter of what's true and what's not, and that people should be free to discuss what they like on its platform. There are certain lines that can't be crossed, of course, with regard to speech that's likely to cause harm. But in terms of divisive debate, Facebook would prefer to stay out of it, and not take sides in any such discussion.
As such, it's interesting to see how the platforms are responding to COVID-19, a situation that requires definitive action in the public interest.
And that has led to conflict with some user groups - for example, both Facebook and YouTube recently announced that they would remove content which suggested that 5G was facilitating the spread of COVID-19.
Wojcicki made specific note of this action in her interview with CNN:
"No established health organization says that 5G is the source of the issue, so we quickly deemed that a violation of our policies and removed that content."
In many ways, that's a clear-cut violation - a false medical claim that can cause potential harm - but the move has led to accusations of political bias, and 'big brother'-like tactics to censor discussion online.
This is a space that neither platform wants to be in, potentially alienating certain user groups, which could, eventually, see those users heading away from their platforms in favor of alternative apps with more relaxed policies on free speech. That runs counter to the domination plans of Facebook and Google - in order to maintain their position at the top of the social heap, they need to maximize engagement, and part of that involves providing a space where all perspectives can be heard, boosting discussion.
But not all perspectives should be heard. Some perspectives can be dangerous, some can lead to violence. Giving everyone a voice also means that, at times, you're inevitably going to amplify movements that should not be given the space to grow.
But who makes the call on that? In the case of COVID-19, it's fairly clear-cut - there's a clear public health need to limit the reach of certain movements. That's less obvious on other divisive issues, like, say, climate change or immigration policy - yet, facilitating discussion on these fronts could, theoretically, be just as damaging.
So how does Facebook or YouTube decide on what it needs to take action on, and what it doesn't?
YouTube's increased action to remove misinformation around COVID-19 once again highlights that it is possible for the platforms to do more to reduce the spread of harmful misinformation online - so long as the platforms themselves agree that the content is, indeed, harmful enough to warrant stronger action.