With U.S. President Donald Trump being a big fan of Twitter, and its capacity to deliver his message direct to the public, various Trump tweets have caused significant angst – particularly barbs like this directed towards hostile foreign leaders:
Just heard Foreign Minister of North Korea speak at U.N. If he echoes thoughts of Little Rocket Man, they won't be around much longer!— Donald J. Trump (@realDonaldTrump) September 24, 2017
Given Trump’s propensity to push boundaries, many have questioned why Twitter doesn’t ban him. ‘Surely potential incitement of a nuclear war is reason enough to remove someone from the platform’. Right?
Well, no, apparently. With Twitter coming under more scrutiny of late over their internal policies, following another raft of strange decisions, the platform has this week released an amended policy document which aims to explain exactly why ‘newsworthy’ tweets, like those from the President, are not judged in the same way as others.
As explained by Twitter:
“To help ensure people have an opportunity to see every side of an issue, there may be the rare occasion when we allow controversial content or behavior which may otherwise violate our Rules to remain on our service because we believe there is a legitimate public interest in its availability. Each situation is evaluated on a case by case basis and ultimately decided upon by a cross-functional team.”
Essentially, Twitter’s admitting that users are right – such tweets do violate its terms – but because the platform’s aiming to be a source of real-time news, of what’s happening right now, it judges each individual case on its merits.
Twitter says it assesses such cases based, primarily, on three key factors:
Public impact of the content – “A topic of legitimate public interest is different from a topic in which the public may be curious. We will consider what the impact is to citizens if they do not know about this content. If the Tweet does have the potential to impact the lives of large numbers of people, the running of a country, and/or it speaks to an important societal issue then we may allow the content to remain on the service. Likewise, if the impact on the public is minimal we will most likely remove content in violation of our policies.”
Source of the content – “Some people, groups, organizations and the content they post on Twitter may be considered a topic of legitimate public interest by virtue of their being in the public consciousness. This does not mean that their Tweets will always remain on the service. Rather, we will consider if there is a legitimate public interest for a particular Tweet to remain up so it can be openly discussed.”
Availability of coverage – “Everyday people play a crucial role in providing firsthand accounts of what’s happening in the world, counterpoints to establishment views, and, in some cases, exposing the abuse of power by someone in a position of authority. As a situation unfolds, removing access to certain information could inadvertently hide context and/or prevent people from seeing every side of the issue. Thus, before actioning a potentially violating Tweet, we will take into account the role it plays in showing the larger story and whether that content can be found elsewhere.”
The explanations make sense, but they don’t necessarily make Twitter’s enforcement process any less opaque.
Under these parameters, Twitter could still leave or remove almost anything it wanted, and simply refer people back to this list. There’s no way to definitively argue against some of these considerations, so, pretty much, Twitter’s giving itself license to rule as it sees fit, with these principles as its guideline.
That said, the extra context should help users understand that Twitter’s not going to remove certain tweets, no matter what they might think. There are exceptions and exemptions – reasons why Twitter won’t intervene – and the rules now state as much, in black and white.
There are also, of course, other reasons why Twitter wouldn't want to take action against President Trump and his tweets. While Twitter has previously noted that the ‘Trump effect’ has been minimal, in terms of increasing usage, there’s clearly a lot of public interest in Trump’s thoughts, and that, no doubt, brings a lot of attention to Twitter.
On one hand, it’s definitely good that Twitter's making the effort to explain its policies, but on the other, the lack of definitive markers doesn’t really clarify much at all.
It's a similar story with account verification - last week, Twitter announced that it was pausing verification applications in order to clarify what verification means, following the approval of a white supremacist's account.
This week, Twitter has taken the next step, announcing that it'll be reviewing all verified accounts and removing the blue tick from those it deems not in line with its semi-clarified guidelines.
5 / We are conducting an initial review of verified accounts and will remove verification from accounts whose behavior does not fall within these new guidelines. We will continue to review and take action as we work towards a new program we are proud of.— Twitter Support (@TwitterSupport) November 15, 2017
Which means Twitter will now be taking away people's verification badges - badges it has already approved - without having to provide any definitive explanation why.
The update, intended to clarify the verification process, actually seems to confuse it even more, and will no doubt lead to additional disputes and questions.
Twitter has vowed to be more open about such decisions, and is planning to release more information on other policy matters. Hopefully, in future, those descriptions will more clearly explain what users can expect.