In recent months, Twitter has been upping its efforts to remove spam and robo-profiles from its platform as it seeks to halt the distribution of fake news via tweet.
The most notable action on this front was the removal of inactive profiles from follower counts back in July, which saw many prominent users lose large chunks of their followings. Beyond that, Twitter says it's now detecting and challenging more than 9 million potential spam profiles per week, which, according to its Q2 report, has resulted in a significant decline in spam reports from users.
And with the US Midterms coming up, Twitter has this week announced its next push on this front: a new set of rules that will impact profile creation and content distribution. The changes may also provide more avenues for appeal for users who've had their image stolen, which has thus far not, in itself, been a breach of Twitter's rules.
The updated rules are as follows. As explained by Twitter:
"We have heard feedback that people think our rules about spam and fake accounts only cover common spam tactics like selling fake goods. As platform manipulation tactics continue to evolve, we are updating and expanding our rules to better reflect how we identify fake accounts, and what types of inauthentic activity violate our guidelines. We now may remove fake accounts engaged in a variety of emergent, malicious behaviors. Some of the factors that we will take into account when determining whether an account is fake include:
- Use of stock or stolen avatar photos
- Use of stolen or copied profile bios
- Use of intentionally misleading profile information, including profile location"
Thus far, Twitter hasn't viewed impersonation in this way.
As per Twitter's current rules on impersonation:
"An account will not be removed if:
- The user shares your name but has no other commonalities, or
- The profile clearly states it is not affiliated with or connected to any similarly-named individuals.
Accounts with similar usernames or that are similar in appearance (e.g. the same avatar image) are not automatically in violation of the impersonation policy. In order to be impersonation, the account must also portray another person in a misleading or deceptive manner."
The new rules appear to be an upgrade of this, giving Twitter more scope to act on reports as they see fit, but the focus is on fake profiles, and the technicalities here are still a little vague.
That likely means that, if you spot someone using your profile image and report them, Twitter may still respond, as it does now, that they're not violating the rules, and that'll be it (Twitter rarely provides any extra detail on such reports).
What this change is more likely to be focused on is tweet replies like this (in response to an actual tweet from Trump).
At a glance, this looks like it's from Trump - it uses his profile image and his name. But it's clearly not from Trump's actual account. Not everyone would notice that detail, though, especially if the tweet copy also mimicked Trump's style.
More commonly, profiles like this tweet out links like "a gift to my friends, click here", which, inevitably, leads users to a scam site. With this new rule change, Twitter's basically giving itself more capacity to get rid of fake profiles like this, with more rules to point to, rather than having to find more specific, technical qualification for removal.
"As per the Twitter Rules, if we are able to reliably attribute an account on Twitter to an entity known to violate the Twitter Rules, we will take action on additional accounts associated with that entity. We are expanding our enforcement approach to include accounts that deliberately mimic or are intended to replace accounts we have previously suspended for violating our rules."
This has been a common problem on Twitter - because users can simply create another account, bans and removals have only acted as a temporary annoyance, rather than a deterrent for such behavior.
Twitter has upgraded its policies to tackle such misuse before, adding detection tools which can better determine whether accounts are being created from the same IP address, and limiting a user's capacity to simply start a new account. This new measure adds another element to this, providing more options for Twitter to take action on detected activity.
Distribution of hacked materials
"Our rules prohibit the distribution of hacked material that contains private information or trade secrets, or could put people in harm’s way. We are also expanding the criteria for when we will take action on accounts which claim responsibility for a hack, which includes threats and public incentives to hack specific people and accounts. Commentary about a hack or hacked materials, such as news articles discussing a hack, are generally not considered a violation of this policy."
Again, this seems like more of a 'widening the net' update - rather than introducing a new, technical rule, it expands the scope of the current options Twitter has at its disposal, giving its moderators more ways to limit the impact of such activity.
Though, of course, as with all Twitter's changes, there are some violations which you'd expect would still get a pass. If President Trump had called on Russian hackers to find Hillary Clinton's lost emails via tweet, would that get banned under these new rules? Technically, it should, but its newsworthy nature would likely put it into a different category.
That's where Twitter's rules often start to crumble under pressure - with 335 million users across the world, there are a heap of gray areas like this, smaller, potential violations that may not qualify, and that can be frustrating for users who don't understand why, exactly, Twitter chooses when and when not to take action.
And while Twitter's broader efforts to remove questionable actors, particularly in relation to elections, should be applauded, the platform's rules still feel a little loose, and a little unclear on the specifics - which will no doubt be demonstrated in practice.
That said, the fact that Twitter has introduced new rules shows that it's listening, and that it's paying attention to such concerns. Now it just comes down to enforcement - and enforcement at global scale is never easy to enact.