Twitter has long been criticized for its lack of action against trolls and abuse, which many people would say significantly detracts from the Twitter experience. The problem, from Twitter’s perspective, is that many of the complaints about this type of behavior relate to issues which are not in violation of Twitter’s rules – just because you don’t like something, that doesn’t necessarily mean another user should be punished.
But it’s obviously a problem, and one Twitter is determined to work out - yet if they can’t use their regular suspensions and bans to help bring the wider community into line, how can the company create a more civil, engaging atmosphere, without restricting free speech?
The answer – or at least part of it – could lie in their new algorithm update, which will limit the exposure of tweets from accounts which see regular complaints.
As explained by Twitter:
“Today, we use policies, human review processes, and machine learning to help us determine how Tweets are organized and presented in communal places like conversations and search. Now, we’re tackling issues of behaviors that distort and detract from the public conversation in those areas by integrating new behavioral signals into how Tweets are presented. By using new tools to address this conduct from a behavioral perspective, we’re able to improve the health of the conversation, and everyone’s experience on Twitter, without waiting for people who use Twitter to report potential issues to us.”
The new changes, as noted, will only affect the presentation of tweets in search results and ‘public conversation’ – so, tweets within a larger reply or hashtag-based stream, not on individual profiles or within the timelines of your direct followers.
The updated signals Twitter will take into account on this include:
- Whether you tweet at large numbers of accounts you don’t follow
- How often you’re blocked by people you interact with
- Whether you’ve created many accounts from a single IP address
- Whether your account is closely related to others that have violated Twitter's terms of service
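Twitter hasn't disclosed how these signals are weighted or combined, but conceptually they feed into a single ranking decision. Purely as an illustration, here's a minimal sketch of how behavioral signals might be combined into a demotion score – every signal name, weight, and threshold below is invented for the example, not Twitter's actual model:

```python
# Hypothetical sketch: combining behavioral signals into a single
# "conversation health" score. All weights and thresholds here are
# invented for illustration -- Twitter has not published its model.

def health_score(signals):
    """Return a score in [0, 1]; lower means more likely to be demoted."""
    weights = {
        "unsolicited_mention_rate": 0.3,  # tweeting at accounts you don't follow
        "block_rate": 0.3,                # how often interactors block you
        "accounts_per_ip": 0.2,           # many accounts from a single IP
        "linked_violations": 0.2,         # ties to rule-violating accounts
    }
    # Each signal is assumed to be normalized to [0, 1] before scoring.
    penalty = sum(weights[k] * min(signals.get(k, 0.0), 1.0) for k in weights)
    return 1.0 - penalty

def is_demoted(signals, threshold=0.5):
    """Demote (hide behind 'View more results') if the score is too low."""
    return health_score(signals) < threshold

# An ordinary account scores well; a spam-like pattern gets demoted.
ordinary = {"unsolicited_mention_rate": 0.05, "block_rate": 0.02}
spammy = {"unsolicited_mention_rate": 0.9, "block_rate": 0.8,
          "accounts_per_ip": 1.0, "linked_violations": 0.5}
```

The key point the sketch captures is that no single signal triggers a penalty – it's the combination of behaviors that pushes an account below the line, which is also why Twitter argues the system is hard to game with any one tactic.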
The idea is to use these measures as a means to detect those accounts which detract from the broader conversation, including bots and scammers looking to cheat their way to increased Twitter exposure.
And if you do fall foul of these measures, the reach impacts could be significant – your tweets won't be shown by default in public conversations or search. They won't be removed (as they don't violate Twitter's rules), but they'll be hidden behind a 'View more results' note - which, by Twitter's thinking, will make the conversation better and more engaging.

And they might be right – based on the example Twitter has shared, you can see how removing those questionable replies would make for a better stream.
But there are potential flaws here too - maybe not enough to undermine Twitter's broader effort to eliminate such negative behaviors, but concerns nonetheless.
First, the positives – anyone who's followed a hashtag stream on Twitter in recent times will know it's almost pointless trying to stay up to date with a major live event via tags, as they quickly get flooded with bots and junk. This new system could help fix that, detecting these questionable accounts and hiding their tweets from view - which would definitely help improve Twitter's newsworthiness.
There’s also, as highlighted in Twitter’s example, the benefit of improved discussion threads – really, it’s become something of a competition for certain operators to try and get the top comment on Donald Trump’s tweets, for example, in order to boost their exposure. Given many of these accounts would also fall foul of Twitter’s new regulations, it could see them disappear, again improving the discourse in the app.
But as noted, there are definitely some concerns, and social media marketers need to take note.
For one, this new system will be automated, and the affected accounts won't (at this stage) be informed when they've been restricted. That means that if you fall foul of the system, you won't even know – and because it's automated, not human-reviewed, it could open the door to competitors deliberately triggering reach penalties against you.
How? By reporting you. What if a competitor wanted to limit your reach, and instead of buying followers, they paid some shifty provider to mass report your account? You'd think enough reports might see you penalized – the absence of any actual violations may rule this out, but it remains a potential concern (note: Twitter says that the breadth of measures taken into account should stop this from happening).
It will also mean that marketers will need to be more wary about tapping into trending news streams. Oreo made trendjacking a mainstream social media marketing tactic (if it wasn't already) with their 'dunk in the dark' tweet during the 2013 Super Bowl, and it's since become a key way to boost your reach with targeted, themed content.
But a lot of those tweets don’t hit the mark – this new system will make trendjacking more risky, because if you do fail to connect, that could see you reported, blocked by individuals, and your reach then reduced as a result.
The answer here, of course, is that you should only tap into relevant trends, but even then, it does mean a higher level of risk.
The other concern, in addition to accounts not being notified when they're penalized, is that their tweets will remain restricted until Twitter deems them worthy again. Twitter acknowledges it has some work to do on this front – penalized accounts will need to know when they're being penalized, and how they can recover from such penalties.
Overall, though, the new regulations should only impact a small number of accounts. Twitter says that less than 1% of accounts make up the majority of reports, and that these few accounts disproportionately detract from the user experience. As such, the restrictions shouldn't be widely felt, but their effects should be broadly noticed, improving the experience.
Twitter also notes that, in testing, the new system has resulted in 4% fewer abuse reports from search and 8% fewer from conversations. The benefits appear to outweigh the potential negatives, but it's still a significant shift for the platform, and it's worth monitoring the actual impacts, and being aware of the changes as they roll out.