Twitter Introduces New Tools to Reduce the Impact of Trolls and Abuse
Last week, Twitter's VP of Engineering Ed Ho announced that the platform would be ramping up its efforts to combat trolls and abuse, noting that it needed to show more progress on this front:
We heard you, we didn't move fast enough last year; now we're thinking about progress in days and hours not weeks and months. - Ed Ho (@mrdonut) January 31, 2017
Ho's statements sounded strong, and were backed up by CEO Jack Dorsey. Maybe Twitter was finally going to take the lead on this and work to meaningfully reduce the impact of on-platform harassment.
That hope was slightly dampened however when a few days later, Twitter announced this:
We heard your feedback. You can now report Tweets that mention you, even if the author has blocked you. Learn more: https://t.co/pTIoUbo674 - Twitter Safety (@TwitterSafety) February 1, 2017
A slightly underwhelming start - basically, it means you can now report people who are harassing you even if you can't see their tweets, a long-standing flaw in the system.
But while this wasn't the ground-breaking progress many had hoped for, it was still a start, and a positive step to see Twitter actively working towards stamping out anti-social behavior.
Furthering this new push, today (which is also Safer Internet Day), Twitter has announced three new measures to combat trolls and abuse. And while they're still not necessarily game-changers, they do underline that Twitter is taking action and working to solve one of its core problems.
Here's what's been announced.
1. "Stopping the creation of new abusive accounts"
The first measure goes to the heart of one of Twitter's biggest problems - that being that abusers, once reported, can simply open up a new account and start harassing you again.
To combat this, Twitter has announced that they're "taking steps to identify people who've been permanently suspended and stop them creating new accounts".
Now, how Twitter might actually go about doing this is not explained - Twitter isn't providing any details, as doing so would likely help those trying to circumvent the system. Possible options include detecting a user's IP address and blocking new accounts created from it (though this may not be possible in all cases), along with measures to identify similar accounts created shortly after a suspension has been implemented.
As noted, there are ways to circumvent such processes, which is why Twitter's not sharing the details, but if effective, the new process could help Twitter eliminate repeat offenders - and may even provide a way for the platform to get rid of fake accounts used by bot traders to boost follower counts.
Bot traffic has come under increased scrutiny in the wake of the recent US Presidential Election, with a recent study identifying huge networks of fake accounts being used to send spam and boost interest in trending topics. If Twitter can improve its methods of identifying the sources of such traffic, it may be able to tackle this problem too - though given the market's emphasis on active users, there's a question as to whether it's in Twitter's interest to eliminate such fakes, beyond abusive profiles.
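Twitter hasn't said how it matches new signups to suspended users, but the general idea can be sketched as checking whether a new account shares identifying signals with previously banned ones. This is purely a hypothetical illustration - the signal names (IP address, hashed email) and logic below are assumptions, not Twitter's actual method:

```python
# Hypothetical sketch: flag new signups that share identifying signals
# (e.g. IP address, hashed email) with permanently suspended accounts.
# The signals and matching logic are assumptions for illustration only.

SUSPENDED_SIGNALS = {
    ("ip", "203.0.113.7"),
    ("email_hash", "a1b2c3"),
}

def looks_like_repeat_offender(signup: dict) -> bool:
    """Return True if any signal from the new signup matches one
    previously recorded against a permanently suspended account."""
    signals = {
        ("ip", signup["ip"]),
        ("email_hash", signup["email_hash"]),
    }
    # Set intersection: any overlap with suspended-account signals is a match.
    return bool(signals & SUSPENDED_SIGNALS)

new_account = {"ip": "203.0.113.7", "email_hash": "zz99"}
print(looks_like_repeat_offender(new_account))  # True: IP matches a suspended account
```

In practice any single signal (like an IP) can be shared by innocent users or changed by determined abusers, which is presumably why Twitter would combine many signals - and why it isn't disclosing them.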
2. "Introducing safer search results"
Twitter's also introducing a new 'safe search' option which "removes Tweets that contain potentially sensitive content and Tweets from blocked and muted accounts from search results."
You'd think the removal of accounts you've blocked and/or muted from search would be a given, but evidently not.
The new option will give users the ability to eliminate these results from their search experience - you'll still be able to find such content if you want to, but, according to Twitter, "it won't clutter search results any longer".
Users will be able to control these search filters once they're made available.
3. "Collapsing potentially abusive or low-quality Tweets"
And the last addition Twitter's announced is a new, algorithm-defined way to order tweet responses, hiding "potentially abusive and low-quality replies".
Twitter's using machine learning to identify lower quality tweets, using qualifiers like the date an account was created, follower to following ratio, and other spam detection measures to categorize the originating author and filter their replies accordingly.
As you can see from the above GIF, you'll still be able to see these 'low quality' responses - they'll just be hidden behind a 'Show less relevant replies' prompt, similar to your junk e-mail folder. Given this, Twitter's not actually eliminating abusive content, just hiding it from view, but as these filters will be applied to all accounts, the results should have a meaningful effect. Giving these accounts no visibility will reduce their presence, and hopefully, their motivation to tweet such comments.
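The signals Twitter names - account creation date and follower-to-following ratio - suggest a scoring approach along these lines. This is a toy heuristic, not Twitter's actual machine-learning model; the threshold, weights, and function names are all assumptions for illustration:

```python
# Toy heuristic, not Twitter's actual model: score a reply's author on the
# signals the announcement mentions (account age, follower-to-following
# ratio) and collapse replies whose score falls below a threshold.

from datetime import date

def author_quality_score(created: date, followers: int, following: int,
                         today: date = date(2017, 2, 7)) -> float:
    account_age_days = (today - created).days
    ratio = followers / max(following, 1)
    # Very young accounts that follow far more people than follow them
    # back match common spam patterns, so both signals lower the score.
    return min(account_age_days / 365, 1.0) * 0.5 + min(ratio, 1.0) * 0.5

def collapse_reply(score: float, threshold: float = 0.3) -> bool:
    """True means the reply is hidden behind 'Show less relevant replies'."""
    return score < threshold

# A week-old account following 2,000 users with only 10 followers:
s = author_quality_score(date(2017, 1, 31), followers=10, following=2000)
print(collapse_reply(s))  # True: collapsed as likely low-quality
```

A real system would blend many more signals (report history, tweet content, engagement patterns) and learn the weights from labeled data rather than hard-coding them.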
These are Twitter's latest efforts in their ongoing push to eliminate trolls and abuse, a problem that's plagued the platform for years. Indeed, reports last year suggested that potential suitors opted against making any serious bids to buy Twitter largely because of the platform's abuse problems and the potential damage they could cause.
On this, it's also interesting to note that Twitter shares have increased to their highest levels in recent months on the back of today's announcement.
Twitter also introduced the ability to mute specific words from your timeline back in November, as well as an AI-powered quality filter in August which can detect and eliminate questionable tweets from your timeline.
These measures show that Twitter is working to address the problem, and that they're actively seeking new solutions to one of social media's biggest pain points. Those solutions aren't easy - there's no simple way to censor a user's timeline, especially given the real-time engagement that makes Twitter what it is.
But Twitter is trying to find answers and ways to improve community safety.
And they're not done yet - as noted by Ho:
We'll be rolling out a number of product changes in the days ahead. Some changes will be visible and some will be less so. - Ed Ho (@mrdonut) January 31, 2017
These new measures are not a magic bullet to eliminate trolls and abuse, but such an option simply doesn't exist. Hopefully through the accumulation of tools and options - and increased transparency on their efforts - we'll see Twitter take positive steps towards building a safer environment for all users.
Twitter's new safety measures will be rolled out in the coming weeks.
Follow Andrew Hutchinson on Twitter