Twitter Adds New 'Mute Words' Tool, New Processes to Combat On-Platform Abuse
One of the five key pillars of focus in Twitter's reinvigoration plan, as outlined by CEO Jack Dorsey, is to 'ensure users feel safe to express themselves'. As has been well-documented, tackling on-platform abuse and harassment has been a major challenge for the platform, one repeatedly brought into the spotlight by high-profile incidents that underline Twitter's failings in this regard. The problem has become such a concern that when Twitter recently met with potential buyers of the company, several reportedly withdrew their interest over concerns about Twitter's troll problem.
And the problem, of course, is far bigger than monetary and reputational concerns - suicide remains the second leading cause of death for people aged between 10 and 24 in the U.S., an age bracket that incorporates Twitter's primary audience demographic. When we're talking about trolls and abuse, it's not just the high-profile cases we need to address. It's an issue we all need to approach with the highest concern.
As such, Twitter's working to provide more options. It introduced a new 'Quality Filter' tool back in August, which uses an AI-powered algorithm to detect and eliminate questionable tweets from your timeline, including threats and offensive or abusive language, along with a new way to filter your notifications.
And today, Twitter's taking the next steps towards combating abuse, highlighted by a new, expanded mute feature which enables users to block out any words, phrases, hashtags, @handles and emojis that they don't want to see.
Want to stop getting notifications for Tweets that contain certain words, usernames, or hashtags? We're giving you that control. pic.twitter.com/awoNHUYbTG - Safety (@safety) November 15, 2016
As explained by Twitter:
"Twitter has long had a feature called "mute" which enables you to mute accounts you don't want to see Tweets from. Now we're expanding mute to where people need it the most: in notifications. We're enabling you to mute keywords, phrases, and even entire conversations you don't want to see notifications about, rolling out to everyone in the coming days. This is a feature we've heard many of you ask for, and we're going to keep listening to make it better and more comprehensive over time."
As we've noted previously, it's not a perfect solution - it doesn't stop such abuse from happening, and users can get around it by changing the spelling or using different tactics (you can see in the Taylor Swift example above that the snake emoji has been muted, but the words 'cobra' and 'ekans' are still present). But it is a model that's shown promise elsewhere - Facebook introduced similar keyword blocking back in 2011.
But Twitter's also taking this a step further - in addition to keyword blocking, Twitter's also adding a new option which will enable users to mute entire conversations.
The tool will enable users to stop receiving notifications from a specific Twitter thread without removing the thread from their timeline or blocking anyone. Users will be able to mute any conversations in which they're included (where their @handle is mentioned).
The options will provide users with more ways to filter and customize their Twitter experience, and indeed, to feel safer. But as noted, it's not a complete solution - those abusive comments will still exist, even if hidden from view.
To combat the issue on a deeper level, Twitter's also added a new 'hateful conduct' reporting option. Now, when users go to report a tweet, they'll see a new 'directs hate' option to denote why the tweet in question is harmful.
Twitter's also retraining its support teams, with special sessions on 'cultural and historical contextualization of hateful conduct', and is implementing an ongoing refresher program to ensure those lessons stay up to date - this is especially important as such language and terms are always evolving.
"We've also improved our internal tools and systems in order to deal more effectively with this conduct when it's reported to us. Our goal is a faster and more transparent process."
Tackling online abuse is a major challenge, and one all platforms need to address. As noted, the impacts of such actions can be devastating, and it's important that victims feel they're not alone, and that they can seek support and subsequent action. On this front, Twitter's new tools are a step in the right direction. They won't eliminate abuse - that may be impossible - but with these steps, Twitter's showing that it's taking the issue very seriously, and that it's working to build a more inclusive, safe space for its users.
In future, there may be other solutions - some have suggested that Twitter could reduce abuse by removing anonymity from the platform, while advances in AI may eventually make it possible to detect and eliminate such content before it's even seen. But one thing that has become very clear in recent times is that there are many people with vastly different opinions and viewpoints in the world, and we're never going to all agree and get along. We can hope to advance such conversations and use the connectivity of social networks to gain more understanding and perspective, but we also need to accept that conflicts will always be present. As such, we all need to do what we can to speak up when we see people crossing the line, and to help out when we see others in need.
The platforms themselves will continue to evolve and advance their tools and options on this front, but we, as users, can also help by flagging and reporting such incidents.
Follow Andrew Hutchinson on Twitter