Twitter's ongoing battle with trolls and abuse has been well documented. The issue was brought to a head again recently with the high-profile case of actor Leslie Jones, who announced she was quitting the platform in the wake of repeated attacks - the subsequent investigation of which led to a permanent ban being placed on Breitbart blogger Milo Yiannopoulos.
Over the years, Twitter's tried various measures to tackle the problem, but when you're running a 'global town square', a platform that prides itself on giving everyone a voice and enabling them to share it in real time, there will, inevitably, be issues.
Last week, BuzzFeed reporter Charlie Warzel published a 5,800-word examination of Twitter's various failings on this front, which was highly critical of Twitter's handling of abuse:
"For nearly its entire existence, Twitter has not just tolerated abuse and hate speech, it's virtually been optimized to accommodate it."
The article was so damning that it prompted Twitter to issue a statement in response, in which they suggested that much of the information was incorrect - but rather than get caught up in a back-and-forth, they noted that they'd instead focus on continued improvement.
And today, the platform has launched two new measures to help on this front.
In order to give users more ways to control their on-platform experience, Twitter has announced that they're rolling out their "Quality Filter" tool to all users.
The quality filter option has been in testing for more than a year, with selected verified users getting access to the tool last May.
"Well, that's an interesting & welcome addition, Twitter! (Was prompted about this on opening the app.) pic.twitter.com/Ka2VDvqwNf" - Anil Dash (@anildash), March 23, 2015
Twitter's quality filter uses an AI-powered algorithm to detect and eliminate questionable tweets from your timeline, including threats and offensive or abusive language. The filter also seeks to remove duplicate tweets or content that appears to be automated, taking into account a range of factors like account origin in order to remove excess junk from your tweet stream.
And importantly, the system won't filter content from people you follow or accounts you've recently interacted with.
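Twitter hasn't published how the filter works internally, but the behavior described above can be sketched as a simple pipeline. The keyword list, field names, and thresholds below are purely illustrative assumptions - the real system uses machine-learned signals, account-origin data, and far more features than shown here:

```python
from collections import Counter

# Hypothetical blocklist and threshold -- stand-ins for Twitter's
# actual (unpublished) abuse and spam signals.
ABUSIVE_TERMS = {"threat", "slur"}
DUPLICATE_LIMIT = 1  # drop repeats of identical tweet text

def quality_filter(tweets, following, interacted_with):
    """Filter a timeline per the described behavior: drop abusive,
    duplicate, or bot-like tweets, but never filter accounts the
    user follows or has recently interacted with."""
    seen = Counter()
    kept = []
    for t in tweets:
        author, text = t["author"], t["text"].lower()
        seen[text] += 1
        # Followed / recently-engaged accounts are exempt entirely.
        if author in following or author in interacted_with:
            kept.append(t)
            continue
        if any(term in text for term in ABUSIVE_TERMS):
            continue  # offensive or abusive language
        if seen[text] > DUPLICATE_LIMIT:
            continue  # duplicate content
        if t.get("looks_automated"):
            continue  # content that appears to be automated
        kept.append(t)
    return kept
```

Note how the exemption check runs before any of the quality signals, matching the guarantee that people you follow are never filtered out.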
Also note that users will now be able to access their notification settings directly from the Notifications tab on mobile.
In addition to the Quality Filter, Twitter's also giving users the option to receive notifications only from profiles they follow.
With this option, if a user is being targeted by trolls or getting comments from people they don't know, they can switch this setting on and only receive messages from the people they've actually chosen to engage with.
These are both beneficial additions, for sure, but neither eliminates abuse so much as hides it from view. That will reduce its impact - and any progress on this front is a positive - but Twitter still has a way to go before they're able to get a better handle on the problem and remove such actions entirely.
As an interesting side note, tech investor Jason Calacanis recently wrote an article for Recode in which he outlined his idea of mass verification for all Twitter users.
Calacanis' theory is that Twitter could tackle abuse by removing anonymity from the platform, with all users being verified - and registered via their real world identity and details - by default.
"What this means is you will only see users with the blue check mark, with Tweets from unverified accounts being "blurred out."
Under Calacanis' proposal, if a user clicks on a blurred-out tweet, they'd have the option to block, follow, or remove that specific comment from their stream.
Using the verification process, all Twitter profiles would need to be linked to a person's real-world identity, which would leave them more open to potential legal follow-up, and thus reduce the likelihood of them using the platform for harassment.
It's an interesting idea - again, it wouldn't eliminate Twitter's issues fully, but it would reduce that barrier of anonymity. But then again, in the case of Milo Yiannopoulos, anonymity wasn't the issue.
In the long term, the real solution for dealing with trolls and abuse could be AI. Yahoo has reportedly developed an artificial intelligence system which has been able to identify abusive comments in 90% of cases. That's a promising development, and it may signal the way forward - and no doubt Facebook is also working on a similar detection system.
For example, Instagram - which is owned by Facebook - is currently testing a new comment moderation filter which can automatically filter out potentially offensive content from your feed based on a listing of words they've identified as problematic. Instagram's also working on a system which would enable users themselves to create a list of words - or even emoji - which they don't want in their comment streams.
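A user-defined blocklist filter of this kind is straightforward in principle. The sketch below is illustrative only - the function names and matching behavior are assumptions, not Instagram's actual implementation, which hasn't been detailed publicly:

```python
def build_comment_filter(blocked_terms):
    """Return a predicate that hides any comment containing one of
    the user's chosen words or emoji (case-insensitive match)."""
    terms = [t.lower() for t in blocked_terms]

    def is_visible(comment):
        text = comment.lower()
        return not any(term in text for term in terms)

    return is_visible

# Hypothetical usage: a user blocks one word and one emoji.
visible = build_comment_filter(["loser", "🐍"])
```

The same predicate handles both words and emoji because emoji are just Unicode characters, which is presumably why Instagram can offer both in a single blocklist.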
Facebook currently has a team of moderators who sift through and remove offensive content and action user reports, a job that can have significant psychological fallout given the material these people are exposed to. As such, it's in Facebook's interests to develop better systems to handle such issues - they're already working on advanced image recognition AI which could help on this front, while The Social Network has also put a big emphasis on evolving their on-platform security options.
But as noted, Twitter's real-time stream is a little more challenging. Really, AI is the only way Twitter could stamp out all abuse, as the system needs to identify offensive content as it's posted - catching it after the fact is already too late. In the future, hopefully these new advances will be combined to form a powerful system that can end such incidents. But again, those advances are still some way off.
Tackling trolls and harassment is a major challenge in all contexts, but on social media in particular, given the huge expansion in the adoption and use of social platforms as communication tools. This is especially relevant when you consider the role social platforms now play in the communicative habits of young, more impressionable people.
An important note here - suicide is the second leading cause of death for people aged between 10 and 24 in the U.S.
It's crucial that we do all we can to combat this issue.