Abuse and harassment on social media are a very real, and very challenging, problem.
The problem was highlighted once again recently when actor Leslie Jones announced that she was quitting Twitter in response to repeated personal attacks - the surrounding discussion of which led to the permanent banning of Breitbart blogger Milo Yiannopoulos from the platform. Twitter, for their part, have made tackling anti-social behavior a key element of their re-invigoration strategy, but as this and many similar cases have underlined, it's difficult to host an open social network - one where you're giving everyone a platform to share their voice - and still ensure that everyone plays nice.
More than difficult, it's virtually impossible.
All social networks have to tackle this issue - the potential damage caused by such attacks can, sadly, be life-altering. On this front, Instagram, which has seen massive growth in the past few years, is now working to implement its own solution to deal with trolls and abuse - and it'll reportedly be made available to all users soon.
As you may have heard, singer Taylor Swift recently had something of a disagreement with The Wests - Kim and Kanye specifically. The gist of the disagreement was that Swift publicly criticized Kanye West for his mention of her in a recent song - even going so far as to allude to the dispute in her Grammy acceptance speech. But Kanye's wife Kim (whom you may have heard of) later produced evidence of a phone call between Swift and West in which Swift was made aware of the lyrics ahead of the song's release, and even endorsed them - which fans of the Wests saw as vindication that Swift was a liar.
Or, in emoji form, a snake.
As such, Swift's Instagram feed was flooded with snake emoji, with comments like this arriving in the hundreds across her posts.
(Those little green icons are actually snakes)
Actually, the snakes had been coming for Swift even before that - after her recent break-up with DJ/producer Calvin Harris, there were literally thousands of comments like this across all of Swift's Instagram presence.
But then, just like that, they were gone.
Within a day, all the snakes had suddenly been removed from the singer's comment streams.
As you can see, there are still plenty of mentions and variations of the term 'snake', but all instances of the emoji have disappeared. What's more, users couldn't post more snakes - a blogger over at NYMag tried posting snakes as comments on Swift's profile, and while she was seemingly able to post the little green characters, she and her colleagues weren't actually able to find the comments in Swift's comment streams.
Some users even said they were getting error messages like this when they tried to use a snake emoji on Swift's account.
This then sparked talk of a conspiracy - that Instagram was secretly working to give celebrities like Swift special treatment - but the tool used to cull Swift's snakes is actually the new anti-harassment tool the platform is looking to make available to everyone.
As we reported a few weeks back, Instagram's testing out a new comment moderation filter for brand profiles.
As you can see here, the option's built into the settings on business pages - you check the box to moderate comments and Instagram will filter out potentially offensive content from your comments feed, based on a list of words it's identified as problematic. But this is only a basic version of the platform's efforts on this front.
According to Instagram's head of public policy Nicky Jackson Colaco, they're working on a tool that would give users the ability to filter out specific comments on their images - or turn off comments completely, if they'd prefer.
Using this tool, users will be able to list words - or emoji - that they find inappropriate and Instagram will then eliminate them from their comment threads. Instagram's also looking to enable users to switch off comments on a post-by-post or whole profile basis.
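Based on that description, the mechanics are straightforward to sketch - a minimal, illustrative blocklist filter, assuming the user supplies their own list of banned words or emoji. The function names and logic here are assumptions for this example, not Instagram's actual implementation:

```python
# Illustrative sketch of a blocklist-based comment filter: the account
# owner lists terms (words or emoji) they find inappropriate, and any
# comment containing one of them is hidden from the thread.

def filter_comments(comments, blocklist):
    """Return only the comments that contain none of the blocked terms."""
    def is_clean(comment):
        lowered = comment.lower()
        return not any(term.lower() in lowered for term in blocklist)
    return [c for c in comments if is_clean(c)]

comments = ["Love this photo!", "You're such a snake", "🐍🐍🐍"]
visible = filter_comments(comments, blocklist=["snake", "🐍"])
print(visible)  # → ['Love this photo!']
```

The key design point is that the list is user-defined - the platform doesn't decide what counts as abusive on your profile, you do.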
And while they haven't officially confirmed this is the tool Swift used to cleanse her feed of snakes, the connection makes logical sense.
According to The Washington Post:
"High-profile accounts will also be the first to get the feature as it goes live in the coming weeks, as this gives Instagram the most valuable feedback in the shortest amount of time. All users will see changes to their comments in the coming months."
It's an interesting option in the battle against abuse, putting more control in the hands of the users themselves - though, as noted by The Verge, the exact options that'll be made available are yet to be confirmed, as the platform is still testing.
But then again, in the future, maybe it'll be algorithms that decide what we do and don't see on our social feeds.
While manual filtering gives users more control, Yahoo is working on a machine learning system that's shown some positive results in detecting and eliminating abusive comments. Really positive results.
"In 90% of test cases, it was able to correctly identify an abusive comment - a level of accuracy unmatched by humans, and other state-of-the-art deep learning approaches."
Yahoo's system takes a different approach than the usual filtering processes. As per The Next Web:
"[The system is] not looking for specific words, but words in combination with others, as well as overall post length, punctuation and other metrics to determine what constitutes abuse. Trained humans also rated the comments, a method used to help train the AI to spot the subtle nuances that it would have missed with a word-specific approach."
This method eliminates many of the false positives such systems encounter - if you go looking only for certain words, the algorithm can't determine whether a word is being used sarcastically, for example.
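To illustrate the difference, here's a toy sketch in the spirit of that description - scoring a comment on several combined signals (word pairs, punctuation density, comment length) rather than flagging lone words. The features and weights below are invented for illustration; Yahoo's actual system is a trained deep-learning model, not a hand-tuned rule set.

```python
import string

def extract_features(comment):
    # Strip punctuation for tokenization, but keep a count of it as a signal.
    stripped = comment.lower().translate(str.maketrans('', '', string.punctuation))
    words = stripped.split()
    punct = sum(ch in string.punctuation for ch in comment)
    return {
        "bigrams": list(zip(words, words[1:])),  # word pairs, not lone words
        "length": len(words),
        "punct_ratio": punct / max(len(comment), 1),
    }

# Hypothetical word-pair weights of the kind a trained model might learn.
ABUSIVE_BIGRAMS = {("get", "lost"), ("shut", "up")}

def abuse_score(comment):
    f = extract_features(comment)
    score = 0.6 * sum(b in ABUSIVE_BIGRAMS for b in f["bigrams"])
    score += 0.2 * (f["punct_ratio"] > 0.2)  # e.g. piles of "!!!"
    score += 0.2 * (f["length"] < 4)         # short drive-by comments
    return score

print(abuse_score("shut up and get lost!!!"))  # high score
print(abuse_score("what a lovely photo"))      # → 0.0
```

Because the score combines several weak signals, no single word triggers a match on its own - which is exactly how the approach sidesteps the false-positive problem described above.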
Yahoo's project is still in the testing phase, but it seems like a positive step - and when you consider the other advances being made in language recognition and contextual processing by Google and Facebook, it's not hard to imagine a time when such comments could be eradicated without ever reaching human eyes.
Tackling harassment, in all its forms, is a massive and important challenge - in social media particularly. This is especially relevant when you consider the role social platforms now play in the interactive habits of young, and more impressionable, people. Suicide, it's worth noting, is the second leading cause of death for people aged between 10 and 24 in the U.S.
It's crucial that we do all we can to combat this issue.