Twitter Introduces New Measures to Combat On-Platform Harassment and Abuse
Way back in February, when Dick Costolo was still the man in charge at Twitter, the micro-blog giant announced that it was taking a stand against bullying and abuse on the platform.
“We suck at dealing with abuse and trolls on the platform”, Costolo said, “...and we’ve sucked at it for years”.
Costolo’s contention at the time was that the platform’s inability to get a handle on abuse was driving users away – a problem which, given Twitter’s focus on attracting new users and generating more engagement, is now more pressing than ever.
Since that announcement, Twitter’s introduced a range of updates and changes aimed at curbing on-platform bullying and anti-social behavior, including:
- Introducing measures to streamline and simplify the process of reporting abusive tweets, and making it easier to use Twitter interactions as evidence when reporting offenders to authorities
- Updating its Violent Threats Policy, giving Twitter more options for dealing with reported abusers, including the ability to freeze accounts, compel users to delete tweets and request personal identification in order to re-instate users
- Banning known trolls and taking action against repeated offenders – including the banning of controversial writer Chuck Johnson back in May
Online bullying and abuse are major concerns, and there are countless examples of the pain and suffering caused by such behaviors, so it’s great to see Twitter taking the lead in working to eliminate such actions on the platform. Now, Twitter’s taken another step, re-vamping the official ‘Twitter Rules’ to clarify what’s considered abusive behavior and hateful conduct on the platform.
Here’s what they’ve changed:
Cause and Effect
As noted in the official announcement:
“The updated language emphasizes that Twitter will not tolerate behavior intended to harass, intimidate, or use fear to silence another user’s voice. As always, we embrace and encourage diverse opinions and beliefs – but we will continue to take action on accounts that cross the line into abuse.”
Twitter’s rightfully taking this action very seriously – rather than simply refining or re-wording the original policy, it’s re-vamped the entire rules document to better reflect this emphasis and provide more clarification around what’s considered unacceptable activity.
In the previous version of Twitter’s rules, there was a section marked ‘Abuse and Spam’ which, really, was far more focused on the latter.
In the new documentation, ‘Abusive Behavior’ and ‘Spam’ now have their own, separate sections, with the ‘Abuse’ category now clearly spelled out and clarified in more detail.
Among the new additions to the policy is a new section titled ‘Hateful conduct’:
- Hateful conduct: You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.
The policy also adds a new dot point under ‘Harassment’ which makes specific note of those seeking to incite attacks by other users.
Twitter’s also included a new section on ‘Self-Harm’ and how Twitter handles such reports.
- Self-harm: You may encounter someone considering suicide or self harm on Twitter. When we receive reports that a person is threatening suicide or self harm, we may take a number of steps to assist them, such as reaching out to that person expressing our concern and the concern of other users on Twitter or providing resources such as contact information for our mental health partners.
These new definitions and guide notes will give Twitter more power to address such behaviors, and more scope to tackle a wider breadth of abuses and violations.
Further to these updates, Twitter has also spelled out how they can, and will, take action to address such violations.
“One of the areas we’ve found to be effective in this multi-layered strategy of fighting abuse is creating mandatory actions for suspected abusive behavior, such as email and phone verification, and user deletion of Tweets for violations. These measures curb abusive behavior by helping the community understand what is acceptable on our platform.”
As detailed in the image sequence, the violating user is initially locked out of their account for a defined period, then asked to provide a phone number to verify their identity. Once they’ve confirmed that number by entering an SMS code, they’re asked to delete the offending tweets before their account is re-activated. This is a good way for Twitter to ensure real people are behind such accounts, and to make those users take some responsibility by attaching a mobile number to their presence.
Cyberbullying and online abuse are major concerns. A study conducted by Pew Research in 2014 found that 40% of online adults had personally experienced some type of online harassment, a figure which jumps to 70% for users aged between 18 and 24. As online media, and social media platforms in particular, become a more crucial part of our interactive and connective DNA, so too do we see an increase in the risks associated with that reliance. It’s important that we do all we can to address such issues, and to ensure we’re educating younger social media users that it’s not okay to single out others online – that abuse and bullying are not acceptable, in any form.
One of the greatest aspects of the modern age of social media and constant connectivity is that people don’t ever have to be alone. The ability to connect with a wide range of like-minded people around similar interests is greater than ever – people who, in times past, may have felt like outcasts can now find similarly-minded friends online and explore mutual interests, regardless of geographic limitations. Of course, the flip side to that is you’re never beyond reach – if you’re being targeted by bullies or trolls, the attacks can be never-ending, always just one click away, waiting for you to check-in. It’s important we recognize this new paradigm and that we do all we can to help those in need and provide tools to assist where possible.
In this sense, it’s great to see Twitter making this a focus and providing mechanisms to detect and eliminate negative behaviors.
Main image via Shutterstock