Twitter Announces New Trust & Safety Council to Help Tackle On-Platform Abuse
As you may have read over the weekend, the hashtag #RIPTwitter was trending on Twitter in response to a report from BuzzFeed which indicated that Twitter was looking to introduce a new, algorithm-defined timeline at some stage this week - a move that could, potentially, fundamentally alter the Twitter experience as we know it.
Twitter users were not happy, and they took to the platform to voice their opposition to the reported change, with millions of tweets sent over the weekend using the #RIPTwitter tag. The noise was so intense that Twitter CEO Jack Dorsey eventually felt compelled to respond and allay fears of any such change – whether that’s because there was no change planned or whether the intensity of the opposition to it changed Twitter’s mind, we don't know.
But amongst the various questions and criticisms, there was one tweet exchange in particular that stood out. While a great many people had a great many things to say about Twitter’s possible update, one user’s tweets took on additional significance and, inadvertently, became a critical example highlighting another problem which Twitter has struggled to get a handle on.
Those tweets came from a Senior Software Engineer at Twitter named Brandon Carpenter.
Carpenter, seeing the rising tide of angry tweets about a possible Twitter algorithm, decided to step in and urge calm among users.
Seriously people. We aren't idiots. Quit speculating about how we're going to "ruin Twitter"— Brandon Carpenter (@bhcarpenter) February 6, 2016
Evidently, Carpenter’s intuition didn’t serve him well.
Wow people on Twitter are mean— Brandon Carpenter (@bhcarpenter) February 6, 2016
In shock a bit.— Brandon Carpenter (@bhcarpenter) February 6, 2016
@mhluongo Let's just say I learned a few things today.— Brandon Carpenter (@bhcarpenter) February 6, 2016
Carpenter was merely looking to add balance to the debate and reassure users that Twitter is listening, that the company is aware of the impact of its actions, and that those actions are measured and implemented accordingly. But suddenly, and without meaning to, Carpenter became representative of the scourge of on-platform abuse that plagues many a Twitter user. He became the perfect example: even a Twitter engineer, not seeking to incite anger or fuel debate, can become a target for abuse and ridicule, attacked via tweet for his opinions. This is what Twitter security and safety advocates have been warning about for years, and a representative of Twitter itself was now experiencing the platform’s full force.
Without intending to do so, Carpenter became a symbol of all that’s wrong and that needs to be addressed on the platform, in terms of negative attention and trolls. Regardless of algorithms and advertising and emoji and Moments, Twitter’s biggest problem, to many, is on-platform abuse, and Twitter’s never been very good at tackling that issue – even former CEO Dick Costolo himself admitted: “we suck at dealing with abuse”.
In fairness, Twitter has been putting more effort into stamping out abuse and anti-social behavior on the platform. Exactly one year ago this week, Twitter, in response to Costolo’s blunt assessment, vowed to undertake change, and began doing so only a month later with the unveiling of new tools to simplify the process of using tweets as evidence when reporting concerns to authorities. A month after that, Twitter re-worded their policies on threats to cover more types of abuse, and they followed that up by banning several prominent trolls, including well-known internet commentator Chuck Johnson. Such changes proved Twitter was taking the issue seriously, but on-platform abuse remained - and still remains - an issue, largely because policing the content of half a billion tweets, every day, is a monumental task, and one which, realistically, Twitter cannot tackle on its own.
Given this, Twitter has this week announced the creation of a new ‘Trust & Safety Council’, a collection of members from more than 40 organizations, as well as experts from 13 regions, who’ll assist Twitter in the development of products, policies, and programs to combat negative behaviors and help ensure that people feel safe expressing themselves on the platform.
“Our Trust & Safety Council will help us tap into the expertise and input of organizations at the intersection of these issues more efficiently and quickly. In developing the Council, we are taking a global and inclusive approach so that we can hear a diversity of voices from organizations including:
- Safety advocates, academics, and researchers focused on minors, media literacy, digital citizenship, and efforts around greater compassion and empathy on the Internet;
- Grassroots advocacy organizations that rely on Twitter to build movements and momentum;
- Community groups with an acute need to prevent abuse, harassment, and bullying, as well as mental health and suicide prevention.”
Among the groups taking part in the new initiative are Beyond Blue, Bravehearts, Crisis Text Line, Family Online Safety Institute, GLAAD, Hollaback, ICT Watch, National Cyber Security Alliance, National Domestic Violence Hotline and NetSafe.
It’s an important initiative, and it’s great to see such a wide range of organizations represented, which will help Twitter develop more inclusive and intelligent policies relevant to each specific case, whilst also providing a broader base of research and expert ideas on how to combat these problems.
A Crucial Concern
While it may not get the focus of new product launches and visual additions, on-platform safety is a major issue, and one which no doubt has a significant bearing on Twitter’s growth and its appeal to new users moving forward. Indeed, in research we conducted last year, the numbers indicated that Twitter abuse is very prominent and regularly reaches extremely troubling levels.
That evidence, along with the many reports of related cases - including high-profile examples like that of Robin Williams’ daughter Zelda, who became a target for trolls following her father’s untimely death - further underlines the scale of the problem, and the reach and potential psychological damage that can be caused by simply sending a tweet.
Such behavior cannot, and should not, be tolerated, and we at Social Media Today are strongly behind Twitter’s push to extend their capacity in this regard, and to work with this new group of organizations to develop new solutions and new systems to help tackle and eliminate on-platform abuse. Every one of us knows someone who has been impacted by bullying or other forms of abuse. It’s important that we take a stand, where possible, to ensure such attacks are not tolerated and that we, as a progressive society, look to protect and help those in need and defend freedom of expression for all.
If you witness any type of abusive or concerning behavior, please report it via Twitter’s Safety Center.
Main image via Shutterstock