After the option was first spotted in testing last month, Twitter has now confirmed that it will soon enable users to hide tweet replies as part of its ongoing efforts to improve on-platform discussion and keep users safe.
The option, uncovered by reverse engineering expert Jane Manchun Wong, would enable users to hide any reply of their choosing on their own tweets.

Twitter has confirmed that it will be 'experimenting' with the option from June, though it's not clear whether that will be an official feature roll-out, or a test limited to the twttr beta app or a similar environment.
The announcement was included in the notes of a new update from Twitter on its ongoing efforts to reduce spam and abuse, and to better protect users from the same.
And those efforts are producing results - according to Twitter:
- 38% of abusive content that’s enforced is now surfaced proactively to our teams for review, instead of relying on reports from people on Twitter.
- 100,000 accounts have been suspended for creating new accounts after a suspension during January-March 2019 - a 45% increase from the same time last year.
- We're now responding 60% faster to appeal requests with our new in-app appeal process.
- 3x more abusive accounts are being suspended within 24 hours after a report compared to the same time last year.
- 2.5x more private information is being removed with a new, easier reporting process.
On the first stat, Twitter says that its automatic detection systems are now getting much better at alerting its teams to potential rule violations before users have even reported them.
"People who don’t feel safe on Twitter shouldn’t be burdened to report abuse to us. Previously, we only reviewed potentially abusive Tweets if they were reported to us. We know that’s not acceptable, so earlier this year we made it a priority to take a proactive approach to abuse in addition to relying on people’s reports."
That's what's led to 38% of potentially abusive content now being flagged for review without user reports - which is great, and something that's been long overdue on Twitter. But it also means more work for Twitter's moderation teams, and content review is not exactly the most pleasant or rewarding vocation.
Human moderation is also a finite resource, not something Twitter can easily scale without significant expense. Indeed, while Facebook now has more than 30,000 employees working on safety and security (around half of them content reviewers), Twitter has fewer than 4,000 employees in total. Those figures may not include external contractors, but it's fairly safe to assume that Twitter's moderation team is only a small fraction of the size of Facebook's.
Twitter is, of course, much smaller than Facebook in terms of overall audience (Twitter's 321 million MAU equate to around 14% of Facebook's roughly 2.38 billion), but that's still a lot of reports to deal with. It's good that its automated detection systems are improving, but Twitter also needs its machine learning tools to do a lot of the heavy lifting in order to come close to protecting users at any significant scale.
Depending on your perspective, Twitter is either improving, or it's still not doing enough in this regard. The immediate response to these figures, as you can imagine, has seen many users calling for Twitter to 'ban the Nazis' - a request that's now as common as 'enable tweet editing' in the list of user demands. Just this week, new calls were made to ban US President Donald Trump from the platform for sharing content that, at least on the surface, would appear to violate hate speech laws.
Twitter has repeatedly been forced to defend its position on Trump, and other prominent users, with regard to the content of their tweets and the role they play in public discourse - even when they break the platform's rules. Indeed, Twitter is now considering a new way to label tweets like this, which technically break its rules, but are not removed because they serve a broader public purpose.
That's a difficult stance to take. Having a separate set of rules for prominent figures blurs the lines of what's acceptable on the platform, which opens the door to confusion, and frustration, when other users are punished for the same behavior.
As such, while it's good to see Twitter doing more to enforce its rules, and adding new features to help users better control what they see on the platform, the company still seems unclear, internally, on what its overall mission is, and on how its rules should align with that mission.
Another case in point - Twitter's profile verification process, currently paused due to internal confusion over its approval criteria, has actually approved more than 10,000 users in recent months, despite being closed to the public.
The latest stats are good for Twitter, and it's angling this as a good news story - which, in terms of helping rid the platform of abusive content, it definitely is. But the company still has a range of issues to deal with, particularly with regard to its internal processes and how they're applied.