After first launching a limited test of warning prompts on tweet replies containing potentially offensive remarks in May last year, Twitter is now re-launching the alerts with a new format and a clearer explanation of why each reply has been flagged.
As you can see here, the new alerts explain that Twitter is 'asking people to review replies with potentially harmful or offensive language'. You then have three large buttons below the prompt - you can either tweet it anyway, edit your response, or bin it instead.
The format has changed significantly since the original launch, which was a far more basic prompt.
Twitter updated the format in August, before shelving the test during the peak of the US election campaign. But now, with the chaos of the period behind us, Twitter's trying out the prompts once again, with users on iOS now set to get the alerts if Twitter detects any potentially offensive terms in their replies.
The system is similar to Instagram's automated alerts on potentially offensive comments, which it released in July 2019.
Those prompts have helped reduce one of the key causes of friction in online interaction - which is not intentional offense, but misinterpretation.
Last year, Facebook published a research report which found that misinterpretation was a key driver of angst and argument among its users.
Given this, by prompting users to simply re-assess their language, many online disputes could likely be avoided entirely - and that's likely even more true on Twitter, where condensing your thoughts into 280 characters can sometimes lead to unintended messaging.
Twitter has seen success with other prompts that cause users to take a moment to re-assess what they're tweeting.
Twitter's similar alerts on articles that users attempt to retweet without actually opening the article link have led to people opening articles 40% more often after seeing the prompt, and have reduced blind retweets significantly. Twitter has also tried other variations on the same idea, including alerts on content disputed by fact-checkers.
Simple, additional steps like this can give users a moment of pause to re-think their actions, and that may be all that's needed to reduce unnecessary aggression.
It's one of various measures that Twitter's trying as it continues to focus on improving platform health and ensuring more positive interactions within the app. Those efforts have driven significant improvements over the last few years. Twitter still has a way to go on this front, but it's good to see the platform continuing to test out new ideas.