Twitter's trying another way to combat misinformation, with a new, manual reporting option that will enable users to flag tweets that contain potentially misleading claims.
"We're testing a feature for you to report Tweets that seem misleading - as you see them. Starting today, some people in the US, South Korea, and Australia will find the option to flag a Tweet as 'It's misleading' after clicking on Report Tweet." — Twitter Safety (@TwitterSafety) August 17, 2021
The new option, available to some users, adds an 'It's misleading' choice to your tweet reporting tools, providing another means to flag concerning claims.
Tap on that and you can then flag the tweet in question under 'Politics', 'Health' or 'Something else', giving users more scope to address misinformation on the platform.
That could become a new avenue for misuse, or for reporting tweets simply because they run counter to your opinion. But Twitter notes that reporting tweets via this new process won't necessarily see its moderators take action on every report.
"We may not take action on and cannot respond to each report in the experiment, but your input will help us identify trends so that we can improve the speed and scale of our broader misinformation work."
As you can see in the above reporting example, Twitter also provides this explainer in the process:
"Although we may not take action on this report or respond to you directly, we will use this report to develop new ways to reduce misleading info. This could include limiting its visibility, providing additional context, and creating new policies. Thank you for helping to make Twitter better for everyone."
So the idea is not to stamp out specific tweets based on each user's report. But if 100, or 1,000 people report the same tweet for 'political misinformation', for example, that'll likely get Twitter's attention, while it might also help Twitter identify what users don't want to see, and want the platform to take action against, to help improve the reporting process.
Of course, that could also lend itself to brigading, or coordinated mass-reporting of specified tweets to get them removed, even if they don't break the rules. But again, Twitter isn't saying it will remove, or even take action on, tweets based on these flags. It may just be looking for trends, like brigading, to determine what types of tweets trigger such responses, and how it might then address them in future.
This is similar, in some ways, to its 'Birdwatch' initiative, which enables users to add notes to tweets that include questionable claims. Those notes will then, eventually, be viewable on the tweets themselves, for users seeking further context.
"Today we're introducing @Birdwatch, a community-driven approach to addressing misleading information. And we want your help. (1/3)" pic.twitter.com/aYJILZ7iKB — Twitter Support (@TwitterSupport) January 25, 2021
Birdwatch notes will also, ideally, help Twitter identify highly questionable tweets, in a process similar to this new reporting option, with a high volume of notes helping to flag the most concerning issues, and enabling further action based on audience activity.
So it's like crowdsourced moderation, to a degree, while it also addresses a key concern that many Twitter users have raised in the reporting process, by adding a specific 'misleading' option into the reporting flow.
Will it have a big impact? Well, not initially at least. As Twitter notes, it's starting out small, with selected users in the US, South Korea, and Australia getting access to the option. From there, Twitter will learn what works, and what doesn't, and determine if this helps to highlight elements of concern, and improve its response to such.
Anything that can help slow the spread of false claims is, at the least, worth an experiment, and while Twitter isn't looking to take direct enforcement action based on these flags, they could provide another valuable opportunity to further improve its processes.