Reddit has launched an interesting experiment that provides ratings and context on subreddit contributors, giving moderators more insight into how each user has participated in the past.
Reddit has launched a new test of what it’s calling ‘Mod Notes’, which enables moderators to create informational notes and tips that are then attached to a user’s profile for that community.
The Mod Notes process also includes a full log of notes and mod actions applied to a user within a specific subreddit.
As explained by Reddit:
“The profile hovercard will be your home base for accessing Mod Notes and any moderator with Manage User permissions will be able to utilize it.”
So it’s not intended to be a public-facing info source, as such, but it’s designed to give moderators more context on each individual user, and their actions within the group.
Moderators are able to mark each subreddit member with one of five labels:
- Helpful
- Good Contributor
- Spam Watch
- Spam Warning
- Abuse Warning
That provides more ways not only to weed out potentially problematic contributors, but also to recognize top members for their ongoing contributions.
It’s an interesting approach that could give Reddit, and potentially other platforms that adopt something similar, another way to share more context about each user.
Twitter is actually trying something similar with its Birdwatch experiment, which enables users to leave notes on questionable tweets to provide more context. Earlier this month, Twitter took that project to the next stage, with some users now able to view Birdwatch notes from other users on certain tweets.
Reddit’s Mod Notes are, again, not public, so it’s not exactly the same, but they take a similar approach in enabling trusted users to add context and oversight to contributors, in order to weed out questionable posts and users.
It’d actually be interesting to see Reddit make these notes available to regular users, say after a certain threshold of reports. If moderators flag a contributor with several spam or abuse warnings, that could be publicly highlighted, or the user removed, while the ‘Helpful’ and ‘Good Contributor’ tags could add more context to these users’ comments, and lend more weight to their insights.
The fact that these are allocated by the mods themselves could serve as both an endorsement and incentive, giving users another way to get credit for their interactions.
And again, that could work on other platforms too. Aside from Twitter, Facebook is also testing up- and downvotes on group comments, which could serve as a means to flag good and bad contributors, and add an element of crowdsourced monitoring to its systems.
There are abuse risks too, which is why platforms need to tread carefully (note: Reddit says the feature has been in development for months). But if you can allocate a group of trusted overseers, like Reddit mods, or group admins, who can then approve such tags, it could be a valuable tool in enhancing engagement.
Reddit’s Mod Notes are now available to moderators in the app.