Following another spate of abuse aimed at UK soccer players in the wake of the recent Euro 2020 championship final, Instagram has announced a new set of options to help people - specifically high-profile users - manage their interactions within the app, and avoid offensive comments and messages directed their way.
Instagram has rolled out several updates in response to similar incidents, including harsher penalties for those sending abuse via DM, and the capacity for personal accounts to switch off DMs completely from people that they don’t follow.
Now Instagram is adding even more. First off, it's expanding the roll-out of its new 'Limits' option, which it started testing last month. Limits enables users to temporarily restrict unwanted comments and messages from selected groups in the app.

As explained by Instagram:
"We developed this feature because we heard that creators and public figures sometimes experience sudden spikes of comments and DM requests from people they don’t know. In many cases this is an outpouring of support - like if they go viral after winning an Olympic medal. But sometimes it can also mean an influx of unwanted comments or messages. Now, if you’re going through that - or think you may be about to - you can turn on Limits and avoid it."
Through Limits, Instagram will recommend groups of accounts that you may want to restrict, based on detected activity, which then enables users to hide interactions from these profiles unless they manually choose to see them.
Instagram says that most of the negativity aimed at public figures in the app comes from people who don't actually follow them, or who have only recently followed them, and who simply pile on in the moment. Limits aims to combat this, and could be a big help for those in the public eye, particularly amid high-profile incidents. Twitter is also exploring a similar feature, with the capacity to stop other users from @mentioning your profile for a set period of time.
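The core rule Instagram describes - hide interactions from accounts that don't follow you, or that only followed you recently - can be sketched roughly as follows. This is purely an illustrative assumption of how such a check might work; the function names, data shapes, and seven-day threshold are invented for the example, not Instagram's actual implementation.

```python
from datetime import datetime, timedelta

# Assumed cut-off for treating someone as a "recent" follower.
RECENT_FOLLOW_WINDOW = timedelta(days=7)

def should_limit(sender, followers, now):
    """Return True if a comment/DM from `sender` should be hidden.

    `followers` maps usernames to the time they started following;
    anyone absent from the map is treated as a non-follower.
    """
    followed_at = followers.get(sender)
    if followed_at is None:
        return True  # not a follower at all: limit
    return now - followed_at < RECENT_FOLLOW_WINDOW  # recent follower: limit

followers = {
    "long_time_fan": datetime(2020, 1, 1),
    "pile_on_account": datetime(2021, 8, 10),
}
now = datetime(2021, 8, 11)

print(should_limit("stranger", followers, now))         # True
print(should_limit("pile_on_account", followers, now))  # True
print(should_limit("long_time_fan", followers, now))    # False
```

Limited interactions aren't deleted in Instagram's feature - they're held out of view until the user chooses to see them, which a real system would model with a separate pending queue rather than a boolean.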
Limits will be made available to everyone on Instagram, globally, from today.
"Go to your privacy settings to turn it on, or off, whenever you want. We’re also exploring ways to detect when you may be experiencing a spike in comments and DMs, so we can prompt you to turn on Limits."
Instagram's also looking to improve its warnings on comments that may be deemed offensive.
Right now, Instagram displays a warning message when a user tries to post a potentially offensive comment, based on automated detection of certain terms and phrases within the comment field. If that same user tries to post offensive comments multiple times, Instagram will then display an even stronger warning, reiterating the potential penalties for on-platform abuse.
In order to improve the effectiveness of these alerts, Instagram says that it's now going to display its stronger message the first time around, which could further dissuade people from leaving such remarks.
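The change amounts to a small piece of escalation logic: previously a milder warning was shown first, with the stronger one reserved for repeat attempts; now the stronger warning appears immediately. A hypothetical sketch of that decision, with invented function and message names that are not Instagram's:

```python
# Placeholder warning texts for illustration only.
MILD = "This comment may be offensive. Are you sure you want to post it?"
STRONG = ("This comment may break our guidelines. Repeated violations "
          "could lead to your account being deleted.")

def warning_for(prior_flagged_attempts, stronger_first=True):
    """Choose which warning to show for a newly flagged comment."""
    if stronger_first:
        return STRONG  # new behavior: lead with the strong warning
    # old behavior: escalate only after repeated flagged attempts
    return STRONG if prior_flagged_attempts > 0 else MILD
```

Under the old behavior a first-time offender would see `MILD`; under the new behavior everyone sees `STRONG` up front.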

Notably, these harsher Instagram warnings specifically state that the offender's account could be deleted as a result, which Instagram has found helps get users to reconsider their approach.
"For example, in the last week we showed warnings about a million times per day on average to people when they were making comments that were potentially offensive. Of these, about 50% of the time the comment was edited or deleted by the user based on these warnings."
Even the slightest bit of friction within the posting process can prompt people to re-think their comments, and the specific note of account deletion can act as a strong deterrent, which may help to further reduce instances of on-platform abuse.
And finally, Instagram is also rolling out its 'Hidden Words' feature for DM requests to all users this month, after testing it over the past few months.

The new option enables users to automatically filter DM requests that include potentially offensive terms, phrases and emojis, re-routing them into a 'Hidden' folder, which users can then choose to view, or not.
Instagram says that it's also expanded its list of potentially offensive terms and emojis that will be filtered via this option, which will continue to be reviewed and updated over time.
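The routing Instagram describes - flagged DM requests diverted into a Hidden folder rather than deleted - can be sketched as a simple keyword filter. This is a minimal illustration assuming a plain substring match against a configurable term list; Instagram's actual matching logic and its term list are not public.

```python
# Placeholder term list; Instagram maintains and updates its own.
HIDDEN_TERMS = {"offensiveword", "🤬"}

def route_dm_request(message, inbox, hidden_folder):
    """Send flagged DM requests to the hidden folder, others to the inbox."""
    lowered = message.lower()
    if any(term in lowered for term in HIDDEN_TERMS):
        hidden_folder.append(message)  # held out of view, not deleted
    else:
        inbox.append(message)

inbox, hidden = [], []
route_dm_request("Great game!", inbox, hidden)
route_dm_request("you offensiveword", inbox, hidden)
print(len(inbox), len(hidden))  # 1 1
```

The key design point mirrored here is that flagged messages are preserved: the recipient keeps the option to open the hidden folder, so false positives aren't silently lost.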
That could further help to shield users - particularly high-profile ones - from having to see such messages, and while the option to view them will always be there, removing them from immediate view could have a big impact.
The abuse that UK soccer stars have seen on the platform is abhorrent, and acts as a sad reminder of the state of the world, and the fact that we still have a long way to go in addressing inherent bias and facilitating true equality. It's also a reminder of the negative impacts of social media connection. Now, everybody, no matter how offensive their personal beliefs and stances might be, has the opportunity to amplify their thoughts to thousands, if not millions of people, via these platforms.
The promise of social media is that it gives everybody a voice, a means to be heard - but with that, we have to also accept that some opinions, some perspectives, don't deserve that opportunity. Further debate can be had around who should decide that, but clearly, these cases on Instagram highlight the importance of having at least some level of control over speech amplification, and the option for platforms to revoke the freedom to be heard in some cases.
That's always been a difficult balance, with the platforms themselves preferring to let their users decide the parameters of acceptability. But clearly, society, in general, has a way to go in maintaining civility and acceptance for all people.
That's a people problem, not a platform one - but social platforms are under no obligation to provide unfettered access to their audiences if they choose not to.