Instagram is implementing new measures that will proactively limit the reach of feed posts and stories which ‘likely’ violate its rules around hate speech, bullying and the incitement of violence, as part of its expanding efforts to reduce harm and user risk in the app.
As explained by Instagram:
“Previously, we’ve focused on showing posts lower on Feed and Stories if they contain misinformation as identified by independent fact-checkers, or if they are shared from accounts that have repeatedly shared misinformation in the past. Today, we’re announcing some changes to take this effort even further. If our systems detect that a post may contain bullying, hate speech or may incite violence, we’ll show it lower on Feeds and Stories of that person’s followers.”
So how will Instagram determine whether non-reported posts might contain these elements?
“To understand if something may break our rules, we'll look at things like if a caption is similar to a caption that previously broke our rules.”
Instagram further notes that if its systems predict that an individual user is likely to report a post, based on their past history of reporting content, it will also show that post lower in their personal feed.
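The mechanics described above — flagging a caption that resembles one which previously broke the rules, then demoting the post further for viewers who frequently report content — might be sketched roughly as follows. Everything here is hypothetical: the thresholds, the demotion factor, and the use of simple string similarity are assumptions for illustration, since Instagram has not published its actual models.

```python
import difflib

# All values below are assumed for illustration; Instagram's real
# classifiers, thresholds and ranking logic are not public.
KNOWN_VIOLATING_CAPTIONS = ["example caption that previously broke the rules"]
SIMILARITY_THRESHOLD = 0.8  # assumed cutoff for "similar to a past violation"
DEMOTION_FACTOR = 0.5       # assumed ranking penalty per signal

def likely_violates(caption: str) -> bool:
    """Flag a caption if it closely matches one that previously broke the rules."""
    return any(
        difflib.SequenceMatcher(None, caption.lower(), known.lower()).ratio()
        >= SIMILARITY_THRESHOLD
        for known in KNOWN_VIOLATING_CAPTIONS
    )

def ranking_score(base_score: float, caption: str, viewer_report_rate: float) -> float:
    """Demote a post in a given viewer's feed if it looks rule-breaking,
    and demote it again if that viewer has a history of reporting content."""
    score = base_score
    if likely_violates(caption):
        score *= DEMOTION_FACTOR
    if viewer_report_rate > 0.1:  # assumed: this viewer reports content often
        score *= DEMOTION_FACTOR
    return score
```

Note that in this sketch the demotion is per-viewer, which matches Instagram's description: the same post can rank normally for one follower and lower for another who is predicted to report it.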
Which seems pretty foolproof, right? There’ll be no new influx of ‘shadow ban’ reports or similar as a result of IG putting more reliance on machine learning to determine post reach.
Yeah, it could be somewhat problematic, and considering the efforts Instagram has gone to in the past to explain away shadow bans, it seems inevitable that this shift will lead to more accusations of censorship, bias and other criticisms of the platform.
Which is probably not such a bad payoff, if it works. In theory, this could be another key step towards limiting the spread of bullying and hate speech, both of which have no place in any public forum, and no right to amplification and broadcast via social apps. Instagram is also under pressure to improve its efforts in protecting young users from bullying and abuse, after the Facebook Files leak last year suggested that parent company Meta had ignored research which showed that Instagram can have harmful mental health impacts for teens.
Anything that can be done to stop the spread of such content is, at the least, worth an experiment, while Instagram also notes that it has previously avoided implementing automated systems of this type because it wanted to ensure that its technology ‘could be as accurate as possible’ in detection.
Which suggests that it now has the required level of confidence in its processes to ensure good results. So while there will undoubtedly be more reports of mistakes, and more accusations of overreach, often invoking some constitutional amendment (almost always incorrectly), if the system works, and reduces instances of harm and mental anguish caused by bullying and hate speech, it will be entirely worth it.