A connection of mine on Facebook is a prime example of a far-right echo chamber in action. This connection is a fair bit older than me (in the 55+ age bracket), and every day, this person shares posts calling for the deportation of African youths and the banning of Muslim immigration, and criticizes the inaction of the government, which, in their view, has led to the current state of ‘crisis’ in western society.
If you were to read this person’s Facebook page, you would assume that we’re already in a war-like situation on this front - and to this person, we are. And other people support this, commenting, sharing, spewing racist hate, further solidifying their unified, yet skewed, perspective.
It’s concerning to see, and it points to the broader problems with Facebook and its capacity to fuel societal divides, but in my view, it’s not Facebook itself that has caused this. Looking at how this happens, it seemingly comes down to two key elements, which suggests that the blame shouldn’t necessarily be leveled at Facebook, or social media more broadly.
The first key element is the ‘currency of clicks’. Back in the day, newspapers and TV news programs would publish a collection of news stories from the day, and they had no discernible way of knowing which individual stories within that collection were the most popular. This meant that the most accurate, most trusted news outlets generally won out, with publishers driven by what we’ve come to know as journalistic integrity.
But digital distribution changed that.
With the advent of the web – where clicks reign supreme – news organizations can now track exactly how each specific story performs, and not only the story itself, but variations of it - changing the title, the tagline, switching the summary. All of these elements can now be individually tested, and that’s led to a shift in the way publishers approach content.
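In practice, this testing process boils down to comparing click-through rates across headline variants and keeping the winner. As a minimal sketch - using hypothetical figures and function names, not any publisher’s actual tooling - it might look something like this:

```python
# Illustrative sketch (hypothetical data): how a publisher's A/B testing
# tool might pick the headline variant with the best click-through rate.

def click_through_rate(clicks, impressions):
    """Clicks divided by impressions; 0.0 if the variant was never shown."""
    return clicks / impressions if impressions else 0.0

def pick_winning_headline(variants):
    """Return the headline variant with the highest click-through rate."""
    return max(
        variants,
        key=lambda v: click_through_rate(v["clicks"], v["impressions"]),
    )

# Hypothetical results for the same story shown under two headlines.
variants = [
    {"headline": "Crime Gang Identified by Police",
     "clicks": 120, "impressions": 10_000},
    {"headline": "African Crime Gang Identified by Police",
     "clicks": 340, "impressions": 10_000},
]

winner = pick_winning_headline(variants)
print(winner["headline"])
```

The point of the sketch is that nothing in the loop rewards accuracy or restraint - whichever variant draws more clicks wins, so the more provocative phrasing tends to come out on top.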
This has pushed news coverage into a more divisive direction – a headline like ‘Man Shot Dead by Neighbor After Street Dispute’ is not going to get as many clicks as ‘White Man Shot Dead by Black Neighbor After Street Dispute’. ‘Crime Gang Identified by Police’ won’t be as successful as ‘African Crime Gang Identified by Police’, ‘White House Announces Support for Unified Military Administration of U.S. Satellites’ loses out to ‘Trump Announces Space Force’.
And as you can imagine, these latter variations not only draw more clicks, but more discussion as well, which leads to the second most significant factor in the increased divisiveness of online news coverage – algorithms.
Led by Facebook, social platforms have become more and more reliant on algorithms to drive increased engagement. Those algorithms further incentivize divisive behavior from publishers, because they give increased reach to posts which see engagement - which, more often than not in a news sense, are those that spur debate.
As you can imagine, the comments on each of the above examples would be divisive, with people taking sides, and those actions and comments then lead to them seeing more content from the same publisher/s, more posts on the same topics, commented on by the same people. That not only creates the filter bubble effect of only seeing coverage from the outlets and people who reinforce your established viewpoint, but it also further solidifies divides by pushing you further in that direction.
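The core of such a system is an engagement-weighted ranking. As a rough sketch - the weights and field names here are assumptions for illustration, not Facebook’s actual formula - weighting comments and shares above passive likes is enough to float divisive posts to the top:

```python
# Illustrative sketch (hypothetical weights): an engagement-weighted feed
# ranking. Comments and shares count for more than passive likes, so posts
# that spark long arguments rise above quieter, more neutral coverage.

WEIGHTS = {"likes": 1, "comments": 4, "shares": 8}  # assumed values

def engagement_score(post):
    """Weighted sum of the post's engagement counts."""
    return sum(WEIGHTS[metric] * post[metric] for metric in WEIGHTS)

def rank_feed(posts):
    """Order posts by engagement score, highest first."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"title": "Man Shot Dead by Neighbor After Street Dispute",
     "likes": 90, "comments": 12, "shares": 5},
    {"title": "White Man Shot Dead by Black Neighbor After Street Dispute",
     "likes": 60, "comments": 85, "shares": 40},
]

for post in rank_feed(posts):
    print(post["title"])
```

Here the second post wins despite fewer likes, purely because of its comment and share counts - which is exactly the incentive that rewards argument-starting headlines.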
You can see this in The Wall Street Journal’s highly cited ‘Blue Feed, Red Feed’ experiment, which shows how supporters on either side of US politics are further fueled by Facebook’s algorithm, which will show them more and more content that aligns with their view.
So now you have a tinderbox effect for societal division – people on each side are only seeing narrower perspectives, while publishers are indirectly incentivized to come up with argumentative headlines that drive more clicks, and fuel more division.
And further adding to this – new research shows that the vast majority of older social media users don’t understand why certain posts appear in their Facebook News Feeds.
You can see how, then, social media fuels division – but it’s not necessarily the medium itself, nor the platforms themselves, but rather the digital economy, and the implementation of sharing algorithms.
So what’s the answer then? Maybe, if social platforms removed algorithms, social media would be more democratic – users would see all posts in order again, and there would be less incentive for publishers to stoke such division. Maybe. The posts that get shared on social would still be those that spark emotional response, which means users would still share divisive content – but it’s possible that such perspectives would get less traction if the algorithms didn’t boost their reach, further amplifying such views.
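The “no algorithm” alternative is simply a reverse-chronological feed. As a minimal sketch, with hypothetical posts and timestamps, the contrast with engagement-based ranking is easy to see:

```python
# Illustrative sketch: a plain reverse-chronological feed, the "no algorithm"
# alternative. Posts appear newest-first regardless of engagement, so a
# heavily commented divisive post gets no extra boost from its comment count.

from datetime import datetime

def chronological_feed(posts):
    """Order posts newest first, ignoring engagement entirely."""
    return sorted(posts, key=lambda p: p["posted_at"], reverse=True)

# Hypothetical posts: the divisive one has far more comments,
# but was posted earlier.
posts = [
    {"title": "Local Council Approves New Park",
     "posted_at": datetime(2018, 9, 5, 9, 0), "comments": 3},
    {"title": "African Crime Gang Identified by Police",
     "posted_at": datetime(2018, 9, 5, 8, 0), "comments": 240},
]

for post in chronological_feed(posts):
    print(post["title"])
```

Under this ordering, the post with 240 comments no longer jumps the queue - it simply appears where it falls in time, which is the mechanical sense in which removing algorithms would lessen the amplification of divisive content.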
Social media users these days are more aware of the process of unfollowing – maybe we should push for social platforms to remove algorithms altogether to lessen division, leaving it up to users to cull their feeds to ensure they see the posts of more importance to them.
Of course, the platforms are not going to do that. Algorithms increase engagement, and the more active usage social platforms have - even from controversial users - the better off they’ll be.
Twitter CEO Jack Dorsey admitted as much earlier this week, ahead of his appearance before US Congress:
“From a simple business perspective and to serve the public, Twitter is incentivized to keep all voices on the platform.”
More than just keeping them on the platform, social networks are incentivized to drive divisive debates, for the simple reason that they boost active usage.
So now you have a situation where many users are seeing such content, and they don’t understand that it’s being shown to them via an algorithm aligned to their interests. They, therefore, believe that this is the truth, this is the news of the day, and they get shown more and more on the same topic, reinforcing their perspective that the world outside their homes is falling apart due to these issues.
Which, you could argue, is not so bad - these keyboard jockeys can debate amongst themselves all they want, drastically uninformed about the reality of such situations. And that may be true, until it comes time to vote.
Various polls show that older voters, in particular, are leaning further to the right side of politics - the same group that has less understanding of social platform algorithms, while more and more people are also getting more of their news coverage from social networks.
Based on this, and with representatives from Twitter and Facebook going before Congress to provide assurance that they’ve cleaned up their platforms ahead of the upcoming mid-terms, a case could be made that algorithms should be eliminated, and that the incentives behind online publishing need to be re-examined.
The answers on both fronts are likely no easier than broader regulation of social platforms, but the problems, in my view, stem from these key elements, not from the use of social platforms themselves.