If there was any doubt over the influence of social media, the results of the 2016 US Presidential Election proved the medium's importance. What was once seen as a fad, a plaything for kids, is now a legitimate news source, and an increasingly relevant platform for news and media insights. Indeed, Pew Research reported earlier this year that some 62% of Americans now get news updates from social media sources - yet even with that knowledge, no one seemed to take those figures seriously until Donald Trump became the President-elect.
Since then we've seen a flood of coverage about the dangers of social media as a news source, with Facebook in particular coming under scrutiny for their outsized influence over public sentiment.
It's quite a contrast from reports like this one, published by The Wall Street Journal in 2014, which states that:
"Regardless of the hype surrounding social media, consumers are still most affected by their offline experiences."
The sheer volume of coverage about Facebook's influence shows that this is now largely accepted: social platforms are indeed highly influential, and becoming more so. Denying this flies in the face of the statistical evidence - you personally may not use Facebook as a news source, and you may not believe you're impacted by the echo-chamber effect, but the available data suggests a great many people are, and that's something that cannot be ignored.
But along with the increased focus on social media's influence, the platforms themselves are now being forced to re-assess their processes and examine the role they've played in such events - specifically, whether the way they supply news content could lead to an imbalance in perspective.
That shift, while in its early stages, could be significant - from a user perspective definitely, but from a marketing standpoint also.
Here are a few examples of the initial changes and shifts we've seen from some of the major social platforms which could indicate significant areas of change to watch.
As noted, Facebook, with the most users of any social platform, has logically come under the most scrutiny in the post-election wash-up.
We've written about this before - the problem with Facebook is that what works for increasing user engagement doesn't necessarily work for ensuring balanced coverage. Facebook wants users to stick around, to spend as much time on-platform as possible, and a key way of doing that is by showing them more of what they want to see.
The potential complications of this are quite obvious - if you're a Trump supporter and you're seeing five articles a day in your News Feed which talk about how great Hillary Clinton is, you're less likely to stick around and read more posts. But if you're shown more articles that support your pre-existing viewpoint, you'll read more, you'll 'Like' more, and you'll come back to Facebook more often to see more of that content.
Over time, however, this obviously presents a problem - when you're eliminating all dissenting views from your sphere of consciousness, you're not making informed decisions. More than that, you're not able to make an informed choice as you're simply not being presented with the required data. This, of course, is not to say either side is right or wrong, but if you're only seeing one side of the debate, you're really not being given much choice.
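To make the mechanics concrete: Facebook's actual ranking code is not public, but the feedback loop described above can be sketched in a few lines. Purely as an illustration (the 'lean' labels and the scoring are invented for this sketch, and bear no relation to Facebook's real system), an engagement-first ranker that favors whatever a user has already Liked might look like this:

```python
def rank_feed(posts, like_history):
    """Toy engagement-first ranking (illustrative only, NOT Facebook's
    algorithm): score each post by how often the user has already Liked
    content carrying the same political 'lean' label."""
    affinity = {}
    for lean in like_history:
        affinity[lean] = affinity.get(lean, 0) + 1
    # Posts matching the user's established preferences float to the top -
    # which is exactly how an echo chamber forms over repeated sessions.
    return sorted(posts, key=lambda p: affinity.get(p["lean"], 0), reverse=True)
```

Run repeatedly, with each session's Likes feeding back into `like_history`, the dissenting viewpoint sinks a little further every time - the echo chamber emerges from optimizing for engagement, not from any deliberate editorial choice.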
Initially, Facebook CEO Mark Zuckerberg played down the impact of fake news specifically (though not of the echo chamber effect), but since then Facebook has outlined various ways in which they're looking to tackle the problem. Make no mistake, Facebook is taking the issue seriously - the negative blowback has already seen Facebook's share price take a hit, while philosophically, the suggestion that Facebook could be causing more societal division goes against Zuckerberg's wider mission.
So what might we see happen at Facebook as a result of these efforts?
Last week we saw reports that Facebook's working on a Snapchat Discover-like system which would give publishers their own dedicated space within the News Feed. Reportedly called 'Collections', the new option would provide a new way to consume news, which would likely be algorithm-free, with contributing publishers able to promote their content as they see fit within that space.
If it works, that could see Facebook reduce the reach of news content within the main News Feed, pushing more users to refer to Collections for news content instead. People would still be able to Like and Share such posts, but Facebook could easily reduce the reach of those actions on news content as well. It's also worth noting that a recent post on The Monday Note pointed out that news content represents only around 10% of the content shown to the average Facebook user in their News Feed - Facebook could remove this without any significant side effects in terms of lost engagement.
If Facebook were to reduce the reach of news content, that could have significant impacts for marketers, as any change to the algorithm generally does. A key element would be how they categorize news providers - Facebook could, for example, target all Pages marked in the 'News/Media' category, and those would be the ones directly impacted by such a change. Facebook would likely never reveal the specifics - if they noted that all Pages in a certain category would be penalized, everyone would simply switch to another definition. The true impact of any such change would rest on the 'how' - on what Facebook decides is the best way to reduce the impact of news bias or fake content.
On fake content specifically, The Verge has reported that Facebook's filed a patent for a system that can automatically detect content relating to pornography, hate speech or bullying. As Verge writer Casey Newton points out, such a system could also, potentially, be tuned to detect fake news as well.
Newton highlights several variables Facebook could use in this process, including profile verification (i.e. if reports of an item being fake are coming from reputable users), report volume and profile age.
As Newton notes, Zuckerberg himself has highlighted the need for "better technical systems to detect what people will flag as false before they do it themselves." This option could be the way forward.
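Neither Facebook's patent nor Zuckerberg's comments describe an actual implementation, so any detail here is speculative. Purely as an illustration of how the variables Newton mentions might combine, a toy scoring function could weight each 'this is fake' report by the reporter's verification status and account age, flagging a post for review once the weighted total crosses a threshold (all names, weights and thresholds below are invented for the sketch):

```python
from dataclasses import dataclass

@dataclass
class Report:
    """One user's 'this is fake' report on a post (illustrative only)."""
    reporter_verified: bool   # verified profiles count for more
    reporter_age_days: int    # older accounts are weighted more heavily

def fake_news_score(reports: list[Report]) -> float:
    """Toy weighted-report score - the weights are invented, not Facebook's."""
    score = 0.0
    for r in reports:
        weight = 1.0
        if r.reporter_verified:
            weight += 1.0                                    # trust verified reporters more
        weight += min(r.reporter_age_days / 365, 3.0) * 0.5  # capped account-age bonus
        score += weight
    return score

def should_flag_for_review(reports: list[Report], threshold: float = 25.0) -> bool:
    """Report volume matters too: many low-weight reports can still trip the flag."""
    return fake_news_score(reports) >= threshold
```

The design point is that no single signal decides the outcome - a flood of reports from brand-new accounts scores far lower than a handful from established, verified ones, which is presumably the kind of balance any real system would need to strike.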
Also of note on this, Facebook's working with Microsoft, Twitter and YouTube in an effort to help curb the spread of terrorist content online, combining their resources to come up with better detection tools and systems.
The impact on users will, ideally, be positive, but marketers will need to keep a close eye on Facebook's movements to understand how any such changes could affect them.
Twitter, in CEO Jack Dorsey's own words, has a 'complicated' relationship with President-elect Donald Trump and the role the platform has played, and is playing, in the spread of information.
As Dorsey noted in a recent interview with Recode:
"...having the president-elect on our service - using it as a direct line of communication - allows everyone to see what's on his mind in the moment. I think that's interesting. I think it's fascinating. I haven't seen that before. We're definitely entering a new world where everything is on the surface and we can all see it in real time and we can have conversations about it. Where does that go? I'm not really sure. But it's definitely been fascinating to learn from."
Dorsey's dancing around the fact that many have called for Trump to be banned for violating the platform's Terms of Service - something Twitter says it would do if it had to, though that puts them in a very awkward position. How do you ban the President from your platform?
In any case, Twitter, like Facebook, has also been forced to re-assess their processes, which has thus far led to the banning of several far-right commentators in the wake of the election, as well as a new crackdown on bots, which reportedly played a significant role in spreading Donald Trump's key messages during the campaign (and for balance, there were pro-Clinton bots too).
For Twitter, policing the spread of news is much more difficult, as the network is defined by its real-time nature. It's fortunate, then, that they haven't come under the same scrutiny Facebook has in this regard - though that's partly because Twitter's algorithm is far less influential than Facebook's.
In terms of how these changes could impact users and marketers, I wouldn't expect any significant shifts, though the banning of users could have flow-on impacts, one way or another.
For example, there are already reports that new social networks are seeing big boosts in traffic as a result of banned Twitter users switching across, which could, potentially, reduce Twitter's audience. The same would be true if Twitter was forced to ban Donald Trump - Trump would still likely want an outlet for his thoughts and could take his large audience to another platform, which might be a negative for Twitter.
On the other hand, banning controversial users could also galvanize the Twitter community and work to change the perceptions of the network being riddled with harassment and abuse. If Twitter took a stand, maybe more people would see that as a positive and subsequently tweet more often - it could even bring more users back to the platform.
It's a risk for Twitter either way, and you can see the balancing act they have to maintain, particularly amidst ongoing market pressure.
The removal of bots, too, if Twitter pushes hard on this, could see many users lose part of their audience - not a major problem if you've built a genuine following, but it could have flow-on impacts depending on the result.
While Reddit is still less of a concern for marketers, the platform does play a significant role in the spread of information online, with highly active and dedicated Subreddits on almost any topic you can think of.
Late last month, Reddit CEO Steve Huffman admitted that he had been modifying the posts of some users on the most visible Donald Trump-supporting community after they had repeatedly slung verbal abuse in his direction.
In a post, Huffman apologized for the incident, saying that:
"It's been a long week here trying to unwind the /r/cuckold stuff. As much as we try to maintain a good relationship with you all, it does get old getting called a loser by my wife's boyfriend. As the mod of /r/cuckold, I shouldn't play such games, and it's all fixed now. Our community bull is pretty pissed at me, so I most assuredly won't do this again"
Shortly after this, Reddit announced that they would be implementing new punishments, including warnings, timeouts and permanent bans, for the platform's most abusive trolls. And this week, Reddit has announced even more measures on this front, overhauling their scoring algorithm in order to crack down on bots and vote brigades which work in collusion to push conversations and topics higher than they should appear.
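Reddit hasn't published the details of its overhauled scoring algorithm, so the specifics are guesswork. As a purely illustrative sketch of the general idea (all thresholds and weights below are invented, not Reddit's), an anti-brigading adjustment might discount votes that arrive in an early, coordinated-looking burst from very young accounts:

```python
from dataclasses import dataclass

@dataclass
class Vote:
    """One upvote on a post (illustrative only)."""
    account_age_days: int
    seconds_since_post: float

def weighted_score(votes: list[Vote], burst_window: float = 60.0) -> float:
    """Toy anti-brigading score: votes landing within the first minute
    from accounts under 30 days old count for almost nothing.
    Thresholds are invented for the sketch, not Reddit's real values."""
    score = 0.0
    for v in votes:
        weight = 1.0
        if v.seconds_since_post < burst_window and v.account_age_days < 30:
            weight = 0.1  # likely coordinated - heavily discounted
        score += weight
    return score
```

A real system would presumably use far richer signals (vote-history correlation between accounts, for instance), but the principle is the same: organic votes keep their full weight, while suspicious clusters are quietly devalued rather than removed outright.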
As noted, Reddit doesn't have a huge impact from a marketing perspective, but the flow-on implications of such changes will still be felt. Reddit is often the source of breaking news and trends, and its free-speech credo is a big part of the platform's success. Changing the rules could change the process, which could push more users to other platforms, either giving Facebook and Twitter more challenges to deal with or helping fuel the growth of a newer player.
It may not matter too much to you now, but it'll be interesting to see how any such changes do have an impact, and how the online community responds.
It's interesting to consider what might happen to social media, and to each platform, in the wake of the election. But one element is completely clear: social networks do play a significant role in how we access and consume information. The realization of this influence has led to an important discussion, one that's worth tracking in the coming months to maintain an understanding of how the communications landscape is evolving.