I've been debating whether to weigh in on the Facebook fake news issue and the role the platform played in the 2016 US Presidential Election. For one, it feels like most people are largely fatigued by Trump v Clinton and need a bit of a break to take it all in and assess where we're at. Facebook's involvement has also been so heavily covered elsewhere that I wasn't sure it needed reiterating here.
But readers have asked about it and why we've not covered it, so here are three points that I think are relevant to the debate and the lasting implications of the 2016 US Presidential Election result.
1. Facebook Absolutely Influenced the Vote
To deny this is illogical - Facebook now plays an integral role in the dissemination and distribution of news, and as such, it also plays a role in how people are informed, thereby influencing what they think. Add to this the fact that Facebook's News Feed is fueled by an algorithm which serves users more content similar to that which they're interacting with, and the link is fairly obvious.
Really, it's not even up for debate - as has been quoted elsewhere, research shows that 67% of Americans use Facebook, and two-thirds of them get news there, equating to around 44% of the entire population who get at least some of their news from the site.
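For what it's worth, the arithmetic behind that 44% figure checks out - here's a quick back-of-the-envelope confirmation, using the Pew figures quoted above (with "two-thirds" approximated as 0.66):

```python
# Pew Research figures (as quoted above):
facebook_users = 0.67   # share of Americans who use Facebook
get_news_there = 0.66   # roughly two-thirds of those users get news on it

# Share of the entire US population getting at least some news from Facebook
share_of_population = facebook_users * get_news_there
print(f"{share_of_population:.0%}")  # → 44%
```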
That number's likely even higher now - this research was conducted by Pew Research back in January, and Facebook's North American user base has grown by seven million since then. It's fair to assume that a significant number of those new users are also now getting news updates from the site.
Contrast that against the declines in overall TV consumption and newspaper readership and the impact is obvious - one way or another, Facebook's rising prominence as a media source means the platform wields significant influence over news delivery, which, in turn, shapes audience behavior. But the case has actually been proven even more definitively than that.
Back in 2010, around 340,000 extra voters turned out to take part in the US Congressional elections because of a single election-day Facebook message. This is based on research co-authored by Facebook, in which they proclaimed that it was:
"...the first [study] to demonstrate that the online world can affect a significant real-world behaviour on a large scale."
The process they used to prove this was relatively simple:
"About 611,000 users (1%) received an 'informational message' at the top of their news feeds, which encouraged them to vote, provided a link to information on local polling places and included a clickable 'I voted' button and a counter of Facebook users who had clicked it. About 60 million users (98%) received a 'social message', which included the same elements but also showed the profile pictures of up to six randomly selected Facebook friends who had clicked the 'I voted' button. The remaining 1% of users were assigned to a control group that received no message."
The results showed that users who received the 'informational' message voted at the same rate as those who saw no message at all, while those who saw the 'social' message - with images of their friends included - were 2% more likely to click the 'I voted' button and 0.4% more likely to head to the polls. Based on this, researchers estimated that the social message directly increased voter turnout by 60,000 votes, while a further 280,000 people were "indirectly nudged to the polls" by seeing messages in their News Feeds - "notifications that their friends had voted".
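To reconcile the headline numbers: the study's direct and indirect estimates sum to the roughly 340,000 extra voters mentioned earlier.

```python
# Headline estimates from the 2010 Facebook voting study (as quoted above)
direct_from_social_message = 60_000    # directly nudged by the 'social' message
indirect_via_friends_feeds = 280_000   # nudged by seeing friends' "I voted" notes

total_extra_voters = direct_from_social_message + indirect_via_friends_feeds
print(total_extra_voters)  # → 340000
```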
This is direct influence on an election via Facebook content. It's not even hypothetical - this research, again co-authored by Facebook, proves it.
Want to find out whether Facebook thinks you're more liberal or conservative based on your on-platform actions? You can do that too - go to www.facebook.com/ads/preferences, click on the "Lifestyle and culture" tab under the "Interests" section, then look for a box in that section titled "US politics".
Again, this is based on your on-platform actions - Facebook is well aware of users' political leanings and has been using them as a targeting option, so this factor definitely has some influence on the content you see. These elements all point to the fact that Facebook absolutely influences political opinion and how people subsequently vote - and even on a basic level, from your own usage of the platform, you know this.
For example, let's say you're a Donald Trump supporter going through your News Feed, and a friend of yours keeps posting links to articles about how bad Trump is and how great Hillary is. He gets annoying. You unfollow him. Granted, this is within the control of the user, not the platform, but at best, Facebook facilitates the expansion of political echo chambers by making it easier than ever to filter out anything that doesn't align with your perspective. Meanwhile, the News Feed algorithm uses those actions as signals to serve you more personally relevant content - and we know that Facebook can infer political leaning from such actions.
The News Feed shows you more of what you engage with, which is likely what you agree with, and less of what you don't. By design, it's reinforcement theory in practice - but the question then is whether that influence was enough to change the vote.
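As a rough sketch of that reinforcement loop (a toy model of my own, not Facebook's actual ranking system), consider a feed that boosts whatever a user engages with:

```python
import random

# Toy model: a feed serves 'left' or 'right' stories in proportion to
# learned interest weights, and each engagement boosts the served side.
random.seed(1)
weights = {"left": 1.0, "right": 1.0}   # start with a perfectly balanced feed
user_prefers = "right"                   # the user's pre-existing leaning

for _ in range(200):
    # Serve a story with probability proportional to learned interest
    story = random.choices(list(weights), weights=list(weights.values()))[0]
    # The user mostly engages with agreeable content; each engagement
    # becomes a ranking signal that boosts similar content next time
    if story == user_prefers and random.random() < 0.8:
        weights[story] += 0.1

share = weights[user_prefers] / sum(weights.values())
print(f"feed is now {share:.0%} aligned with the user's leaning")
```

Even from a balanced starting point, the engagement signal alone is enough to tilt the mix toward the user's existing leaning - the echo-chamber dynamic in miniature.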
2. Eliminating Fake News Probably Won't Provide the Solution
Facebook CEO Mark Zuckerberg has dismissed the notion that fake or misleading content could have influenced the election result, noting that it only amounts to around 1% of content on the network. That seems like an underestimation given the evidence.
For example, BuzzFeed recently published the results of their investigation, which found that in the final three months of the US presidential campaign, "the top-performing fake election news stories on Facebook generated more engagement than the top stories from major news outlets such as the New York Times, Washington Post, Huffington Post, NBC News, and others".
Now, that's not all of the content on Facebook - it's only a subset as sampled by BuzzFeed - but there's still a clear trend from that May-July period. So what happened in that space of time that could have had such an impact?
This seems to correlate with previous studies - as noted in the 2010 election experiment, people were influenced to act when shown the 'social' message, which included pictures of their friends and enhanced the peer pressure of the notification. The jump in fake news shares is based on the same principle - Facebook's algorithm change put increased emphasis on what your friends were sharing ahead of Page posts, adding to the wider reach, and the implied peer pressure, of those viewpoints. If your friends were engaging with and sharing more of this content, you were bound to see more of it. Combine that with increased discussion around the election and the theory makes sense: the spread of fake news increases, more people are influenced by those reports, the outcome is delivered.
But even if those fake reports were reaching a wider audience, were they actually able to influence the way people voted?
This element is much more difficult to prove. It's clear now that fake news ran rife throughout the election period - even Facebook insiders have (reportedly) admitted this. Stories like the Pope endorsing Donald Trump, Bill Clinton being charged with child rape, the murder of someone who was set to testify against Hillary Clinton - all of these falsehoods gained traction online and likely solidified people's support one way or another. But these tales only gained traction at all because of the bigger concern with Facebook's News Feed: it actively works to deliver more content which supports your existing beliefs, while banishing alternate perspectives from view. From that perspective, it seems more likely that fake reports merely reinforced people's pre-existing opinions rather than shaping them - but that discounts the huge number of undecided voters who, given the final margin, could have swung the result.
The research quoted above suggests that what your friends share is definitely able to inspire real-world action - and with more fake stories being shared, the influence seems clear.
But even if that was the case, it's not fake news that's the core issue, it's the echo chamber - people being shown more of what they agree with, and subsequently engaging with that content, has led to that content having more reach and influence. And because the algorithm learns from on-platform actions, users are also getting less and less exposure to the other side's perspective - their viewpoint is only underlined further and further with each post.
For example, if you saw a post in your feed that said that Hillary Clinton knowingly funded terrorism and you clicked on it, Facebook would take that as an indicator that you're interested in that topic, and you might suddenly see more stories along the same lines, while opposing views would be lessened, based on your now noted political preference.
That polarization was highlighted in a study conducted by The Guardian in which they asked ten voters - five conservative and five liberal - to switch sides and view Facebook feeds from the other perspective in the final month of the campaign. The Guardian created two Facebook accounts, one that followed pro-liberal news sites and one pro-conservative, then handed them to the participants of the opposing view.
The participants were amazed at the coverage they were seeing - one described the experience as being "like reading a book by a fool", while another said it was "like being locked into a room full of those suffering from paranoid delusions".
The experiment showed that the content each side sees is very different - but it wasn't necessarily fake news or lies that were influencing their choices. In fact, none of the participants changed their vote as a result, and none of them made specific note of lies being debunked by the other side. The participants had pre-existing beliefs which had been fueled by their own experiences and connections - what they were seeing on Facebook only helped to reinforce them.
One participant noted that:
"I did unfollow a lot of friends because I didn't want to feel enticed to correct what they were saying and get in a fight."
This seems to be a larger concern than fake news - that people are so easily able to switch off dissenting views. Because we've eliminated the counter-arguments from our own feeds, we presume that those on the other side of the debate are seeing the same evidence we're seeing - which is why we're so incredulous when so many end up supporting the opposing view.
In this sense, the role that fake news played in the process is only part of the bigger issue. Certainly, the increased spread of fake stories would have helped justify people's existing leanings, and that, you can assume, had some influence on undecided voters. But on a broad scale, if you were to remove fake content from the equation, it may not have actually changed the result.
While eliminating fake news is a positive step in any case, the echo chamber is the more divisive element, and the real concern we should be discussing in regards to Facebook.
3. What Comes Next?
And here's the bigger question. Now that Trump is the US President and the nation seems more divided than it has in years, what happens next? What influence will social media echo chambers have in reinforcing the existing divisions and pushing people to take sides?
More than ever, people need understanding - they need to understand why others would vote for one side or the other, and what led them to that conclusion. The majority were clearly shocked at the result, and people are genuinely struggling to see why voters of the opposing view gave their support to the other candidate.
Right now, that's the perspective we need in order to see the problems and issues that are present and move forward, but the concern here is that with that echo chamber effect in place, that may not be possible. If people's existing views are only being reinforced by the media they consume, then we're likely to see those divisions become bigger, more significant concerns over time. And given the current state, that's a huge issue, and something that needs to be addressed.
In this respect, analyzing Facebook's possible role in the 2016 US Presidential Election should be less about how the results played out and more about how people are getting their information, and how we can ensure there's more diversity of opinion being shared to better align these two opposing perspectives. Sure, we're never going to all see eye-to-eye, but if we're divided now, and there's an influential medium in the middle that's built, essentially, around fueling that division even further, it seems that the situation is only going to get worse.
And the solution could be hard to pinpoint - Facebook works to give users more of the content they like and want to see, which ultimately puts the power in their hands. If users want to narrow their perspective, they can. Given this, how do you prevent people from shutting out opposing voices and gaining more understanding of those around them?
It used to be that you'd watch the TV news, you'd read the newspaper - and these sources would at least provide some exposure to alternate perspectives or events. But now media is perspective-based, interest-based - you can home in on only the things you're interested in and see more and more of that, underlining, reinforcing and entrenching your agreement or opposition.
This is likely to be one of the most pressing issues of the next stage of media development, and right now it feels like a flashpoint, a critical juncture where the true impacts of this new media construct are becoming clear. How we address them, now that we've come so far in this new self-curation and user-controlled cycle, is something no one can answer at this stage.