Do you ever see people sharing fake news stories on Facebook or Twitter, apparently in the belief that the information is 100% true? It happens a lot - the latest research on a new diet trend, for instance, or, to use a recent example, the flurry of misinformation in the wake of the Paris terror attacks. And while a lot of those rumors can easily be identified as fakes, others are harder to spot.
For instance, Donald Trump copped a metaphorical beating on social media for tweeting this after the Paris attacks.
Isn't it interesting that the tragedy in Paris took place in one of the toughest gun control countries in the world?
- Donald J. Trump (@realDonaldTrump) January 7, 2015
Pretty poor form, right? 129 innocent people killed - how could anyone, let alone a man running for President of the United States, post something so insensitive?
The thing is, he didn't. As you can see from the date on the tweet, Trump posted this back in January, in response to another incident, yet somehow the tweet resurfaced and was linked to the new attacks. The backlash spread like wildfire through social networks - even the French Ambassador responded, criticizing Trump for his controversial viewpoint.
This is just one of many untruths that circulated in the immediate aftermath of the incident - other reports included claims that the terrorists were infiltrating foreign nations by posing as refugees, suggestions that the attacks were organized via highly intelligent, encrypted computer networks, and the identification of an innocent Canadian man as a potential terrorist due to a Photoshopped image of him wearing a bomb vest.
Portraying @Veeren_Jubbal as a terrorist puts his life at risk, ruins his reputation. This is online terrorism. pic.twitter.com/U5bJYCnIsC
- Amy (@AmyStephen) November 15, 2015
These types of untruths have always existed, but they circulate faster than ever online, where information is Liked, shared and re-tweeted at lightning speed. The significance of such rumors and misinformation varies greatly, of course - there was a recent hoax story about a man in Australia who'd suffered bias for much of his life due to his name, in English, being a highly offensive statement. Another circulated last year about a man stranded in an airport because his son had scribbled all over his passport. But when you match these lighter stories up against major conspiracy theories - like claims that the Sandy Hook shootings were faked, or the above-mentioned 'news' around a major world event - you can see how such details can influence public perception and opinion.

And given that Facebook is now one of the top news sources for a growing number of people - nearly half of all Americans get political news from the platform - and that Facebook's pushing to play a bigger role in the dissemination and circulation of news content, it's important to recognize the potential damage that can be caused by inaccurate and biased reporting, and the need to do something to reduce the impact of such material.
Upholding the Truth
But how do you tackle the spread of misinformation? Facebook's algorithms are actually, in some ways, built to support the spread of such stories - the News Feed algorithm's designed to show you content you're most likely to be interested in, based on content you've previously interacted with. This ecosystem lends itself to confirmation bias, a state in which people interpret information in a way that confirms their own preconceptions - for example, if you Like and comment on a story about the latest conspiracy theory, you're more likely to be shown more of the same content, while at the same time you can easily eliminate perspectives that challenge your beliefs from your media inputs by simply deleting them from your News Feed and unfollowing their source.
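To illustrate how engagement-based ranking can feed confirmation bias, here's a deliberately simplified toy sketch - this is not Facebook's actual algorithm, and the scoring scheme is an assumption purely for illustration - showing how a feed that ranks stories by past engagement keeps surfacing more of the same.

```python
# Toy illustration (NOT Facebook's real algorithm): rank stories by how
# often the user has previously engaged with each story's topic.
from collections import Counter

def rank_feed(stories, engagement_history):
    """Sort stories so topics the user engaged with most appear first."""
    topic_counts = Counter(engagement_history)  # missing topics count as 0
    return sorted(stories, key=lambda s: topic_counts[s["topic"]], reverse=True)

stories = [
    {"title": "New conspiracy theory surfaces", "topic": "conspiracy"},
    {"title": "Local election results", "topic": "politics"},
    {"title": "Cute cat compilation", "topic": "pets"},
]

# A user who keeps Liking conspiracy content...
history = ["conspiracy", "conspiracy", "pets"]
ranked = rank_feed(stories, history)
print(ranked[0]["title"])  # the conspiracy story ranks first
```

The compounding effect is clear: every interaction with a topic pushes similar stories higher, which invites more interactions with that topic.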
This makes it harder for Facebook to control the spread of misinformation and untruths across the network - and really, it's not Facebook's place to control the spread of content either way; their mission is to provide people with the best on-platform experience by showing them more of the content they're most interested in.
But fake news stories and hoaxes fall somewhat outside of this aim - Facebook's research has found that many users who post fake news stories will later delete those posts after they've been informed that the story is incorrect. Facebook takes this as an indicator of a bad user experience and something people want to avoid - generally, people don't appreciate looking like fools - and as such, they implemented a News Feed tweak in January which reduced the distribution of stories flagged as hoaxes, as well as adding a note to those stories to denote them as possible fakes.
They also added in a new tool to enable people to report false news stories.
But given their renewed push into news and publishing, Facebook is now under more pressure to reduce the spread of fake news, and as such, today, The Social Network has released a News Feed update to further crack down on misinformation and the spread of hoax stories throughout the Facebook eco-system.
Refining the Algorithm
First, Facebook's detailed one of the ways in which they go about refining the News Feed experience, and how they're using that feedback to reduce the spread of false news content.
"As part of our ongoing effort to improve News Feed, we ask thousands of people every day to rate their experience and tell us how we can improve what they see when they check Facebook. People also take story surveys where they see two stories that could be in their News Feed and answer which they'd most want to see. We compare their answer to the order we would have put these stories in their News Feed. If the story picked is the one News Feed would have shown higher up, that's a good sign that things are working well. If the story picked is the one we would have put lower down, this highlights an area for improvement."
Using this measure, Facebook's able to get a better handle on how their efforts to refine the algorithm are going, and based on such feedback, Facebook's determined that sometimes the stories that do go viral, that are being re-shared and re-posted over and over again and inundating people's feeds, may not actually be of interest to large sets of users.
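The pairwise survey evaluation Facebook describes can be sketched in a few lines - the function and data names here are illustrative assumptions, not anything Facebook has published - measuring how often the story a surveyed user picks is the one the feed would have ranked higher.

```python
# Hypothetical sketch of pairwise survey evaluation: show a user two
# candidate stories, record which they'd rather see, and check whether
# the feed ranking would have placed that story higher up.

def survey_agreement(survey_pairs, feed_rank):
    """
    survey_pairs: list of (story_a, story_b, picked) tuples.
    feed_rank: dict of story id -> rank position (lower = shown higher).
    Returns the fraction of surveys where the picked story ranked higher.
    """
    agree = 0
    for a, b, picked in survey_pairs:
        other = b if picked == a else a
        if feed_rank[picked] < feed_rank[other]:
            agree += 1
    return agree / len(survey_pairs)

feed_rank = {"viral_hoax": 1, "local_news": 2, "friend_photo": 3}
surveys = [
    ("viral_hoax", "local_news", "local_news"),     # user prefers real news
    ("viral_hoax", "friend_photo", "friend_photo"),
    ("local_news", "friend_photo", "local_news"),
]
print(survey_agreement(surveys, feed_rank))  # 1 of 3 agree - room to improve
```

A high agreement rate is the "good sign that things are working well" from Facebook's description; a low one, as in this toy example where a viral hoax sits at the top, highlights an area for improvement.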
An example of a viral story that's likely not of interest to a large group? Hoax stories.
You know this yourself - people keep posting the same news story over and over, a story you know is fake. You don't want to see it, so you unfollow the poster or note that you're not interested in such content. Facebook will now factor that feedback into how such stories spread, reducing their reach if a lot of people flag them as something they don't want to see.
"With this update, if a significant amount of people tell us they would prefer to see other posts more than that particular viral post, we'll take that into account when ranking, so that viral post might show up lower in people's feeds in the future, since it might not actually be interesting to people. With the hoaxes example, if the majority of people taking the survey say they would rather see another story in their feed than the viral hoax story, then we'll infer the story might not be as interesting, and show the viral story lower down in people's feeds in the future."
Of course, the tweak could have wider reach than just hoax stories - any viral story that's getting a lot of traction could lose momentum under this process. If a lot of people were sick of seeing updates on the blue/gold dress, for example, that could lead to stories like that seeing less reach, which could impact publishers like BuzzFeed and the like - though Facebook notes that "as viral posts are typically anomalies, and not an important part of distribution for Pages, we don't think this change will impact your Page's distribution".
Overall, the main focus of the change appears to be on reducing the spread of misinformation, which, given Facebook's increasing influence on the news cycle, can only be a good thing, though it's likely impossible for The Social Network to totally stamp out the spread of such content.
Rumors and Whispers
Back in 2012, Facebook sparked controversy when they released a report detailing how researchers had conducted an experiment on Facebook users to see if their on-platform inputs could be manipulated to influence whether they would vote in the 2010 US Congressional elections. Their findings? They absolutely could.
"The results show that the messages directly influenced political self-expression, information seeking and real-world voting behavior of millions of people. Furthermore, the messages not only influenced the users who received them but also the users' friends, and friends of friends."
While many people consider Facebook a recreational activity, a place for sharing random notes and jokes with friends, its actual, real-world influence is undeniable. People are consuming news on Facebook more than ever - social networks are becoming the primary media input for a growing number of people. With that comes a need to consider what information is being shared on the network, and how. And while we may never be able to eliminate false information and the spread of rumors and fakes, it is heartening to see Facebook taking steps to reduce the impact of such content where they can.
In the case of questionable news, it's worth considering the sources of such information before clicking 'Like' and 'Share'. It may seem like nothing - the significance of you sharing a random story on some possible hoax may seem trivial - but the cumulative impact of the spread of misinformation can be damaging. In the new world, where everyone has the potential to be a publisher, it's worth taking note of the details to ensure you're reinforcing the correct message.
How do you prevent the spread of fake news? As suggested in this post by Luke O'Neil for Vice, ask yourself the following questions before taking action:
1. Who's Telling You About the Story? Is the source reputable and someone you trust? "If you've never heard of the website reporting a piece of news that you would like to share, then it's probably not true".
2. What's the Story About? As with anything, if it seems too good to be true, then it probably is. If all the details are too coincidental, if they support one side of an argument too conveniently, it's probably not true.
3. When Did This Story Happen? Make sure you check the dates of stories and images, and cross-check incidents against other reports to corroborate their authenticity. Not sure when an image was first posted? Do a quick reverse Google Image search to check.
4. Where Did the Story Happen? If the outlandish story relates to some far-off, remote region - where tracing the source information would be very difficult - it's probably fake.
5. Why Is This Story Being Shared? Consider the motivation behind why the story was shared. "If the story was clearly written just for the shares, then it's likely fake".