You wake up, you turn on the coffee pot, you pick up your phone and do a quick check of your e-mail notifications and social media feeds. No doubt many of you follow a similar routine - these days, social media's pretty well embedded into the everyday lives of a huge, and ever-growing, number of people around the world. We're always connected, we're constantly tuned in to the latest news and updates - information reaches us at a much faster rate than it has in generations past. But that connectivity, and our growing reliance on our social media feeds, could also be used against us.
While many people try to avoid confronting the potential dangers of such a refined information flow, there's no doubt that the means through which we consume media can be manipulated and used for ill ends - that the information we're transmitting back to social media networks and data providers can be re-purposed by marketers, advertisers, and anyone else with enough money and time, in order to influence our decisions. It may seem like nothing - you like a picture of a cat, you comment on a friend's photo - but all these actions add to a wider data pool. All the time you spend on social can be tracked, categorized and used to learn more about who you are, what you're interested in and, ultimately, how you can be reached with targeted messaging to influence your perspective.
This is the most fundamental shift in the media process, and one which we've not yet seen fully borne out. In the past, newspapers and TV networks could influence our decisions through selective broadcasting - taking sides and sharing arguments which they supported. But generally, that meant preaching to the choir - people would listen to this radio broadcaster or watch that TV station because they generally aligned with their views, so their influence was somewhat limited to those who already shared their viewpoint. But social media, and the new world of social media data, is changing that - now, you can use social media data profiling to learn all you need to know about specific audiences and audience subsets, then tailor your messaging for each segment in order to press their emotional response buttons and build support through psychographic targeting.
Considering this, could your social media activity be used to manipulate your thinking and change the way you vote in an election? There were a couple of interesting examples this week highlighting the possibilities of how social networks can play a significant role in the political process.
But first, some background. Back in 2010, around 340,000 extra voters turned out to take part in the US Congressional elections because of a single election-day Facebook message. This is based on researcher estimates - the findings were released two years later as part of a wide study into how Facebook can influence voter turnout and play a part in the electoral process.
The means through which these voters were influenced to vote was pretty simple:
"About 611,000 users (1%) received an 'informational message' at the top of their news feeds, which encouraged them to vote, provided a link to information on local polling places and included a clickable 'I voted' button and a counter of Facebook users who had clicked it. About 60 million users (98%) received a 'social message', which included the same elements but also showed the profile pictures of up to six randomly selected Facebook friends who had clicked the 'I voted' button. The remaining 1% of users were assigned to a control group that received no message."
The results of the test showed that users who received the informational message (the top message in the above image) voted at the same rate as those who saw no message at all, while those who saw the social message - with images of their friends included (lower example in above image) - were 2% more likely to click the 'I voted' button and 0.4% more likely to head to the polls than either group. Researchers estimated that the social message directly increased voter turnout by 60,000 votes, while a further 280,000 people were "indirectly nudged to the polls" by seeing messages in their News Feeds - notifications that their friends had voted.
When looking at those numbers, the percentage results seem minor - 0.4% of people being more likely to vote doesn't seem like a meaningful proportion, but when that's framed against the scale of Facebook, 0.4% of the more than 60 million people included in this test ends up being a significant amount. Now, of course, the final results of the 2010 US Congressional election saw the Republican Party regain control of the chamber, winning the popular vote by a margin of more than 5.8 million, so in context, the addition of 340,000 extra voters may not appear significant. But it could have been.
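To underline the scale point, here's a quick back-of-the-envelope calculation - a naive raw multiplication for illustration only, not the researchers' methodology (their published direct estimate of 60,000 votes was derived with statistical controls, which is why it's lower than this raw product):

```python
# Illustrative only: a raw multiplication showing why a small percentage
# lift matters at Facebook's scale. The study's actual estimates were
# derived with statistical controls, so the real figures differ.
social_group = 60_000_000  # approximate size of the 'social message' group
turnout_lift = 0.004       # the reported 0.4% turnout lift

extra_voters = social_group * turnout_lift
print(f"{extra_voters:,.0f} extra voters")  # 240,000 extra voters
```

A fraction of a percent that would be statistical noise for a TV station becomes hundreds of thousands of people at Facebook's audience size.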
The experiment showed that Facebook could absolutely play a role in influencing how people vote, underlining the significance of Facebook as a means for motivating real-world response. And that was back in 2010, when Facebook had only 608 million monthly active users in total. Facebook in 2015? 1.55 billion MAUs. You can bet that influence is significantly larger now, and growing every day as more young, digital-native users hit voting age.
And this wasn't the only time Facebook experimented with users to see if they could influence voter behavior - in the 2012 Presidential Election, Facebook reportedly showed a random selection of 1.9 million users more news stories in their News Feeds, a move which led to a 3% increase in the number of people who voted from that group. Again, this seems small, but these are limited experiments, they're not aimed at delivering big results. They've been designed to see whether Facebook has the ability to influence actions, not necessarily inspire widespread action.
The results show they can, and you can bet that these are only a couple of the many experiments The Social Network has conducted on this front - so given that Facebook has the capacity to influence voter response, should we be concerned about Zuckerberg and Co. using that capability to benefit their own interests?
A Question of Trump
This query came up again this week after Mark Zuckerberg released an official response to US Presidential Candidate Donald Trump's recent call to ban Muslims from entering the United States. Zuckerberg took to Facebook to voice his support for Muslims, saying that they would always be welcome on his network.
"I want to add my voice in support of Muslims in our community and around the world. After the Paris attacks and hate..." - Mark Zuckerberg, via Facebook, Wednesday, December 9, 2015
But given Zuckerberg's response, and Trump's speech bordering on violating Facebook's terms, would The Social Network actually consider banning Trump from Facebook, or censoring his posts in order to support the opposing side of the argument?
This question was raised by Alex Kantrowitz on BuzzFeed, who noted that:
"Facebook's next steps aren't clear. The company could remove Trump, or his posts, from the platform, and effectively become a censor of political speech. The company's statement, which said it's looking at this content on a case by case basis, already implies that this is an option."
This places Facebook in a difficult position - Facebook's pushing to become a bigger source of online news and information, and as such, there's an implied need to maintain impartiality and balance. But at the same time, we know that Facebook could, ever so subtly, influence voter behavior to ensure Trump doesn't gain traction. Would they do that?
At this stage, you'd suspect the answer is no - Facebook's long stated that they're not the arbiters of 'quality' in a content sense. Their News Feed algorithm, for example, highlights material of interest to each user, regardless of what that material might be. But it is an interesting question - if Facebook felt strongly enough, they could use their influence to change an election outcome. It's even within the realm of possibility that they already have.
This then leads to another controversial use of Facebook data in the US Presidential campaign - a report in The Guardian yesterday detailed how US Presidential candidate Ted Cruz is using psychological data, based on research harvested from Facebook users - largely without their permission - to refine his messaging and boost his campaign.
The report suggests that Cruz is working with a data company called 'Cambridge Analytica', which is run by researchers from Cambridge University, to create detailed psychographic profiles of US citizens based on their Facebook activity. The research behind this data (if the report is correct) sounds very similar to the psychological profiling study conducted by The University of Cambridge and Stanford University which looked at how people's Facebook activity could be used as an indicative measure of their psychological profile. That report found that, based on Facebook activity alone, the researchers could determine a person's psychological make-up more accurately than their friends, their family - better even than their partners (I spoke to the head researcher, Dr Michal Kosinski, earlier this year and the findings he noted were pretty amazing).
According to The Guardian:
"Analysis of Federal Election Commission (FEC) filings shows Cruz's campaign has paid Cambridge Analytica at least $750,000 this year. The "behavioural microtargeting" company has also received around $2.5m over the past two years from conservative Super Pacs to which [Republican donor Robert Mercer] or members of his family have donated."
The report suggests that Cruz and his campaign team are using this data to create "highly targeted campaign messages", enabling Cruz to campaign on specific issues but communicate them in multiple ways to different audiences for maximum impact. Using such a data-driven process is clever, and will no doubt help Cruz gain traction, but there are obvious questions over the ethics of such a process - should political parties be able to obtain and use information posted on Facebook to refine and specify their messages in this way?
As noted by Michael Zimmer, an associate professor at the University of Wisconsin:
"It's one thing for a marketer to try to predict if people like Coke or Pepsi, but it's another thing for them to predict things that are much more central to our identity and what's more personal in how I interact with the world in terms of social and cultural issues."
Yet, such is the new, data-rich world in which we live - using social media information, much of it publicly available to the right experts, or obtainable through groups like Cambridge Analytica which have gathered such insights through legal means, anyone with enough money could get detailed, focused psychological profiles of their audience, then use those elements as trigger points to better refine their messaging. In the majority of cases the impact of this data use would be minimal - they can, of course, only target you based on your interests; they can't make you think something different from what you already do.
This is where things could get murky - if you were to know, for example, that certain users came from broken homes, from poor backgrounds or were more susceptible to addiction or manipulation, maybe that could be used in a way that would seem less acceptable. And maybe not by political candidates necessarily, but other groups with fewer ethical limitations.
None of this may come to anything, all of these little details could be lost in the shuffle of the wider Presidential race as the campaign wears on and the next issue takes over the spotlight, then the next. But it is interesting to consider the role that social media could be having in the background, and how such data and insights can be used in ways we may not even realize. It's a different ball game, and while the tools themselves aren't necessarily anything new, the scale on which they're available most definitely is. That's significant, and it's worth considering when looking at how information is spread, and how our growing reliance on social media plays a part in this.
Main image via Shutterstock