Facebook's advanced ad targeting has again come under scrutiny after a recent report, published by The Australian, showed how - to quote the article:
"Facebook is using sophisticated algorithms to identify and exploit Australians as young as 14, by allowing advertisers to target them at their most vulnerable, including when they feel "worthless" and "insecure", secret internal documents reveal."
The report suggests that Facebook gave a presentation to one of Australia's top four banks in which it showed how advertisers could use the network's targeting tools to home in on young people when they're feeling 'stressed' or 'anxious', and thereby use their emotional state to boost response to ad content.
Facebook has since posted an official response to the report's claims, saying that "the premise of the article is misleading."
"Facebook does not offer tools to target people based on their emotional state. The analysis done by an Australian researcher was intended to help marketers understand how people express themselves on Facebook. It was never used to target ads and was based on data that was anonymous and aggregated."
Facebook is correct: among the millions of potential ad targeting options, emotional state is not one of them. But then again, it's a little disingenuous of The Social Network to side-step the claim completely - Facebook itself recently published a report which provides insights to help marketers reach people who've recently broken up with a partner, effectively capitalizing on their emotional state.
As such, tapping into the potential of emotion, and emotional vulnerability, is part of the Facebook marketing mix, and other measures within Facebook's ad targeting tools point to the same. Targeting by 'relationship status' (as noted) or 'life events' obviously involves a level of emotional understanding, and the potential to use, or misuse, it to best benefit. Facebook might not spell it out specifically, but it's part of the process.
The wider concern in this case is that we're talking about younger users, people who are already emotionally vulnerable.
Should you be able to target young people - or anyone - based on their emotional state? Is this a wider debate we need to have?
Interestingly, a former Facebook product manager has noted that this approach fits with how The Social Network has traditionally used its vast data stores:
"For two years I was charged with turning Facebook data into money, by any legal means."
Antonio Garcia-Martinez, who worked at Facebook between 2011 and 2013, says he doesn't necessarily see this type of targeting - or the use of any trends and correlations that show up in Facebook's data - as unethical.
"Why should those examples of targeting be viewed as any less ethical than, say, ads selling $100 Lululemon yoga pants targeting thirtysomething women in affluent postal codes like San Francisco's Marina district?"
Garcia-Martinez makes an interesting point - the concept of Facebook's in-depth ad targeting is based on homing in on those most responsive to your ads, which, inevitably, also means those most susceptible to them. Those users, of course, don't have to buy anything targeted at them - and I can't imagine many products benefit from being shown to users in distressed states - but the question remains whether it's ethical to target them at all. And is that any different from any other form of ad targeting?
There's no doubt Facebook data can be used to infer intimate details about a user. Back in 2015, I interviewed Dr. Michal Kosinski, the researcher behind a controversial report which looked at how people's Facebook activity could be used as an indicative measure of their psychological profile.
Kosinski and his team found that a person's Facebook data could be used to predict almost anything - whether you smoked, whether your parents were divorced, your sexual orientation. On a wide enough scale, with enough data points to correlate, Facebook insights could be used to develop a more accurate psychological profile of a person than their friends, their family, even their partners could.
Facebook has access to all of these data points and can do this correlation itself - the capability is not in question. What's concerning is how such insights might be used (and note, Kosinski himself was hesitant to give me any access at all to his data tools, due to the potential for misuse).
The power of this data was also raised in the recent US Presidential election, when reports emerged that a group called Cambridge Analytica was behind the intricate targeting and messaging process which, eventually, saw Donald Trump elected (the group has also been linked to Brexit). Facebook data was reportedly among its information sources (though Cambridge Analytica has since denied this), which further underlines the power and influence Facebook holds.
And while Facebook has denied any such use of its data, it's really not hard to see how it would be possible. The Social Network has more than 1.8 billion active users, and research suggests that around 44% of the entire U.S. population now gets at least some of its news from the site.
That's a lot of data to tap into, a lot of correlating and supporting data points - and a lot of influence in their output. To think that Facebook hasn't considered using such insights for this purpose is naive - their advertising model is built on generating the best response for your ads, by showing them to the most receptive audience. A lot of that comes through anonymized, aggregated data, which, in other words, means you have no idea how, exactly, they're choosing the most relevant audience for your content.
For the most part, this just means each user is seeing more relevant ads, but as we've seen with the spread of fake news, there are ways to mis-use and manipulate the Facebook system.
If that makes you uncomfortable, if you don't like the thought of Facebook potentially helping advertisers target vulnerable young people, the solution is to get off Facebook. Shut it down, delete your account, remove as much of your data as you can, if you can.
The alternative - and more likely outcome for most - is that you accept it. Or ignore it, whichever makes you feel better. For better or worse, this is the new age of data tracking we live in. That's not to say it's right, or that you should just turn a blind eye and move on. But data tracking is only becoming more advanced, and instances like this will become more common, whether you're aware of them or not.
Is it a concern? Absolutely. Will people stop using Facebook as a result? Probably not.
Maybe the next anti-smoking type movement will be anti-Facebook, with illustrative examples of how data misuse can have negative impacts on society.
Either way, the debate over data ethics is set to rage on for some time yet.