For many, 2018 was the year that we gained a much clearer understanding of exactly how much of our personal data digital platforms are tracking, and how they're keeping tabs on us across the web.
The year started with the revelation that Cambridge Analytica had misused Facebook user data to target people with political messaging, which then led to digital leaders appearing before US Congress, the implementation of the GDPR, calls for Facebook, specifically, to be regulated, criticism of company policies, questions about how platforms failed to protect information, and on and on.
For the first time, we were given more insight into the depth of data platforms hold on us, and how they use it - which is concerning, particularly given such information has already landed in the wrong hands. But at the same time, I would hazard a bet that, despite all this, the majority of users have not changed their digital platform usage one bit.
Certainly, that's reflected in Facebook's data - Facebook's usage rates continued to climb following the Cambridge Analytica reports, with only Europe registering a small decline in monthly usage.
Facebook itself noted in June that it hadn't seen "any meaningful impact" on user behavior since the Cambridge Analytica scandal. While the case itself raised significant concerns, the majority of users seemed to move on, and continue as normal.
Why is that? Why is it that we seem less concerned about handing over our personal data than we do about what we'd miss out on if we were to deactivate our Facebook accounts?
The main issue appears to be context. While it sounds bad that companies and bad actors are able to access in-depth personal insights about us, for the most part, the worst they seem to be able to do is target us with ads. So what if you get shown more relevant ads and content?
People like to believe that they're in control of their own leanings, that they're the ones who choose whether or not to respond to a post or promotion. An advertiser knowing their potential psychological preferences seems less relevant than personal will - if I see a politically targeted ad, for example, I can choose how I respond. Right?
The difficulty here lies in explaining, in relevant terms, how such targeting could be impacting your behavior - and not just targeting from the advertisers and activists themselves, but also how Facebook, or any other digital platform, might choose to show you specific content to improve its own engagement.
For example, in a recent talk titled 'How Facebook tracks you on Android (even if you don't have a Facebook account)', researchers Frederike Kaltheuner and Christopher Weatherhead presented their findings on how Facebook and Google use tracking tools built into many apps to build profiles of users. Their findings are fascinating - take a look, for example, at their listing of tracking parameters which are transmitted even if you choose to opt out of ad personalization.
In another section of the talk, Kaltheuner discussed how information shared by a range of popular apps could help determine a user's personal leanings - even if they didn't use Facebook itself.
"Our first finding is that the vast majority of apps share data the second it's opened, and the data that's being transmitted indicates what kinds of app you use, when you use them, combined with a unique ad ID. And knowing what kinds of apps somebody uses, and when, can give quote a detailed picture of someone's life."
Kaltheuner provides an example using just four highly downloaded apps - 'Qibla Connect', a Muslim prayer app; 'Period Tracker Clue', which tracks menstruation cycles; the job search app 'Indeed'; and the kids' app 'Talking Tom' (worth noting, each of these apps has been downloaded at least 10 million times, so they are very popular and widely used).
"That looks like a person who is likely Muslim, likely female, likely looking for a job and who likely has a child."
Knowing this, the platforms themselves could be targeting users with specific information based on their likely interests - and not just ads, but posts. If Facebook wanted to boost engagement, it would make sense for it to use such insight to show these users posts from Pages discussing related topics. The users would be more likely to click through on such posts and engage with that content, and Facebook could then use the activity data being submitted to entice more on-platform activity - not necessarily for nefarious purposes, but to keep people around for longer.
The problem with that is it could skew a user's perception of what's happening in the world. Let's say Facebook determines that the user in this example is interested in trending Muslim issues, due to their religious leanings, so the algorithm shows them stories about terror attacks, criticisms of Muslims in western countries, and fake news about protests or anti-Muslim sentiment. Such stories generate a heap of engagement on Facebook, so it would make sense that a Muslim user might be shown them - which would then lead to a more biased view, based on what they're seeing.
In this sense, the control of information can override free will - people respond to the information shown to them, and that's especially true if such reports play to confirmation bias, reinforcing things they believe to be correct, whether they are or not.
It's not just malicious actors who could be skewing perspectives, but algorithms themselves, which leads to more division, more anger and more tension within society, with each side being largely blind to the other's perspective.
But that broader context is hard to explain - it's hard to demonstrate how such intricate targeting, based on your personal behaviors, can have such significant impacts on your perception. It's likely why Facebook opted to prioritize posts from your connections over Pages in the News Feed (your friends' posts are probably less divisive than algorithm-chosen highlights based on your particulars), and why it chose to deactivate its 'Trending News' section, which was personalized based on your behavior. The ways in which your opinions could be shaped by the intricate details being tracked - through separate app use as well as on-platform activity - are hugely significant, and largely invisible to you and your sphere of perception.
And when you also consider that Facebook is now a key source of news content for a growing number of users, you, again, get a better understanding of the potential concern.
Yet, despite this, despite the issues raised about digital platform data tracking and its potential impacts, a recent report found that the average Facebook user would "require more than $1000 to deactivate their account for one year".
Social media is now a key part of how we interact - it's what we do, how we connect. And without broader context as to why data misuse is such a major concern, or why platforms likely can't be relied upon to protect us from it (given that such data is the basis on which their business is built), it's hard to see this changing.
Maybe 2019 will be the year that data privacy is taken more seriously, and that we start to see significant pushback against such practices within the digital industry. But I doubt it.
What would you do without your apps, without Facebook? The benefits outweigh the concerns - at least, without more relevant context as to what those concerns actually are, and how they actually impact our day-to-day lives.
Will 2019 be the year that such context is made clearer, and actual changes put into effect?