After one of the company’s toughest weeks on record, Facebook has now been left with a significant mess to clean up. But how can the company do it - what can Zuckerberg and Co. do to repair their brand, and reassure users that their data is safe?
Really, they can’t. As I’ve written previously, once the data is out, there’s not much they can do about it - they can conduct all the forensic audits they want, but the information will remain out there in some form, and will likely remain usable and indicative. But Facebook needs to show users that it’s improved, that it’s learned its lessons, and that its systems are now the better for it.
Will that be enough? Will some users turn away – or will it blow over and we’ll go back to using Facebook and their various apps as we always have?
The answers to these questions could have serious implications for the platform’s future, while the changes implemented may also make it more difficult for social media marketers to gather audience insights.
Here are some of the updates Facebook has announced thus far.
Over on the ‘Facebook for Developers’ blog, an announcement has been released regarding new rules around how the platform can be used.
As explained by Facebook’s Ime Archibong:
“To maintain the trust people place in Facebook when they share information, we are making some updates to the way our platform works.”
Among those updates, Facebook has announced that they’ve paused their app review process, which includes a halt in new bots being added to Messenger. This will enable Facebook to conduct an in-depth review of their platform, which will include an investigation into:
“…all apps that had access to large amounts of information before we changed our platform in 2014 to reduce data access, and we are conducting a full audit of any app with suspicious activity.”
How, exactly, Facebook will go about managing that process is hard to know – if an app developer, for example, had accessed all that data, then saved it elsewhere, it may be impossible for Facebook to track. And you’d have to think that would have happened, considering system changes, database updates, etc., since 2014.
In this case, it seems that Facebook won’t be able to do much more than ask developers for assurances that they have indeed gotten rid of all the data they downloaded – which is what Cambridge Analytica reportedly told them, too. Still, as noted, Facebook needs to do something. It may not eradicate all potential uses of their data, but they can’t just shrug their shoulders and move on.
In addition to this, Facebook says that it will inform users if an app is removed for data misuse.
“If we find developers that misused personally identifiable information, we will ban them from our platform. Moving forward, if we remove an app for misusing data, we will notify everyone who used it.”
Note that Facebook says it will do this ‘moving forward’, not retrospectively. You can only imagine the chaos that would ensue if Facebook were to inform the owners of all 50 million reportedly misused profiles that their data may have been sold.
Facebook also intends to better highlight the app permissions each user has provided, to make people aware of what they may be sharing, and with whom, while they’re also expanding their ‘Bug Bounty’ program to help find potential misuses.
Facebook also says that they’re updating their policies and terms around data use, and putting more regulations on business-to-business applications.
In addition to these mostly back-end changes, you can also expect Facebook to start rolling out more visible measures to reassure users that they’re in control of their data.
As noted, you’ll likely see new prompts appear in your feed some time soon which outline how you can control your app permissions, while TechCrunch is also reporting that Facebook’s trying out 'expiring friend requests', which may be another visible measure to help users feel more comfortable about their data sharing and permissions.
Expect Facebook to investigate similar options which put time limits on data-related requests, or give users more options on what, exactly, they want to share.
For example, Facebook could add in a new pop-up when you give an app permission to access your Facebook account, which would enable users to choose the specific info they want to share. That type of process is already available, and used by some apps, but Facebook may now make it compulsory, giving users more control.
Will that solve the problem? Probably not – people rarely read through the terms and conditions, which is largely what’s got us into this situation in the first place. But it may help Facebook provide that extra reassurance, which is what they really need at this stage.
There's also a question as to how much this might change Facebook's product roadmap timeline - reports have already indicated that they're looking to delay the rollout of their smart speaker device, while it could also make it harder for Facebook to push ahead with their monetization efforts on Messenger and WhatsApp. With people more concerned about their privacy, any such push will come with a higher level of risk - and if Facebook's brand is now associated with data concerns, that could force a larger re-think of strategy.
Facebook's not the only company utilizing personal data in this way, but for the first time, its size may actually work against it, sparking fears of what the company knows, and how it could exploit such insights.
More Data, More Problems
Really, this gets to the heart of the broader issue - misuse of Facebook’s data is a massive concern, because Facebook has so much of it, but even if you took Facebook out of the equation, you’d still have a significant issue.
Think about all the apps and devices we now use each day that track our personal data. Your phone tracks your location, making it easy for you to find your way around. Your smartwatch tracks your pulse rate and the distance you’ve walked. Your shopping center loyalty card logs your purchases, while smart home devices perform an expanding range of tasks. Netflix knows what you watch.
As with Facebook Likes, all of these things seem innocuous in isolation, but on a large enough scale, they can be highly indicative of your personality, your habits. Your psychological state.
The issue is that the rate at which we’re uploading our personal data is outpacing our understanding of what can be done with it. Ideally, all that data is used for good, to help us live better lives, to connect us with discounts and show us more personally relevant content. But there’s a dark side to that data too, and people with bad intentions can turn it against you.
This is why the Facebook case is so important - but it’s also not only Facebook that should be under scrutiny. Yes, in this case it’s Facebook data that’s been utilized, but any standardized changes would have to apply to big data access more broadly - and as noted, the key issue is that our regulatory processes have not kept pace with technological innovation on this front.
Facebook needs to reassure users, definitely - they need to show that they’re taking this seriously and working to fix the data issues. But really, we need the discussion to focus on big data on a broad scale, and on if, and how, we can best mitigate such concerns in future.