As was widely reported earlier in the month - and has now been made official by the Justice Department - Facebook has been fined a record $5 billion for data privacy violations stemming from the Cambridge Analytica scandal, in which data from more than 50 million Facebook user profiles was accessed by the analysis company for the purposes of influencing voter actions. Facebook will also pay a separate $100 million fine to the SEC, likewise related to the Cambridge Analytica issue.
Facebook had been preparing for this penalty - in its Q1 performance update, Facebook warned investors that a significant fine was coming, which could impact its bottom-line numbers in Q2. At the time, Facebook said that it had set aside $3 billion to cover the penalty, so the final announced figure is significantly larger. But as many have noted, for Facebook, which is on track to bring in $60 billion in revenue in 2019, the actual financial impact is relatively minor.
In addition to the fine, however, Facebook has also agreed to a range of expanded measures to help protect user data and information privacy moving forward.
As explained by Facebook:
"We’ve [also] reached an agreement with the Federal Trade Commission that provides a comprehensive new framework for protecting people’s privacy and the information they give us. The agreement will require a fundamental shift in the way we approach our work, and will place additional responsibility on people building our products at every level of the company. It will mark a sharper turn toward privacy, on a different scale than anything we’ve done in the past."
The full requirements of the main agreement, listed on the FTC's website, are as follows:
- Facebook must exercise greater oversight over third-party apps, including by terminating app developers that fail to certify that they are in compliance with Facebook’s platform policies or fail to justify their need for specific user data;
- Facebook is prohibited from using telephone numbers obtained to enable a security feature (e.g., two-factor authentication) for advertising;
- Facebook must provide clear and conspicuous notice of its use of facial recognition technology, and obtain affirmative express user consent prior to any use that materially exceeds its prior disclosures to users;
- Facebook must establish, implement, and maintain a comprehensive data security program;
- Facebook must encrypt user passwords and regularly scan to detect whether any passwords are stored in plaintext; and
- Facebook is prohibited from asking for email passwords to other services when consumers sign up for its services.
In addition to these measures, Facebook has also agreed to additional oversight from FTC and Justice Department officials, adding further layers of transparency and violation detection.
The internal impacts, for the company, are significant, though they likely won't lead to any major shifts in how advertisers use Facebook's data options. There will be changes to what types of data can be used, which could impact elements like Lookalike Audiences, and Facebook has already warned advertisers of potential limitations based on use of its 'Clear History' tool. But advertising, and in-depth audience targeting, will remain part of Facebook's DNA - though Facebook has noted that it is also undertaking a new review of its systems, which could uncover more issues it needs to resolve.
And while the announcement is not expected to alter the company's path, it is a huge development in relation to online data collection for advertising overall. In a related but separate case, the FTC has also announced law enforcement actions against Cambridge Analytica, its former Chief Executive Officer Alexander Nix, and Aleksandr Kogan, an app developer who worked with the company. The filing alleges that CA "used false and deceptive tactics to harvest personal information from millions of Facebook users". Kogan and Nix have also agreed to a settlement with the FTC, which will restrict how they conduct any business in the future.
In many ways, the Cambridge Analytica case was something that had to happen in order to rein in the misuse of user data, which had become a major concern - though one that no one was addressing. No platform in history has had access to the depth of information that Facebook now holds, while other social platforms are also gathering user data and insights at a massive scale. As we now know, such insights can be used against us, which is why we need regulations like this in order to protect people against misuse through psychometric targeting and subliminally influential actions.
Yet, at the same time, the CA case also highlighted that such targeting is possible, which will likely lead to more organizations seeking to use the same techniques. Which, again, is why we need this more stringent data usage framework.
Given this, Facebook's new obligations are a positive step - and Facebook notes that:
"The accountability required by this agreement surpasses current US law and we hope will be a model for the industry."
It seems almost laughable that Facebook could be used as a model for data privacy regulations, but maybe, under this new process, it will become exactly that.
Data privacy will be a major issue moving forward, and hopefully this agreement sets a new framework for how such data use is monitored.