It's taken some time, but Facebook has finally announced that it will step up its efforts to limit discriminatory ad targeting, removing a range of specific targeting options to better ensure its systems can't be used to limit audiences in an unfair manner.
The changes relate to housing, employment and credit ads, as detailed by Facebook COO Sheryl Sandberg:
- Anyone who wants to run housing, employment or credit ads will no longer be allowed to target by age, gender or zip code.
- Advertisers offering housing, employment and credit opportunities will have a much smaller set of targeting categories to use in their campaigns overall. Multicultural affinity targeting will continue to be unavailable for these ads. Additionally, any detailed targeting option describing or appearing to relate to protected classes will also be unavailable.
- We’re building a tool so you can search for and view all current housing ads in the US targeted to different places across the country, regardless of whether the ads are shown to you.
The changes have come about as a result of action taken against Facebook by the National Fair Housing Alliance, the American Civil Liberties Union, and the Communications Workers of America. After various investigations found that Facebook's granular ad targeting options could be used in a discriminatory - and illegal - manner, Facebook has essentially been forced to shift focus and implement these updates.
The first problem on this front was uncovered by ProPublica in 2016, whose investigation found that Facebook's system enabled advertisers to exclude users with black, Hispanic, and other “ethnic affinities” from seeing their ads.

Facebook updated its policies to prevent such usage in early 2017, but even then, it remained possible for advertisers to apply such exclusions through Facebook's complex ad targeting system. In the wake of the Cambridge Analytica scandal, Facebook removed more than 5,000 ad targeting options along the same anti-discrimination lines, and also rolled out a new opt-in agreement process, which gave it a legal enforcement option against businesses that chose to misuse its targeting tools.

But those measures still didn't stop businesses from targeting and/or excluding specific audiences. Facebook's latest update takes its efforts a step further, and should significantly limit the capacity for businesses to focus on audience segments in a discriminatory way.
That said, given the complexity of its targeting algorithm, Facebook can't entirely rule out its system being used for such purposes in some manner. Examining the update for The New York Times, Pauline Kim, a professor of employment law at Washington University in St. Louis, notes:
“It’s within the realm of possibility, depending on how the algorithm is constructed, that you could end up serving ads, inadvertently, to biased audiences.”
Facebook has thus far declined to provide more insight into how its ad targeting algorithms work, which leaves this element unclear. It's possible that, because the algorithm learns from user behavior, it could be trained on biased habits, facilitating discriminatory ad targeting by default.
That's an inherent problem with algorithm-defined systems - because algorithms are trained on data obtained from actual usage, they're also tilted towards the existing biases within that audience. That means societal discrimination could still be part of the calculations: if users within one demographic group register more interest in a certain type of ad, the system will logically target more of that group as a result, a prediction that's only as unbiased as the sample it's built on.
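To illustrate the mechanism Kim describes, here's a minimal, hypothetical sketch in Python - entirely synthetic data, not Facebook's actual system - showing how a model that never sees a protected attribute can still end up scoring audiences along group lines through a correlated proxy feature:

```python
# A minimal, hypothetical sketch of how an engagement-optimizing model can
# inherit bias from its training data. All data here is synthetic; this
# illustrates the feedback-loop problem, not Facebook's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (e.g. a demographic group) - never shown to the model.
group = rng.integers(0, 2, size=n)

# A "neutral" interest signal that happens to correlate with group membership
# (pages followed, neighborhood, etc.) - a proxy for the protected attribute.
proxy = group + rng.normal(0, 0.5, size=n)

# Historical clicks on a job ad: group 1 clicked more often in the past
# (perhaps because the ad was shown to them more), so the labels are biased.
click_rate = np.where(group == 1, 0.30, 0.10)
clicked = rng.random(n) < click_rate

# The model only ever sees the proxy feature...
model = LogisticRegression().fit(proxy.reshape(-1, 1), clicked)

# ...yet its predicted "relevance" still splits along group lines, so
# optimizing ad delivery on these scores skews the audience by group.
scores = model.predict_proba(proxy.reshape(-1, 1))[:, 1]
print(f"mean predicted click prob, group 0: {scores[group == 0].mean():.3f}")
print(f"mean predicted click prob, group 1: {scores[group == 1].mean():.3f}")
```

Because the historical labels already reflect skewed delivery, the model reproduces that skew - which is exactly why removing explicit targeting options alone doesn't fully close the loop.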
How, and even whether, that can be avoided is still a hot topic among machine learning experts, but even so, anything Facebook can do to limit active targeting in this regard is a positive step.