In news that will surprise no one, Meta has reportedly been looking to sneak through a controversial system update that would give it more data to train its systems, with the company seemingly hoping that broader political chaos and turmoil in the U.S. will keep the change out of the spotlight.
According to reports, Meta is planning to add facial recognition to its artificial intelligence-powered smart glasses, as a means of enhancing connection.
Which is not overly surprising, given the added connectivity benefits this could provide for glasses wearers. But facial recognition has long been a sensitive area, with Meta shutting down its facial recognition system on Facebook entirely in 2021, after user backlash over the automated detection of faces in images, particularly via photo tagging.
But more recently, Meta has been quietly reintroducing facial recognition for account security purposes. As such, it’s not a big surprise to see it also looking to add the technology to its glasses, though that would potentially open up a much bigger debate about non-user privacy, and the broader information mesh that Meta could build from that data.
Meta knows this will be controversial, but apparently, it also has a plan to limit negative impacts.
According to reports, Meta is hoping to quietly add this feature amid broader political disruption in the U.S., in order to limit public blowback.
As per an internal Meta communication (as reported by The New York Times): “We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”
Yeah, that’s not great, though it’s also not a major surprise for the company formerly known as Facebook.
Why is that?
Because, unfortunately, Meta now has a long history of questionable practices when it comes to avoiding regulation, limiting negative exposure, and/or shelving changes that could have a positive impact whenever those changes might also hurt the company’s bottom line.
- In 2009, a study conducted by University of Cambridge researchers demonstrated how Facebook profiles could be used to reveal personal information about millions of people.
- In 2012, Meta conducted a study that altered the News Feeds of nearly 700,000 users to test the impact of injecting more positive or negative content. Users were not informed that they were part of the study, which ultimately found that the content you’re shown can affect your emotional state.
- In 2019, it was revealed that Meta had been paying teen users to track their app activity, both within its own apps and in others, as a means of measuring the competition. Meta had a long history of tracking app trends in this way, via programs that were found to violate Apple’s developer terms and were subsequently shut down.
- In 2020, leaked internal notes revealed that Meta had studied how Facebook amplifies polarization, then shelved the findings due to concerns that acting on them would hurt engagement.
- In 2021, former Meta employee Frances Haugen alleged that the company had knowingly avoided taking stronger action against some of the most harmful elements of its platforms, due to the impact any such moves could have on usage, and thus profits. Haugen said that internal research showed Meta had failed to address concerns around hate speech and anti-vaccine content, and had buried findings showing that Instagram was harmful to teens.
- Also in 2021, Meta shut down a German research project that had been monitoring algorithmic amplification on Facebook and Instagram. It also shut down an NYU study examining political ads and COVID misinformation.
- In 2023, a Harvard University research project that looked at the spread of disinformation on Facebook was reportedly shut down after Meta CEO Mark Zuckerberg donated $500 million to the school.
- In 2023, internal communications from Meta, which had been unsealed as part of a court case, showed that Zuckerberg had personally and repeatedly shut down initiatives designed to improve the well-being of teens on Facebook and Instagram, at times directly overruling senior executives.
- In 2025, Meta was found to have been training its AI models on an illegal library of pirated books.
- Various studies have also shown that Facebook can have a significant impact on political division in smaller nations, where the platform often serves as a key gateway to the broader internet.
Given the company’s history, it’s not really a surprise that it’s become adept at disguising its more controversial updates. But that doesn’t, of course, make it any better, and the fact that Meta is trying to hide a change that it knows will cause blowback is a concern. If anything, it’s a signal that we should be paying closer attention to the privacy implications here.
Facial recognition technology is increasingly being used for expanded detection and enforcement efforts, including identifying people entering sports stadiums and matching them against criminal and/or credit records in real time. In China, facial recognition is even being used to catch people jaywalking and mail them fines, or to further penalize people who haven’t paid parking fines. Worse, such systems have also been used to identify Uyghur Muslims and single them out for tracking.
I highly doubt that people will be open to Meta implementing a system that could facilitate that kind of surveillance at huge scale via its AI glasses, and the negative response could well hurt the company’s bottom line, in terms of AI glasses sales for one.
But Meta, seemingly, is hoping that you won’t really notice, because of everything else going on.
It’s not a great look for the company, which hasn’t really improved its reputation since the Meta rebrand.