The Grok nudification controversy is set to cause even more problems for X and xAI, with another lawsuit being launched against the company, this time by a group of teenagers who claim that their photos were digitally stripped down and posted publicly via X’s AI tool.
As reported by The Washington Post, three teenagers, two of whom are minors, are suing xAI over the production and distribution of naked images of themselves that were generated by X’s Grok AI chatbot.
The teenagers claim that X’s AI chatbot generated the images from real photos, which were then made public via X’s Grok account, where people can see replies to the bot.
The teens are accusing X of distributing, possessing and producing with intent to distribute child pornography.
It’s the latest of several legal and regulatory actions launched against xAI and Grok over the controversy, which, at one point back in January, had seen Grok producing over 6,700 images every hour that would be categorized as “sexually suggestive or nudifying,” according to analysis by Bloomberg.
And a significant portion of those generated images also depicted minors.
As reported by the BBC, the Internet Watch Foundation (IWF), a charity that works to remove child abuse material from the internet, discovered various examples of explicit images of young girls that had been generated by Grok, with some victims as young as 11 years old. The IWF said that it had found "sexualized and topless imagery of girls" on a "dark web forum" in which users claimed they used Grok to create the imagery.
That runs counter to X owner Elon Musk’s pledge to combat CSAM, which he has repeatedly identified as a “top priority” for the app, while also claiming that previous Twitter management didn’t do enough on this front.
Indeed, X has repeatedly touted its enhanced efforts to combat CSAM, though experts have disputed its data, and the impact of those efforts.
In this context, it’s also worth noting that Musk initially refused to make any changes to Grok in response to the nudification controversy, despite the claims that images of children were being stripped down by the AI tool. Musk’s first response was to frame the criticism as an attempt to attack X over its free speech approach, which, he claims, makes the platform a threat to established media narratives.
X did eventually restrict the use of Grok for this purpose, though reports suggest that Grok will still strip down images of people if you ask it in the right (or wrong) way.
The Grok nudification controversy is already set to cost X a heap in fines, with separate investigations underway in the U.S., EU, Ireland and Spain.
The more specific accusation that X enabled the distribution of CSAM content is even more concerning, and will likely raise more regulatory eyebrows around the world.
And with Musk also taking every chance that he can to criticize regulators and politicians, he hasn’t exactly ingratiated himself with the people who will now be able to issue penalties against the app.
It’s another damaging story for Musk’s evolving AI experiment, which is itself currently going through a rebuild, and it could also lead to further restrictions on the use of the X app.