X’s Grok app has been reinstated in Indonesia, after it was recently banned for generating sexualized images of people without their knowledge or consent.
In early January, in response to the Grok nudification trend on X, Indonesia’s Communications Ministry threatened to ban both X and the separate Grok app if concerns related to “degrading pictures of women and children” were not addressed.
A few days later, the ministry followed through on that threat, banning the Grok app entirely and restricting access to X. But now, after assurances from X that the issue has been addressed, and that users will no longer be able to generate non-consensual sexualized images via the AI bot, Indonesia has announced that it's lifting the ban, which will enable X to continue operating its platforms in the nation.
As reported by The New York Times:
“Indonesia’s Ministry of Communication and Digital Affairs said in a statement on Sunday that the ministry had received a letter from X Corp ‘outlining concrete steps for service improvements and the prevention of misuse.’ The ban will be lifted ‘conditionally,’ and Grok could be blocked again if ‘further violations are discovered,’ Alexander Sabar, the ministry’s director general of digital space monitoring, said in the statement.”
Which means that X is now back in action in all Southeast Asian nations where it's available, with both Malaysia and the Philippines also recently lifting their bans on the app in response to the nudification controversy.
So, all good, Grok usage has been restricted to ensure that no more non-consensual nude images are being produced, and all’s back to normal. Right?
Well, yes and no.
Yes, in that X has implemented restrictions to stop people from generating offensive images via Grok, at least to some degree. But a question remains as to why X pushed back on restricting this in the first place, with Musk initially refusing to make any changes to the tool, and framing the criticism as a political witch hunt of sorts.
Musk initially claimed that various other AI tools enabled the generation of deepfake nudes, but no one was going after them, suggesting that the real motivation was to shut X down due to its "free speech"-aligned approach.
Which is not accurate, and even if it were, why would X want to give people the capacity to generate non-consensual nudes of people, even children, via its AI bot?
That belies Musk's much-publicized opposition to CSAM content, which he made a key focus of his reform of Twitter when he took over the app. Musk repeatedly claimed that previous Twitter management had not done enough to combat CSAM, and that he would make this his "#1 priority" in his time as chief.
And Musk's new management team did provide some data which suggested that they had improved the platform's efforts on this front. But more recent reports indicate that CSAM content is now more prevalent on X than ever, while the company has also ended its contract with Thorn, a nonprofit organization that provides technology to detect and address child sexual abuse content (Thorn says that X stopped paying its invoices).
And then there are the Grok deepfakes, which enabled users to generate thousands of sexualized images in the app every day, including, again, images of children.
And Elon, for a time at least, defended this functionality, and sought to deflect criticism of its availability.
Why? I don't know; it makes no sense, and there's no legitimate reason why anybody would need this function. Yet, driven by his ambition to make his AI the most used generative AI option on the market, Musk initially refused to make a change, even though he could have.
Worth noting, also, that Musk recently bragged that Grok is now generating more images and video than all other AI tools combined. That's a claim he can't viably make, for one, as he doesn't have access to data on the outputs of other engines. But it also raises the question of why that might be. Could it be because of the thousands of fake nudes that X users have been creating?
It's hard to see how any of this aligns with Elon's previous declarations of a no-tolerance approach to CSAM content, or how that remains a key focus for him.
Progress, it seems, remains his guiding star, at the expense of all else if need be, while his constant reframing of everything as a political flashpoint is making it increasingly difficult to side with him in the name of measured development.