Putting Grok back in the bottle? Decoding how X claims it is taming misbehaving AI
X’s sudden effort to civilise Grok is less about ethical enlightenment and more a belated damage-limitation exercise
After weeks of allowing X users to virtually undress women, often children, using the social media platform’s artificial intelligence (AI) chatbot Grok, Elon Musk’s X and xAI now claim they’ll stop Grok from responding to user requests to generate images of human subjects in “revealing clothing such as bikinis”. Earlier today, X Safety in a post on X (where else?) insisted they “remain committed to making X a safe platform for everyone and continue to have zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content.” It took Elon Musk’s companies quite a while to get here, but they finally seem to have turned the corner towards common sense, with what appears to be a three-point plan.
This comes as Grok has faced intense legal scrutiny in many countries following a deluge of non-consensual morphing of images of people by users on X via the Grok chatbot. All it really needed was a photo and a prompt telling Grok (by tagging @Grok) to “put her in a bikini”, or something similar. Despite the rather bullish tone X Safety has adopted in trying to convince everyone that Grok will now behave, it is not clear whether these prompt-based image editing tools will remain available on the Grok chatbot app or website. Many countries were contemplating banning Grok or X altogether, if reports are to be believed. Over the weekend, Malaysia and Indonesia led the way as the first countries to ban the Grok AI tool in their geographies.
“Technological measures”
The first, X says, has to do with changes to how Grok is accessible to X users. “We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers. Additionally, image creation and the ability to edit images via the Grok account on the X platform are now only available to paid subscribers,” explains the post by the X Safety account.
Last week, X had made a feeble attempt at damage control by placing these “put her in a bikini” image generation and editing capabilities behind a paywall, that is, limiting them to X’s paying subscribers. It isn’t difficult to assess how successful (or otherwise) that plan proved to be in the days since. US senators have also asked Apple and Google why they haven’t removed the X and Grok apps from their app stores over the sexualised image generation.
Piggyback on local laws
The second part of the latest set of measures to tame Grok is a geographical, location-based update which, read carefully, ties Grok’s behaviour and restrictions to local laws. “We now geoblock the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X in those jurisdictions where it’s illegal,” the post says. One wonders whether there are any geographies where this behaviour would still be allowed, so long as it is considered legal there.
All this while, Elon Musk has maintained a bullish stance on Grok and generative AI’s uncontrolled behaviour; would you have expected anything else? In a post yesterday, he maintained, “I [am] not aware of any naked underage images generated by Grok. Literally zero. Obviously, Grok does not spontaneously generate images, it does so only according to user requests.”
Ashley St. Clair, the mother of one of Elon Musk’s children, noted in a conversation with BBC Newshour last week that Grok had generated sexualised photos of her as a child. St. Clair has since claimed in another post that X has blocked her account from subscribing to its Premium plans.
Musk’s excuses didn’t stop there. “When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state. There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately,” he wrote. Musk was particularly displeased when the UK’s Technology Secretary Liz Kendall, who was working closely with the country’s regulatory body Ofcom, said that any move to block X in the UK would have the government’s full support. At the time, Musk had grumbled: “They just want to suppress free speech.”
All is good, they say
Finally, X maintains that the events of the past few weeks and the implementation of this new policy do not change its existing safety protocols (these were conspicuously missing all this time, mind you), which dictate that all AI prompts and generated content posted on X follow the platform’s content policy guidelines. “However content is created or whether users are free or paid subscribers, our Safety team are working around the clock to add additional safeguards, take swift and decisive action to remove violating and illegal content, permanently suspend accounts where appropriate, and collaborate with local governments and law enforcement as necessary,” X insists.
Of course, as with most instances of AI behaviour gone awry these days, X does try to shift the blame to the fast-paced evolution of generative technology. “The rapid evolution of generative AI presents challenges across the entire industry. We are actively working with users, our partners, governing bodies and other platforms to address issues more rapidly as they arise,” reads the closing note on that post.
Questions remain over how X will implement this three-pronged policy, a key element being how Grok’s models will determine whether a shared image is of a real person, and how they will restrict actions when users inevitably try to con the AI. Musk’s own behaviour over the last few days has been less than exemplary: he re-shared a user’s post showing a generated image of the UK’s Prime Minister Sir Keir Starmer in a bikini.