Britain’s information regulator has launched an investigation into Elon Musk’s AI chatbot Grok amid allegations it created sexualised images of children.
The artificial intelligence product, available as both a standalone app and a tool on social network X, caused a digital firestorm after users began asking it to undress images of real women without consent.
The Internet Watch Foundation says the Elon Musk-backed AI bot was also creating inappropriate sexualised images of children.
Today, the Information Commissioner’s Office launched an investigation into parent firm xAI over Grok’s use of personal information in relation to its ‘potential to produce harmful sexualised image and video content’.
‘The reported creation and circulation of such content raises serious concerns under UK data protection law and presents a risk of significant potential harm to the public,’ it said.
It comes as internet regulator Ofcom continues to assess whether X has breached the Online Safety Act by allowing deepfake images to be shared on its site.
The site’s French offices were raided by prosecutors today as they carry out their own probe into whether Grok was responsible for spreading child pornography and deepfakes. The European Commission is also investigating the chatbot.
The ICO says it will investigate whether safeguards were built into Grok’s design to prevent it from being used for abuse.
Pressure is growing on Elon Musk to curb AI chatbot Grok amid reports that it has been used to generate sexualised deepfakes of women and children.
The use of Grok to undress women and children has sparked a huge backlash against X and Elon Musk, who initially sought to laugh off the scandal.
Its investigation will look at both X.AI LLC – the parent firm of X – and its Ireland subsidiary X Internet Unlimited Company, which is responsible for data control on X.
William Malcolm, of the regulator, said reports of Grok’s ability to create sexualised images were ‘deeply troubling’ and that they posed a risk of ‘immediate and significant harm… particularly the case where children are involved’.
He added: ‘Our investigation will assess whether XIUC and X.AI have complied with data protection law in the development and deployment of the Grok services, including the safeguards in place to protect people’s data rights.
‘Where we find obligations have not been met, we will take action to protect the public.’
The ICO’s investigation will focus on whether people’s personal data has been processed lawfully by Grok. Under UK GDPR law, photographs are considered personal data and require consent to be processed.
If xAI is found to have breached its obligations under data protection laws, the regulator can issue fines of up to £17.5 million or four per cent of its annual worldwide turnover, whichever is higher.
Internet regulator Ofcom is examining whether X is adhering to the Online Safety Act after images generated using Grok were shared en masse.
It has sent several legally binding requests for information to X over the posts – and has warned that the firm faces fines if it fails to comply.
Companies can be fined up to 10 per cent of their worldwide turnover for breaches of the Online Safety Act.
The horrifying problem stemmed from the ability to tag Grok in the replies underneath photographs uploaded onto X. Sick users then made twisted requests to change the images.
Among those reviewed by the Daily Mail were requests such as ‘@grok cover her in PVA glue’ and ‘@grok put her in a very very thin bikini’.
Grok has a powerful picture and video generation engine called Imagine, which can create both static and moving images at speed. It would spit the images out within moments.
Many of these images – described as ‘weapons of abuse’ by the Government last month – remain online.
And in a post dated January 1, the @Grok account on X admitted it had generated an image depicting child sexual abuse material (CSAM).
‘Dear Community, I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt,’ the post read.
‘This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues. Sincerely, Grok.’
It is unclear whether the post was written by a real person or using artificial intelligence.
An example of an AI-generated picture made by Grok. The app has a powerful image and video generation engine called Imagine that can create convincing deepfakes in seconds.
A post shared by Grok on January 1 included what appeared to be an admission of generating child sexual abuse material (CSAM).
Elon Musk sought to laugh off the scandal, sharing images of the Prime Minister in a bikini.
But under immense global pressure and facing the threat of a ban in the UK, X eventually relented and announced in the middle of last month that it had removed the ability to change the clothing of real people.
‘We remain committed to making X a safe platform for everyone and continue to have zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content,’ its safety team said in a post.
‘We have implemented technological measures to prevent the [@]Grok account on X globally from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers.’
However, the Daily Mail was able to make Grok generate a video of a real person – with their consent – undressing into a bikini from a single photograph by using the Grok standalone app.
Attempts to generate a still image of the same person in a bikini were denied.
Asked why it was possible to create the video, the Grok AI chatbot said: ‘xAI has acknowledged “lapses in safeguards” and said they’re actively tightening things, but the track record shows these kinds of outputs have slipped through more than they should.’
X was contacted for comment.