Elon Musk’s Grok AI bot digitally removes garments from ladies with one left ‘dehumanised’

Women are being stripped naked online as perverts use the AI chatbot Grok to turn innocent photos into X-rated content – putting Elon Musk’s X under fire for not preventing the abuse

Pervs have found a dodgy new use for Elon Musk’s Grok that leaves women feeling violated (Image: NurPhoto via Getty Images)

Elon Musk’s social media platform X (formerly known as Twitter) has sparked outrage after twisted users found a shocking new use for its AI chatbot Grok – stripping the clothes from women in online photos. The BBC found several examples of women being made to appear in bikinis without their consent, or even being placed in lewd situations.

XAI, the company behind Grok, ignored the broadcaster’s request for comment apart from sending an automated reply that referred to “legacy media lies”.

But journalist Samantha Smith told the PM programme she felt “dehumanised and reduced into a sexual stereotype” after such an image was made of her.

“Women are not consenting to this,” she said. “While it wasn’t me that was in states of undress, it looked like me and it felt like me and it felt as violating as if someone had actually posted a nude or a bikini picture of me.”

A Home Office spokesperson said it was legislating to ban nudification tools. Under a new criminal offence, anyone who supplies such tech will “face a prison sentence and substantial fines”.

Ofcom said tech firms must “assess the risk” of people in the UK viewing illegal content on their platforms. But the regulator didn’t confirm whether it’s already investigating X or Grok over such AI images.

Grok is a free AI assistant – with some paid-for premium features – which responds to X users’ prompts when they tag it in a post. The chatbot can give reactions or add context to other posters’ remarks, but X users can also edit an uploaded image through its AI image editing feature.

Grok has been criticised for allowing users to generate photos and videos with nudity and sexualised content, and it was previously accused of making a sexually explicit clip of Taylor Swift.

Clare McGlynn, a law professor at Durham University, told the BBC that X or Grok “could prevent these forms of abuse if they wanted to”.

They “appear to enjoy impunity”, she added. “The platform has been allowing the creation and distribution of these images for months without taking any action and we have yet to see any challenge by regulators.”


XAI’s own acceptable use policy prohibits “depicting likenesses of persons in a pornographic manner”. In a statement to the BBC, Ofcom said it was illegal to “create or share non-consensual intimate images or child sexual abuse material” and confirmed this included sexual deepfakes created with AI.

It said platforms such as X were required to take “appropriate steps” to “reduce the risk” of UK users encountering illegal content on their platforms, and take it down quickly when they become aware of it.

