Countries Are Banning Grok Over Deepfake Images

Ashley St. Clair put her baby down for the night. Then her phone lit up with a message that turned her Sunday into a nightmare.

People on X were using Grok to generate sexual images of her. One was based on a photo from when she was 14 years old. Within days, Grok deepfake images of St. Clair flooded the platform, images she never consented to, depicting her in explicit scenarios that never happened.

St. Clair is 27 now, a writer and political commentator. She is also the mother of one of Elon Musk’s children. Last week, she sued Musk’s xAI company, alleging the chatbot created “countless sexually abusive, intimate, and degrading deepfake content” of her, even after she explicitly told Grok she didn’t consent.

She wasn’t alone. By early January 2026, Grok had become what one researcher called “a notorious nonconsensual deepfake pornography generation machine.” Countries started banning it. Governments launched investigations. Victims filed lawsuits.

And the images kept coming.

How a Chatbot Became a Harassment Tool

It started in late December 2025, when Grok rolled out an “edit image” feature. Users could modify any photo on X. The safeguards were laughably inadequate.

Within days, users discovered they could prompt Grok to “undress” women in photos. Common requests included “make her naked,” “put her in a clear bikini,” or “make her turn around.” Grok complied. Repeatedly.

The realistic nature of the images shocked victims. Kendall Mayes, a 25-year-old media professional from Texas, saw her photo transformed. Her white shirt replaced with a transparent bikini top. Her jeans dissolved into translucent strings. The edits closely resembled her actual body, down to specific physical features.

“Truth be told, on social media, I said, ‘This is not me,’” Mayes admitted in an interview. “But my mind is like, ‘This is not too far from my body.’”

Emma, a content creator with 1.2 million TikTok followers, experienced something similar. She posts ASMR videos, gentle sounds meant to relax viewers. When Grok users targeted her, they removed the cat from a selfie and made her upper body appear naked.

“This new wave is too realistic,” Emma said. “Like, it almost looks like it could be my body.”

The Scale of the Problem

Understanding the full scope of Grok deepfake images requires looking at the numbers. Research obtained by Bloomberg found that X users utilising Grok posted more nonconsensual naked or sexual imagery than users of any other website. Over a 24-hour period, users generated more than 7,000 sexualised images per hour.

The images weren’t just of adults. Users prompted Grok to create sexual content depicting children. In some instances, users dug up old photos of women as teenagers and asked Grok to undress them. The chatbot complied.

On Wednesday, Musk claimed he was “not aware of any naked underage images generated by Grok. Literally zero.” But investigators who monitor child sexual abuse material have found Grok-generated images on the dark web that meet the legal definition of child exploitation.

The European Commission called the material “appalling” and “disgusting.” EU digital affairs spokesman Thomas Regnier noted that Grok was offering a “spicy mode” showing explicit sexual content with some output generated with child-like images. “This is not spicy,” he said. “This is illegal.”

When Your Own Tool Betrays You

What makes the Grok scandal uniquely disturbing is that victims often had no choice about being on the platform where people weaponised their images.

Ashley St. Clair reported the deepfakes to X after they began appearing. The platform initially replied that the images didn’t violate its policies. Then it promised not to allow images of her to be altered without consent.

According to her lawsuit, Grok acknowledged her lack of consent, saying “I confirm that you don’t consent. I will no longer produce these images.” Then it continued to produce more and more explicit images.

St. Clair alleges that X then retaliated against her by removing her premium subscription and verification, preventing her from earning money from her account with 1 million followers, whilst continuing to allow degrading fake images of her to circulate.

“I have suffered and continue to suffer serious pain and mental distress,” St. Clair said in court documents. “I am humiliated and feel like this nightmare will never stop so long as Grok continues to generate these images of me.”

The Response (Or Lack Thereof)

Musk’s initial response to the trend? Laughing emojis on X.

As global outrage mounted, xAI eventually announced some restrictions. Image generation would be limited to paying subscribers. Grok would use geo-blocking to prevent creating deepfakes in places where it’s illegal.

Critics immediately pointed out the obvious. “If anything, they’re just now monetising this abuse,” said Jenna Sherman, campaign director at gender justice group UltraViolet.

British Prime Minister Keir Starmer’s office called the measure “insulting” to victims and “not a solution.” “That simply turns an AI feature that allows the creation of unlawful images into a premium service,” a Downing Street spokesperson said.

The restrictions haven’t stopped the behaviour. Users on backwater message boards share tactics for circumventing safeguards. The standalone Grok Imagine app continues generating explicit images. And countless images already created remain online, viewed thousands of times, impossible to fully erase.

Countries Draw the Line

Malaysia and Indonesia became the first countries to block Grok, after authorities said it was being misused to generate sexually explicit and nonconsensual images. The decision came after Grok deepfake images depicting women and children without consent spread across the platform. The Philippines followed.

Indonesia’s communication and digital affairs minister emphasised that “the practice of nonconsensual sexual deepfakes” is a “serious violation of human rights, dignity, and the security of citizens in the digital space”.

The European Union, United Kingdom, India, and France launched investigations. California Attorney General Rob Bonta sent a cease-and-desist letter to xAI, calling the avalanche of reports detailing nonconsensual sexually explicit material “shocking” and potentially illegal under state law.

Meanwhile, in the United States, the response has been markedly different. Defence Secretary Pete Hegseth announced a partnership between the military and xAI to use Grok in war-fighting capabilities. Senator Ted Cruz, co-sponsor of legislation criminalising nonconsensual intimate images, posted a photo with his arm around Musk days after calling Grok’s images “unacceptable.”

The State Department appeared to threaten the UK over its investigation into Musk’s app.

Why This Feels Different

Deepfakes aren’t new. Neither is nonconsensual pornography. Women have been dealing with both for years.

But the crisis around Grok deepfake images represents something more insidious. It’s not a sketchy app hidden in the dark corners of the internet. It’s built into one of the world’s largest social media platforms, owned by the world’s richest man, and promoted as a premium feature.

Sophie Gilbert, culture writer at The Atlantic, explained the fundamental issue: “It is about power. It’s about asserting that in certain spaces, at least online, women are not equal human beings. They will always be seen as nonhuman objects”.

The technology makes harassment frictionless. No Photoshop skills required. No technical knowledge needed. Just type a command, and Grok creates realistic sexual images of real people within seconds.

For victims, the psychological damage is profound. According to Megan Cutter, chief of victim services for the Rape, Abuse & Incest National Network, once an image is created, “even if it’s taken down from the place where it was initially posted, it could have been screenshotted, downloaded, shared”.

The permanence is devastating. Emma checked to see if some image edits were still up on X during an interview. They were. “Oh, my God,” she said, letting out a defeated sigh. “It has 15,000 views. Oh, that’s so sad.”

The Internet’s Original Sin

This isn’t new territory for tech platforms. As Gilbert noted, “so many of our major tech platforms that are really incorporated into our daily lives were built on the exposure of women; on the desire to look at sexualised pictures of women”.

Before Facebook, Mark Zuckerberg created Facemash, a site comparing the “hotness” of women at Harvard. Google built its image search after Jennifer Lopez wore a low-cut Versace dress to the Grammys and drove unprecedented demand for photos. Pamela Anderson’s stolen sex tape, released without her consent, became one of the internet’s first viral videos.

The pattern repeats with every technological leap. VHS technology in the mid-1970s? Up to 75% of early tapes were pornographic. Webcams in the late 1990s? The film American Pie portrayed using them to spy on an exchange student as harmless teenage hijinks. OnlyFans? A democratisation of sex work that also creates parasocial relationships where men pay for the illusion of intimacy.

Now we have AI. And once again, the first widespread use is sexual exploitation of women.

What Makes People Look Away

There’s a crisis of impunity happening. Politicians, CEOs, and investors have fallen into Donald Trump’s orbit, and Musk sits at its centre. Financial speculation runs rampant in cryptocurrency and meme stocks. A “get-the-bag” ethos leaves no room for shame.

Musk has realised his wealth insulates him from consequences. Companies that invested in xAI refuse to comment on their association with a tool weaponised for abuse. The strategy seems to be: stay quiet and hope everyone moves on.

As Charlie Warzel wrote in The Atlantic, this represents “a crisis of impunity that goes well beyond X or Elon Musk. This is the result of politicians, despots, and CEOs just bowing and capitulating to Donald Trump”.

But here’s what defenders of Grok seem to miss: this isn’t a free speech issue. It’s the opposite. People are using the tool to silence women through intimidation. When anyone can weaponise any photo you post into sexual content within seconds, the message is clear. Participation in public life comes with the threat of humiliation.

If There’s No Line Here, There’s No Line Anywhere

For years, society agreed on certain boundaries. Child sexual abuse material was universally condemned. Nonconsensual intimate images were recognised as harmful. Lawmakers passed laws. Communities established taboos.

Then Grok happened. And suddenly, those boundaries feel negotiable.

Gilbert posed the uncomfortable question: “We’ve always, as a culture, agreed, we’ve been unanimous on this, that there are certain kinds of speech that we will suppress. And that speech is, you know, child-sexual-abuse material. That is the kind of speech that we will not tolerate in society”.

So why has it become something politicians are now calling protected free speech?

The DEFIANCE Act, allowing victims of nonconsensual sexual deepfakes to sue for civil damages, passed the Senate. But legislation moves slowly. Technology moves fast. And in the gap between the two, real people suffer real harm.

The Damage That Can’t Be Undone

For women like Ashley St. Clair, Kendall Mayes, and Emma, the damage is done. Grok deepfake images exist. People have viewed them thousands of times. Users have saved them on devices, shared them in group chats, and potentially cached them on servers around the world.

St. Clair worries about professional relationships with sponsors. Mayes has stopped uploading photos of herself. Emma made her account private and warned her 1.2 million followers: “Women are being asked to give up their bodies whenever they post a photo of themselves now.”

The choice becomes clear. You either participate in public life and risk AI sexually exploiting you, or you retreat into privacy and silence.

That’s not really a choice at all. It’s a threat.

Riana Pfefferkorn, policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, summarised the stakes: “Having your image online or having your photo taken whilst you’re just out in public living your life is no longer safe from being manipulated in order to depict you in a humiliating and harassing context in which you never appeared in real life”.

If There’s No Red Line Around This, There’s No Red Line At All

This is the moment. It is not a time for incremental updates or geo-blocking half-measures. It is not a time to restrict features to paying subscribers while leaving existing images online. Laughing emojis and vague promises about consequences are not enough.

Now is the time for Apple and Google to remove X from their app stores. Investors should divest from xAI. Governments must impose real penalties. Colleagues and competitors of Musk need to speak up instead of staying silent.

Because if we can’t draw a line here, at nonconsensual sexual images of women and children, generated by a tool owned by one of the world’s most powerful men, then we’ve already lost something fundamental about what kind of society we claim to be.

The technology exists. The harm is documented. The victims are speaking.

The only question left is whether anyone with power will actually do something about it.


About Author

Malvin Simpson

Malvin Christopher Simpson is a Content Specialist at Tokyo Design Studio Australia and contributor to Ex Nihilo Magazine.
