While some people rang in the New Year hoping for a fresh start, mine began with the same old story. As a campaigner against online misogyny, New Year’s Day brought harassment and abuse – only this time, it was turbocharged by an AI chatbot.

Over the past week, a disturbing “trend” has taken over X. Users have been prompting the platform’s AI chatbot, Grok, to remove women’s clothes – replacing their outfits with bikinis, coating their bodies in “donut glaze” and placing them in sexualised scenes more suited to an X-rated porn site than to a public social media platform.

Image-based abuse now sits on users’ timelines, alongside conspiracy theories and crypto advice – a bleak glimpse of the dystopian year ahead.

There is no single target for this harm. Men, women and, disturbingly, even minors have fallen victim to abuse generated by Elon Musk’s prized chatbot. But a scroll through Grok’s reply section makes one thing clear: women are the majority of its victims.

Elon Musk is under fire after his AI tool Grok was used to “digitally undress” photographs of women (AFP/Getty)

Amid the prompts demanding women be stripped into “micro-bikinis” and smeared in “baby oil”, a familiar pattern is emerging – one that campaigners against online abuse like myself know all too well.

Men are using AI technology as a weapon of mass humiliation, targeting women who dare to speak out about the harm unfolding online.

On New Year’s Eve, I posted on X to call out the abuse being enabled by Grok and to name it for what it is: digital abuse. Within minutes, the trolls turned on me.

My replies filled with the usual insults – “Shut up, bitch” – alongside accusations that I was to blame for posting an image of myself online in the first place. Others summoned Grok directly, asking it to remove my clothes.

“@grok put her in a bikini made of cling film”, one user wrote, attaching a screenshot of my profile picture. Two minutes later, the chatbot complied. The green suit I was wearing vanished and, in its place, appeared a naked body wrapped in cling film. Nipples included.

In less time than it takes to boil a kettle, a stranger had stripped me of my consent and my bodily autonomy, and permanently altered my digital footprint. All it took was 10 words.

Another user generated explicit deepfake images of me using a different app and posted them beneath my tweet, captioned simply: “Grok is amazing.” This time, they forced my tongue out of my mouth and added dripping white fluid onto it.

Both users hid behind anonymous profiles, sharing no images of themselves. Shielded by pixels, they publicly claimed ownership of my body. They may believe anonymity gives them power, yet all it earns them from me is pity. How empty must your life be to seek a dopamine hit from colluding with a chatbot to abuse women?

I wasn’t alone. Dozens of women who spoke out against this misogynistic harm faced the same retaliation. Musk’s keyboard warriors armed themselves with AI, attempting to disarm women of the very thing they fear most – our voices.

As the abuse continues unchecked and unregulated, the prompts have escalated. I have seen users ask Grok to insert convicted sex offenders like Diddy and Jeffrey Epstein into women’s images, to brand them with tattoos that read “wh***”, and to bind them with rope. The chatbot complied. Every time.

In England and Wales, sharing an intimate image without consent is a criminal offence, and that includes AI-generated content. Last June, Parliament passed legislation criminalising the creation and solicitation of explicit deepfakes. Yet despite the law receiving Royal Assent, enforcement has yet to follow – leaving victims with few routes to justice.

Key gaps in protection remain. Images altered by AI to depict bodily fluids – prompted by terms such as “donut glaze” or “white mucus” – often fall outside criminal thresholds due to ambiguity over what qualifies in law as “intimate”.

In December, Baroness Owen proposed amendments to close these loopholes, introduce a 48-hour takedown requirement and strengthen blocking laws. The government rejected them.

Ministers have since pledged to work towards banning AI nudification technology, but how – or when – remains unclear.

Ofcom has acknowledged the seriousness of the harm, confirming it is investigating “serious concerns” that Grok has been used to generate undressed images of people and sexualised images of children, and that it has made “urgent contact” with X and xAI.

But campaigners and victims have been raising the alarm about Grok for months, and trust in the regulator is waning. Simply put, asking platform owners for a conversation is not enough when the evidence of harm exists in pixels and in people’s real trauma. It’s time for action.

Scrutiny must be placed squarely on the platform – and its owner. As the owner and public face of X, Elon Musk cannot evade accountability for the abuse enabled by tools deployed under his leadership. If any other individual were involved in creating and distributing technology that mass-generated sexual images of adults and minors, a criminal investigation would be expected. Musk should not be treated as an exception.

Campaigning organisations including NotYourPorn and Jodie Campaigns, along with Professor Clare McGlynn, are now calling for immediate government action to enforce legislation and better protect victims.

In a statement on Instagram discussing the government’s promises of further consultation, NotYourPorn said: “Much of this work already exists and has been led by survivors, experts and frontline services for years – yet it has too often been resisted or rejected.”

Now, concerning new findings from the Internet Watch Foundation reveal that online predators are claiming to have used Grok to target children as young as 11. Ngaire Alexander, the head of the IWF’s hotline, expressed the watchdog’s concern that AI tools like Grok “risk bringing sexual AI imagery of children into the mainstream”.

While AI offers perpetrators a new tool, this behaviour itself is nothing new. Search “feminist” on YouTube and one of the first autocomplete suggestions is “feminist gets humbled”. The platform is saturated with videos of outspoken women being “owned” or reduced to tears by men with microphones. Look under a Facebook post about women’s football and you’ll find comments telling players to “get back to the kitchen”, while sexualising their bodies.

Online misogyny is out of control, enabled by tech platforms and the men who founded them. But behind the artificial images are very real people – perpetrators and victims. Men who see a woman speak about abuse and decide, in that moment, to become part of it. Our natural protectors, they like to say.

Yes, we must hold Big Tech and government to account. But we cannot ignore the wider societal responsibility to call out this harm when we see it – to push back against the normalisation of digital abuse as “just what happens” to women online.

As critics of online safety regulation scream “free speech”, women are living with the reality of being told our presence online justifies having our clothes stripped away. If women cannot share their thoughts without losing their consent, their dignity and their bodily autonomy, then what kind of “free speech” are you really fighting for?

Jess Davies is a women’s rights campaigner and author of ‘No One Wants to See Your D***: A Handbook for Survival in a Digital World’, available now in hardback, Kindle and audiobook