When I was a little girl, there was nothing scarier than a stranger.

In the late 1980s and early 1990s, we kids were told, by our parents, by TV specials, by teachers, that there were strangers out there who wanted to hurt us. “Stranger Danger” was everywhere. It was a well-meaning lesson, but the risk was overblown: most child abuse and exploitation is perpetrated by people the children know. It’s much rarer for children to be abused or exploited by strangers.

Rarer, but not impossible. I know, because I was sexually exploited by strangers.

From ages five to 13, I was a child actor. And while of late we’ve heard many horror stories about the abusive things that happened to child actors behind the scenes, I always felt safe while filming. Film sets were highly regulated spaces where people wanted to get work done. I had supportive parents, and was surrounded by directors, actors, and studio teachers who understood and cared for children.

The only way show business did endanger me was by putting me in the public eye. Any cruelty and exploitation I received as a child actor was at the hands of the public.

“Hollywood throws you into the pool,” I always tell people, “but it’s the public that holds your head underwater.”

Before I was even in high school, my image had been used for child sexual abuse material (CSAM). I’d been featured on fetish websites and Photoshopped into pornography. Grown men sent me creepy letters. I wasn’t a beautiful girl – my awkward age lasted from about age 10 to about 25 – and I acted almost exclusively in family-friendly movies. But I was a public figure, so I was accessible. That’s what child sexual predators look for: access. And nothing made me more accessible than the internet.

It didn’t matter that those images “weren’t me”, or that the fetish sites were “technically” legal. It was a painful, violating experience; a living nightmare I hoped no other child would have to go through. Once I was an adult, I worried about the other kids who had followed after me. Were similar things happening to the Disney stars, the Stranger Things cast, the preteens making TikTok dances and smiling on family vloggers’ YouTube channels? I wasn’t sure I wanted to know the answer.

When generative AI started to pick up a few years ago, I feared the worst. I’d heard stories of “deepfakes”, and knew the technology was getting exponentially more realistic.

Then it happened – or at least, the world noticed that it had happened. Generative AI has already been used many times to create sexualized images of adult women without their consent. It happened to friends of mine. But recently, it was reported that X’s AI tool Grok had been used, quite openly, to generate undressed images of an underage actor. Weeks earlier, a girl was expelled from school for hitting a classmate who allegedly made deepfake porn of her, according to her family’s lawyers. She was 13, about the same age I was when people were making fake sexualized images of me.

In July 2024, the Internet Watch Foundation found more than 3,500 images of AI-generated CSAM on a dark web forum. How many more thousands have been made in the year and a half since then?

Generative AI has reinvented Stranger Danger. And this time, the fear is justified. It is now infinitely easier for any child whose face has been posted on the internet to be sexually exploited. Millions of children could be forced to live my same nightmare.

In order to stop the threat of a deepfake apocalypse, we need to look at how AI is trained.

Generative AI “learns” by a repeated process of “look, make, compare, update, repeat”, says Patrick LaVictoire, a mathematician and former AI safety researcher. It creates its output based on things it’s memorized, but it can’t memorize everything, so it has to look for patterns and base its responses on those. “A connection that’s useful gets reinforced,” says LaVictoire. “One that’s less so, or actively unhelpful, gets pruned.”
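To make that loop concrete, here is a toy sketch in Python. It is not any real image model, just a hypothetical one-parameter predictor whose single “connection” is strengthened or weakened each time its output is compared with a training example.

```python
# Toy sketch of the "look, make, compare, update, repeat" loop described above.
# This is NOT a real image model: it is a hypothetical one-parameter predictor
# trained by gradient descent, used only to illustrate a "connection" being
# reinforced or weakened.

training_data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, target) examples
weight = 0.0           # the model's single learned "connection"
learning_rate = 0.05

for _ in range(200):                          # repeat
    for x, target in training_data:           # look at an example
        prediction = weight * x               # make an output from the current pattern
        error = prediction - target           # compare it with the example
        weight -= learning_rate * error * x   # update: reinforce or weaken the connection

print(round(weight, 2))  # settles near 2.0, the pattern hidden in the data
```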

What generative AI can create depends on the materials the AI has been trained on. A study at Stanford University in 2023 showed that one of the most popular training datasets already contained more than 1,000 instances of CSAM. The links to CSAM have since been removed from the dataset, but the researchers have emphasized that another threat is CSAM made by combining images of children with pornographic images, which is possible if both are in the training data.

Google and OpenAI claim to have safeguards in place to protect against the creation of CSAM: for instance, by taking care with the data they use to train their AI platforms. (It’s also worth noting that many adult film actors and sex workers have had their images scraped for AI without their consent.)

Generative AI itself, says LaVictoire, has no way of distinguishing between innocuous and silly commands such as “make an image of a Jedi samurai” and harmful commands, such as “undress this celebrity”. So another safeguard incorporates a different kind of AI that acts similarly to a spam filter, which can block those queries from being answered. xAI, which runs Grok, seems to have been careless with that filter.
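As a rough illustration of that kind of filter, here is a minimal, hypothetical sketch in Python. Real safeguards use trained safety classifiers rather than keyword lists, and every name below is invented for the example; the point is only the structure, a separate check that sits between the prompt and the generator and can refuse to pass the request along.

```python
# Minimal, hypothetical sketch of a prompt filter in front of an image generator.
# Real safeguards use trained safety classifiers, not keyword lists; all names
# here are invented for illustration.

BLOCKED_PHRASES = ["undress", "nude image of", "remove the clothes from"]

def looks_harmful(prompt: str) -> bool:
    """Stand-in for a safety classifier that scores an incoming prompt."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

def generate_image(prompt: str) -> str:
    """Placeholder for the actual image generator."""
    return f"<image for: {prompt}>"

def handle_request(prompt: str) -> str:
    if looks_harmful(prompt):
        return "Sorry, this request can't be completed."  # the filter refuses
    return generate_image(prompt)                          # otherwise, generate

print(handle_request("make an image of a Jedi samurai"))   # passes the filter
print(handle_request("Undress this celebrity"))            # blocked
```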

And the worst may be yet to come: Meta and other companies have proposed that future AI models be open source. “Open source” means anyone can access the code behind a model, download it and edit it as they please. What is usually wonderful about open-source software – the freedom it gives users to create new things, prioritizing creativity and collaboration over profit – could be a disaster for children’s safety.

Once someone downloaded an open-source AI platform and made it their own, there would be no safeguards, no AI bot saying that it couldn’t help with their request. Anyone could “fine-tune” their own personal image generator using explicit or illegal images, and make their own infinite CSAM and “revenge porn” generator.

Meta seems to have stepped back from making its newer AI platforms open source. Perhaps Mark Zuckerberg remembered that he wants to be like the Roman emperor Augustus, and that if he continued down this path, he might be remembered more as the Oppenheimer of CSAM.

Some countries are already fighting against this. China was the first to enact a law that requires AI content to be labelled as such. Denmark is working on legislation that would give citizens the copyright to their appearances and voices, and would impose fines on AI platforms that don’t respect that. In other parts of Europe, and in the UK, people’s images may be protected by the General Data Protection Regulation (GDPR).

The outlook in the United States seems much grimmer. Copyright claims aren’t going to help, because when a user uploads an image to a platform, the platform can use it however it sees fit; that’s in nearly every Terms of Service agreement. With executive orders against the regulation of generative AI and companies such as xAI signing contracts with the US military, the US government has shown that making money with AI is far more important than keeping citizens safe.

There has been some recent legislation “that makes a lot of this digital manipulation criminal”, says Akiva Cohen, a New York City litigator. “But also, a lot of those statutes are probably overly restrictive in what exactly they cover.”

For example, while making a deepfake of someone that makes them appear nude or engaged in a sexual act could be grounds for criminal charges, using AI to put a woman – and likely even an underage girl – into a bikini probably would not.

“A lot of this very consciously stays just on the ‘horrific, but legal’ side of the line,” says Cohen.

Maybe it’s not criminal – that is to say, a crime against the state – but Cohen argues it could be a civil liability, a violation of another person’s rights for which a victim can seek restitution. He suggests that this falls under a “false light, invasion of privacy” tort, a civil wrong in which offensive claims are made about a person that show them in a false light, “depicting someone in a way that shows them doing something they didn’t do”.

“The way that you can really deter this type of conduct is by imposing liability on the companies that are enabling this,” Cohen says.

There’s legal precedent for that: the RAISE Act in New York and Senate Bill 53 in California say that AI companies can be held accountable for harms they cause past a certain point. X, meanwhile, will now block Grok from making sexualized images of real people on the platform. But it appears that policy change doesn’t apply to the stand-alone Grok app.

But Josh Saviano, a former practicing attorney in New York, as well as a former child actor, believes more immediate actions need to be taken, in addition to legislation.

“Lobbying efforts and our courts are eventually going to be the way that this is handled,” says Saviano. “But until that happens, there are two options: abstain entirely, which means take your entire digital footprint off the internet … or you need to find a technological solution.”

Ensuring the safety of young people is of paramount importance to Saviano, who knows people who’ve had deepfakes made of them, and – as a former child actor – knows a little about losing control of one’s own narrative. Saviano and his team have been working on a tool that could detect and notify people when their images or creative work are being scraped. The team’s motto, he says, is: “Protect the babies.”

Regardless of how it may happen, I believe that protection against this threat is going to take a lot of effort from the public.

There are many who are starting to feel an affinity with their AI chatbots, but for most people, tech companies are nothing more than utilities. We may prefer one app over another for personal or political reasons, but few feel strong loyalty to tech brands. Tech companies, and especially social media platforms like Meta and X, would do well to remember that they are a means to an end. And if someone like me – who was on Twitter all day, every day, for more than a decade – can quit it, anyone can.

But boycotts aren’t enough. We need to be the ones demanding that companies that allow the creation of CSAM be held accountable. We need to demand legislation and technological safeguards. We also need to examine our own actions: nobody wants to think that if they share photos of their child, those images could end up in CSAM. But it is a risk, one that parents need to protect their young children from, and warn their older children about.

If our obsession with Stranger Danger showed anything, it’s that most of us want to prevent child endangerment and harassment. It’s time to prove it.