Child and consumer advocates hoping Gov. Gavin Newsom would sign legislation intended to protect kids from harmful manipulation by artificial-intelligence chatbots were disappointed, but many said they see in his signing of a separate AI safety bill a possible path to enacting a form of their proposal.
In issuing his Oct. 13 veto of that chatbot legislation — Assembly Bill 1064 — Newsom said he would work with the state Legislature next year on a new bill that would ensure kids could use AI in a “safe and age-appropriate” manner that would be in their “best interests.”
“To me, that was the glimmer of hope,” said Sacha Haworth, executive director of both The Tech Oversight Project, a national watchdog and advocacy group that backed AB 1064, and its California arm.
“To me that was a signal [Newsom] is willing to work with us … and, notably, with industry to say, ‘let’s work together to get something that I can sign.’”
The regulation of AI is a key concern for San Francisco, because The City has become ground zero for the budding industry, home to the two best-funded startups in the sector — OpenAI and Anthropic — and numerous others.
But AI regulation is also a big concern for citizens of the city, state and nation. A national Gallup poll last month found overwhelming support for AI safety regulations and independent safety-testing of AI models. A series of stories in recent months about AI chatbots allegedly encouraging a raft of harmful behaviors — including suicide and murder — has also raised alarms about the dangers posed by the technology, particularly to children.
AB 1064 would have barred developers of such chatbots from allowing kids to use them unless the technology wasn’t “foreseeably capable” of harmful behavior — including encouraging kids to commit self-harm, harm others or undertake illegal activity. The bill would have allowed the attorney general to sue companies that violated its terms. It also would have permitted kids or their parents to sue companies if children were harmed by chatbots that weren’t compliant with the law.
Child- and consumer-advocacy groups strongly backed AB 1064, likening it to other steps policymakers have taken to protect kids, such as setting standards for playground equipment and requiring childproof caps on medicine bottles. The Legislature passed it on the heels of the reports about Adam Raine’s death and those of others allegedly influenced by AI chatbots — incidents the bill itself cited in explaining why it was needed.
“There are children that have died at this point in the real world as a result of these products,” said Adam Billen, vice president for public policy at Encode AI, an advocacy group focused on promoting AI regulations.
“Maybe AB 1064 could have saved kids today that are dead,” Billen said.
Tech-industry lobbyists say they agree on the need for safeguards for children. But they fiercely opposed AB 1064, arguing it was overly broad and contained poorly defined, undefined or problematic terms.
One provision, for example, would have barred developers from making chatbots available to kids if it were foreseeable that the systems might validate kids’ beliefs or desires instead of prioritizing facts or children’s safety. But the bill didn’t define “safety” in that context, and people in the state hold many different beliefs, noted Aodhen Downey, west-region policy manager for the Computer & Communications Industry Association, a trade group that represents Amazon, Apple, Meta and Google, among other tech companies.
Many chatbot developers, fearing they would run afoul of such provisions, would have stopped offering their technology in California under AB 1064, Downey said.
CCIA was “very, very happy to see that bill vetoed,” he said.
“We’ll never not oppose a ban on a service,” he said.
In his veto message, Newsom repeated some of the tech industry’s concerns, warning that the bill could lead to a “total ban” on chatbots for kids. With AI already playing a key role in society, kids need to learn how to interact with it safely, he said.
“We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether,” Newsom said.
But the governor acknowledged that interactions with chatbots can be disturbing and dangerous. And he suggested he believes government has a role to play in ensuring that chatbots act responsibly and take users’ well-being into account.
Newsom noted that he signed Senate Bill 243 this year to address some of those concerns. That bill also targets chatbots, requiring their operators to establish protocols for preventing the systems from generating messages encouraging self-harm or suicidal ideation.
Additionally, it requires chatbots to alert kids that the systems aren’t human and to prompt them to take breaks after three hours of continuous use.
The tech industry opposed SB 243, though not as adamantly as it did AB 1064.
Meanwhile, many supporters of AB 1064 initially backed SB 243 as well. But they withdrew their support after the bill was revised during the legislative process, concluding it would no longer do much good and could compete with AB 1064 for the governor’s signature.
The bill requires operators to take only “reasonable” measures to prevent chatbots from generating sexually explicit material for kids, but it doesn’t define what counts as reasonable, said Danny Weiss, the chief advocacy officer at Common Sense Media, which advocates for protections around kids’ use of technology and media. How the term should be defined will likely play out in the courts, he said.
What’s more, the bill only requires companies to take extra steps to protect children if they know a user is a child. But many tech companies allow users to state their own age and typically do little to actually verify it, Weiss said. Although many use other signals — such as the types of groups or webpages users interact with — to infer their ages, they can claim they don’t know for certain whether a user is a child, he said.
“Everyone knows” that standard of “actual knowledge” that a user is a child “has allowed companies to evade responsibility,” Weiss said.
Even Newsom appeared to acknowledge that SB 243 doesn’t do enough to protect kids from chatbots encouraging self-harm or other dangers. In his veto message, he said he wanted to work with the Legislature to craft balanced legislation that builds on SB 243.
“The types of interactions [AB 1064] seeks to address are abhorrent, and I am fully committed to finding the right approach to protect children from these harms in a manner that does not effectively ban the use of the technology altogether,” he said.
And that’s where SB 53 — the AI safety bill Newsom signed — might prove a useful guide. Last year, state Sen. Scott Wiener fought a bruising battle to win passage in the Legislature of SB 1047. That bill would have required developers of cutting-edge AI models to put in place safety protocols designed to prevent the models from causing or leading to catastrophes, such as mass-casualty events or the development of nuclear, chemical or biological weapons.
It also would have required developers of such models to test them before releasing them to the public, and it would have allowed the attorney general to sue companies whose violation of the law led to a mass-casualty event or the imminent danger of one.
When Newsom vetoed SB 1047, he argued that it would cover models that posed little danger while leaving out smaller, less powerful models that were actually dangerous. But as with his response to AB 1064, he acknowledged in his veto message that AI does pose risks and government does have a responsibility to protect people from them.
On the same day he vetoed the bill, Newsom set up a commission to look into how the state should approach AI safety issues. Wiener — who had vowed to continue the fight for AI safety in the wake of the veto — took the commission’s recommendations and ran with them, crafting a kind of successor in SB 53.
Like SB 1047, SB 53 focuses on catastrophic risk and the developers of cutting-edge AI models. But instead of requiring safety testing and protocols and imposing liability for harm, it focuses on transparency. It mandates that model developers regularly disclose to the public and regulators what safety testing and protocols they have in place for assessing and dealing with catastrophic risks.
Those changes were enough to win Newsom’s approval.
“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive,” he said in a press release announcing his approval of the bill. “This legislation strikes that balance.”
Although they said the bill doesn’t go as far as they would like, consumer-advocacy groups cheered Newsom’s signing of SB 53 — and saw a potential path forward for a bill to protect kids from chatbots.
“We take the governor at his word that he wants to work on this,” Weiss said.
“We are prepared to get to work immediately on something the governor would support and that would be impactful,” he said.
It’s unclear exactly what such a bill would look like. The consumer advocates and tech lobbyists pointed to few obvious areas of agreement or compromise, although Haworth suggested the legislation could be narrowed to focus on the biggest tech companies — those with the most users or the largest market value. Those companies pose the greatest danger because of their reach, she said.
“That’s potentially one area we could look at,” she said.
Regardless, AB 1064 author Assemblymember Rebecca Bauer-Kahan — and Haworth and the other advocates who backed the bill — vowed to fight on.
“We’re sorely disappointed that comprehensive protections for California’s children remain incomplete,” Bauer-Kahan said in a press release in response to the governor’s veto. “As children move from social media to AI, we must ensure AI is safe for our kids and not a suicide coach that can kill them.”