California may soon lead the nation in regulating powerful AI models. The Legislature passed SB 53, the Transparency in Frontier Artificial Intelligence Act, which would require “frontier” AI developers to publish safety protocols, report critical incidents to the state and uphold whistleblower protections. The bill now sits on Governor Gavin Newsom’s desk, and whether he signs it could determine whether California becomes the first state to impose legally binding guardrails on frontier AI developers.
Newsom has publicly suggested support for AI oversight legislation. According to Politico, he told the Clinton Global Initiative, “We have a bill that’s on my desk that we think strikes the right balance … We worked with industry, but we didn’t submit to industry. We’re not doing things to them, but we’re not necessarily doing things for them.” He did not name the measure, but observers widely believe he was referring to SB 53. Politico reported that his remarks followed months of AI policy maneuvering, and state senators told reporters they hope SB 53 is the bill he meant. Bloomberg Law reported that Newsom has endorsed stronger oversight of the AI sector and highlighted California’s role in setting early guardrails.
What SB 53 Would Require
Under SB 53, large frontier AI developers would be required to publish safety protocols explaining how they test for catastrophic risks and apply mitigations. Developers would also have to publish transparency reports for new or modified models, summarizing risk assessments, dangerous capability thresholds, intended uses and safeguards.
Critical safety incidents would have to be reported to California’s Office of Emergency Services within 15 days, or within 24 hours if an imminent threat exists. The bill defines a catastrophic incident as one causing at least 50 deaths or $1 billion in damage.
The bill would also protect whistleblowers, requiring anonymous reporting channels and barring retaliation. Starting in 2030, annual independent audits would become mandatory, with summaries sent to the Attorney General. Penalties would escalate to as much as $10 million for repeated catastrophic-risk violations.
Rooted in the Governor’s Expert Commission
SB 53 is closely aligned with the California Report on Frontier AI Policy, released June 17 by an expert working group convened by Newsom. That report prioritized evidence-based oversight, standardized disclosures, incident tracking and protections for insiders over rigid micromanagement. It also advocated for adaptive thresholds and infrastructure support, core ideas that are echoed in SB 53. “The final version of SB 53 will ensure California continues to lead not only on AI innovation, but on responsible practices to help ensure that innovation is safe and secure,” said Senator Scott Wiener, SB 53’s sponsor.
AI company Anthropic has publicly endorsed SB 53, saying it provides clearer expectations and establishes guardrails without imposing rigid mandates. Its support reinforces the perception that the bill strikes a nuanced balance between regulation and innovation.
The Stakes for California and AI Policy
California’s push comes amid growing uncertainty at the federal level. PYMNTS recently reported that Colorado has delayed implementation of its AI law until June 2026, giving businesses more time to adjust. PYMNTS has also tracked proposals for AI regulatory sandboxes, frameworks that let firms test new systems under limited oversight.
California’s AB 2013 takes effect January 1, 2026. The law requires generative AI developers to post summaries of their training data, including sources and collection timelines, creating a baseline of transparency alongside SB 53’s frontier model rules. If Newsom signs SB 53, California would become the first U.S. jurisdiction to impose binding risk rules on frontier AI developers.