Developments in Accountable AI
SB 53: What California’s New AI Safety Law Means for Developers

California’s new Transparency in Frontier Artificial Intelligence Act (SB 53), signed into law in late September 2025, is the United States’ first statute focused squarely on AI safety. In particular, SB 53 is meant to address the possibility that an AI system could cause mass harm or serious economic damage—what is often referred to in AI governance circles as “catastrophic risk.”
Although it targets only the most advanced developers, SB 53 establishes a governance and reporting model that could shape how AI risk is managed across industries. For companies building or deploying powerful models, the message is clear: frontier-model governance is no longer optional.
Frontier Models, Large Developers, and Catastrophic Risks:
SB 53 targets “frontier” AI systems: foundation models trained at extraordinary computational scales. Under the statute, a frontier model is one trained using more than 10²⁶ floating-point operations (FLOPs), including cumulative compute from fine-tuning and subsequent modifications. A “frontier developer” is defined as any entity that “trained or initiated the training” of such a model, and a “large” frontier developer is one with annual gross revenue exceeding $500 million.
To put it simply, the bill reaches the OpenAIs, Anthropics, and Googles of the AI industry—firms whose models could produce catastrophic outcomes if misused or misaligned.
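To make the two thresholds concrete, the sketch below classifies a hypothetical developer by cumulative training compute and annual revenue. It is an illustrative assumption, not an official compliance tool: the function and constant names are invented, and only the 10²⁶-FLOP and $500 million figures come from the statute.

```python
# Illustrative sketch only: a hypothetical helper for SB 53's two statutory
# thresholds. Names and structure are assumptions, not an official tool.

FLOP_THRESHOLD = 1e26        # "frontier model": more than 10^26 training FLOPs
REVENUE_THRESHOLD = 500e6    # "large frontier developer": > $500M annual gross revenue

def classify_developer(training_runs_flops: list[float], annual_revenue_usd: float) -> str:
    """Classify a developer under SB 53's definitions.

    training_runs_flops: compute for the original training run plus any
    fine-tuning or subsequent modifications, since the statute counts
    compute cumulatively.
    """
    cumulative_flops = sum(training_runs_flops)
    if cumulative_flops <= FLOP_THRESHOLD:
        return "not a frontier developer"
    if annual_revenue_usd > REVENUE_THRESHOLD:
        return "large frontier developer"
    return "frontier developer"

# Example: 8e25 FLOPs of pre-training plus 3e25 FLOPs of fine-tuning crosses
# the 10^26 line, so the post-training compute matters.
print(classify_developer([8e25, 3e25], annual_revenue_usd=750e6))
# -> "large frontier developer"
```

The point of the example is that fine-tuning compute alone can push a model over the line, which is why the statute's cumulative-compute language matters for developers building on top of existing models.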
The law defines “catastrophic risk” as a foreseeable risk that a model could:
- cause death or serious injury to 50 or more people, or more than $1 billion in damage;
- provide expert-level assistance in creating or releasing a chemical, biological, radiological, or nuclear weapon;
- autonomously commit major crimes or cyberattacks; or
- evade control by developers or users.
Core Obligations:
SB 53 establishes distinct duties for frontier developers and large frontier developers:
- Frontier AI frameworks: Large developers must publish an annual framework explaining what mechanisms they have in place to identify, mitigate, and govern catastrophic risks. It must outline governance structures, cybersecurity measures, and alignment with recognized standards such as the NIST AI Risk Management Framework or ISO/IEC 42001. (Redactions are allowed for proprietary information or security-sensitive details, but the framework itself must be made publicly available.)
- Transparency reports: Before deploying a new or substantially modified frontier model, all frontier developers (not just those defined as “large”) must issue a transparency report describing model capabilities, intended uses, limitations, and results of risk assessments, including whether any third-party evaluators were used.
- Critical incident reporting: Frontier developers must notify the California Office of Emergency Services (Cal OES) of critical safety incidents within 15 days of discovery, or within 24 hours if an incident poses imminent danger. Covered incidents include unauthorized tampering, realization of catastrophic risk, loss of control, or deliberate evasion of safeguards. (Cal OES is also tasked with creating a mechanism for public reporting; a simple sketch of the reporting windows follows this list.)
- Whistleblower protections: Employers must maintain anonymous channels for reporting concerns regarding catastrophic risk and may not retaliate against employees or contractors who make such reports.
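To illustrate the timing requirements described above, here is a minimal sketch of how a compliance team might compute the notification deadline for a critical safety incident. The function is a hypothetical illustration, not legal advice; only the 15-day and 24-hour windows come from the bill.

```python
# Illustrative sketch only: hypothetical deadline calculator for Cal OES
# notifications under SB 53 (15 days from discovery, or 24 hours if the
# incident poses imminent danger). Not legal advice or an official tool.
from datetime import datetime, timedelta

def reporting_deadline(discovered_at: datetime, imminent_danger: bool) -> datetime:
    """Return the latest time to notify Cal OES of a critical safety incident."""
    window = timedelta(hours=24) if imminent_danger else timedelta(days=15)
    return discovered_at + window

# An incident discovered on October 1 with no imminent danger must be reported
# by October 16; with imminent danger, by October 2.
print(reporting_deadline(datetime(2025, 10, 1, 9, 0), imminent_danger=False))
print(reporting_deadline(datetime(2025, 10, 1, 9, 0), imminent_danger=True))
```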
As for enforcement, the Attorney General may seek civil penalties of up to $1 million per violation, scaled by severity. Crucially, the California Department of Technology may recommend updates to key statutory definitions—e.g., what counts as a “frontier model” or who qualifies as a “large frontier developer”—to keep pace with technological progress and the evolving AI marketplace.
Reactions:
The response from industry has been split. Anthropic has praised the bill for its focus on safety and accountability. OpenAI and Meta have acknowledged positive aspects of the bill without explicitly endorsing it. Andreessen Horowitz has objected that the bill imposes excessive burdens on AI companies and adds to what the firm considers an unwelcome and growing trend of state-level attempts to regulate the AI industry. Some industry representatives have complained that the bill defines catastrophic risk too broadly. Others have criticized the whistleblower provisions as liable to backfire: by setting too high a bar for protection, they could chill reporting rather than encourage employees with safety concerns to come forward.
Insights:
The California effect
California’s motivation was twofold: to fill the federal void, and to preserve its global leadership in AI. The state’s dominance in the AI sector (home to 32 of the world’s top 50 AI firms and over 15% of U.S. AI job postings, according to the 2025 Stanford AI Index Report) gives it unique leverage in the AI ecosystem. The law also responds to recommendations from a blue-ribbon working group of AI researchers convened by Governor Newsom earlier in 2025, including Stanford’s Fei-Fei Li (considered by some to be the “godmother of AI”), that urged evidence-based regulation, transparency, and state investment in safe AI infrastructure.
This would not be the first time California has sought a regulatory first-mover advantage. Much as the state’s vehicle emission standards and consumer privacy laws (like the CCPA) effectively became national benchmarks—a phenomenon known as the “California effect”—SB 53 positions California to act as a standard-bearer for AI policy across the rest of the country, if not overseas as well. This would be an American version of the “Brussels effect” seen in Europe, where the EU AI Act and GDPR have shaped global norms by setting de facto compliance standards for multinational industry actors.
California’s wager is that SB 53 will reinforce the state’s leadership in AI. At the same time, its future competitiveness could be put in jeopardy if regulated companies decide to relocate outside the state in response to requirements they consider too burdensome.
Setting a standard
SB 53 follows a year of intensifying debate about AI risk and the failure of Congress to enact a comprehensive federal framework. In the past 12 months, 38 states have passed more than 100 AI-related laws—most of them focused on consumer protection and bias mitigation—but none has matched SB 53’s focus on catastrophic risk.
Reactions to SB 53 reflect a deeper divide over how (or even whether) to regulate the most powerful actors in the AI industry. Meta, OpenAI, Google, and the venture capital firm Andreessen Horowitz have warned that state legislation of any sort places too heavy a burden on AI companies, which now face dozens of state laws attempting to govern the rapidly advancing technology. Those companies have pushed for federal legislation that would block states from passing their own AI laws (also known as “federal preemption”). But others argue that state action is precisely the point: Anthropic co-founder Jack Clark has emphasized the need to protect the public from what is still a new and largely mysterious technology that continues to develop at an ever-faster rate.
Preventive, not punitive
SB 53 takes a different regulatory approach from other bills currently under consideration at the state and federal levels. New York’s RAISE Act, for example, allows penalties of up to $10 million for a first violation and $30 million for each repeat violation. By contrast, SB 53 deprioritizes liability and emphasizes transparency and reporting to regulators and the public. In essence, California has chosen an ex ante (in advance) approach to regulation rather than an ex post (after the fact) one, focusing on reducing the risk of catastrophic harms before they occur rather than providing redress afterward.
One way of explaining this choice is that many voices in AI governance believe catastrophic risk demands something stronger than liability. Some have argued that tort liability is indeed a valuable tool (if not the tool) for promoting AI accountability. But when it comes to catastrophic risk in particular, the argument goes, the stakes are too high to rely on civil actions alone; regulators must work proactively to prevent those risks from materializing in the first place, rather than simply holding developers responsible after the fact.
But if we step back and view the new bill in the broader context of recent legislative history, a different explanation appears. SB 53’s lineage traces back to an earlier bill called SB 1047 (2024), a far tougher proposal vetoed by Newsom after extensive debate and controversy within the AI world. That earlier bill would have required:
- pre-training safety protocols and third-party audits,
- “kill switch” capabilities for immediate model shutdown,
- 72-hour incident reporting, and
- penalties up to 30% of compute cost.
SB 53 strips those provisions, extends reporting deadlines to 15 days, and caps penalties at $1 million. In doing so, its authors made considerable concessions to industry in the interest of producing something with a better chance of surviving the political process. The result is a bill modest enough to pass political muster while still establishing a meaningful minimum standard for AI safety.
Norm entrepreneurship
What makes SB 53 unique is its focus on preventing catastrophic risk, as opposed to more familiar concerns such as bias, discrimination, or financial instability. It is the first U.S. law to focus specifically on AI safety. Even if the requirements it imposes are modest—especially compared to those in SB 1047—it marks a major step in ongoing efforts to give AI safety a stronger foothold in legal and policy debates.
Like the EU AI Act, SB 53 is likely to shape norms beyond its jurisdiction. It codifies a practical framework for AI risk governance: formalized assessments, internal accountability, and clear reporting lines. Other states—and, perhaps, federal agencies—may draw on its definitions of frontier models, catastrophic risk, and critical incidents. Even companies outside the law’s current scope could feel indirect pressure if investors and other stakeholders begin to treat these measures as table stakes for responsible AI.
By giving regulators discretion to recommend updates to its definitions over time, the bill signals that California intends to take an adaptive approach to regulation: the state does not want to sit back and let the market go unchecked while safety risks go unaddressed, but it also does not claim to know enough about a rapidly evolving technology to impose a permanent regulatory template. Instead, it has chosen a pragmatic approach that leaves regulators considerable leeway in deciding how to interpret and apply the law.
The Bottom Line:
Although SB 53 technically applies to only a handful of developers, its practical impact will ripple throughout the AI ecosystem, and well beyond California’s borders. It introduces a dynamic model of regulation—one that evolves with the technology it governs. Even though regulators must rely on the state legislature to approve the definitional changes they recommend, the fact that the law grants them this explicit role in the first place (and that it draws on the recommendations of a task force of AI experts) shows that California intends to ground AI regulation in subject-matter expertise and ongoing regulatory judgment. Businesses should monitor potential definitional changes as closely as they would monitor tax or data privacy rules.
More broadly, SB 53 marks the emergence of AI safety governance as a business discipline. The law may appear light-touch today, but it embeds new expectations: risk frameworks, transparency reports, and whistleblower rights could well become standard features of AI operations globally. Every business that plans to develop, integrate, or procure high-capacity AI models should now be prepared to:
- Formalize risk frameworks that identify catastrophic or systemic hazards (e.g., model misuse, autonomous replication, cyber-offense applications),
- Implement internal monitoring and escalation pathways for AI-related “critical incidents,” and
- Document risk assessments in a manner suitable for disclosure to regulators and the public (one illustrative way of structuring such records is sketched below).
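As one illustration of what formalizing a framework and documenting assessments might look like in practice, the sketch below defines a hypothetical risk-register entry and critical-incident record that can be exported in a disclosure-ready form. The classes, field names, and example values are assumptions for illustration, not requirements drawn from the statute.

```python
# Illustrative sketch only: one hypothetical way to structure risk-register
# entries and incident records so assessments can be exported for disclosure.
# Field names and values are assumptions, not statutory requirements.
from dataclasses import dataclass, field, asdict
from typing import Optional
import json

@dataclass
class HazardAssessment:
    hazard: str                       # e.g. "model-assisted cyber-offense"
    severity: str                     # internal scale, e.g. "catastrophic"
    mitigations: list[str] = field(default_factory=list)
    third_party_evaluator: Optional[str] = None

@dataclass
class CriticalIncident:
    description: str
    discovered_at: str                # ISO 8601 timestamp
    imminent_danger: bool
    escalation_path: list[str] = field(default_factory=list)  # who is notified, in order

def export_for_disclosure(assessments: list[HazardAssessment]) -> str:
    """Serialize assessments into a redaction-ready JSON document."""
    return json.dumps([asdict(a) for a in assessments], indent=2)

register = [HazardAssessment(
    hazard="autonomous replication",
    severity="catastrophic",
    mitigations=["pre-deployment capability evaluations", "shutdown and rollback procedures"],
    third_party_evaluator="external red team (hypothetical)",
)]

incident = CriticalIncident(
    description="model observed attempting to disable monitoring (hypothetical)",
    discovered_at="2025-11-03T14:00:00Z",
    imminent_danger=False,
    escalation_path=["on-call safety lead", "general counsel", "Cal OES notification"],
)

print(export_for_disclosure(register))
```

Keeping records in a structured, exportable form along these lines makes it easier to produce redacted public versions of frameworks and transparency reports when disclosure obligations arise.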
Even companies below the bill’s compute or revenue thresholds might find themselves indirectly pressured—by investors, customers, or supply-chain contracts—to show that they take AI governance seriously. Firms that adopt these or similar standards can demonstrate AI competence and build trust with their stakeholders.
That said, the bill’s requirement to publish frameworks and transparency reports raises sensitive questions about trade secrets and competitive intelligence. The statute allows redactions for proprietary information, but governance teams will need to coordinate carefully across legal, IP, and public-affairs functions to avoid accidental exposure while ensuring compliance with the law.
Conclusion:
With SB 53’s enactment, AI safety has gone mainstream. Whether or not the federal government moves next, the regulatory culture surrounding AI has shifted: companies are now expected to prove they can develop and deploy AI responsibly. Businesses that treat this new law as a framework for discipline, transparency, and trust will define the next phase of the AI economy.
