
What is California's SB 53 and how does it change AI transparency? Impact on EU AI Act compliance

The SB 53 law creates new transparency requirements for AI developers and triggers a regulatory domino effect in the US.

Trust This Team

Last updated: February 7, 2026
Why is California's SB 53 changing the AI transparency game?

On September 29, 2025, California's governor signed SB 53, a law that establishes a new transparency standard for major artificial intelligence developers. While the US Congress remains deadlocked over tech regulation, the states have taken the lead, and California, as usual, has become the blueprint that others follow.

SB 53 is not just another state law. It represents a turning point: for the first time, major AI companies must publish standardized safety reports, create protected channels for incident reporting, and demonstrate their governance structures.

For European companies that contract American AI solutions — whether for HR, customer service, data analysis, or automation — this means a new standard for due diligence and vendor comparison.

What changes with the SB 53 law for AI developers?

SB 53 establishes three fundamental pillars for "frontier AI" models (cutting-edge artificial intelligence, such as large language models):

Standardized safety and governance disclosures

Companies must publish structured reports on how they ensure system safety, which frameworks they follow, and how they assess risks. Having internal policies is no longer enough — they must be public and comparable.

Incident reporting channel with whistleblower protection

The law creates formal mechanisms for employees and third parties to report safety issues without risk of retaliation. This forces a culture of internal transparency that is also reflected externally.

New transparency baseline

Signed in September 2025, SB 53 is treated by legal experts and analysts as the new minimum transparency standard for big AI tech in the United States. Companies operating in California must comply — and those that don't are beginning to feel pressure from clients and partners to do the same.

How are New York and Colorado following California's path?

The trend of state AI regulation doesn't stop in California. New York and Colorado are rapidly advancing complementary initiatives, creating a regulatory mosaic that global companies need to track.

What is the focus of New York's RAISE Act?

The RAISE Act (S6953B) was approved by the New York Legislature in 2025 and awaits the governor's signature. The law focuses on:

  • Frontier models: Specific requirements for high-capacity AI systems
  • Safety reports: Mandatory public documentation of risk assessments
  • Incident notification: Defined timelines for communicating failures or breaches

New York historically follows California regulations closely in privacy and technology — and the RAISE Act confirms this trend.

What does the Colorado AI Act add to the regulatory landscape?

Colorado had already taken the first step with SB24-205 in 2024. Now, the state reinforces its requirements with adjustments through SB25B-004, also called the "AI Sunshine Act":

  • Expanded transparency: Requirement for publicly available algorithmic impact assessments
  • Disclosure timelines: Specific timelines for publishing model changes
  • Continuous evolution: Legislation is being improved with a focus on vendor comparability

Colorado positions itself as an AI policy laboratory, testing approaches that could inspire other states and even federal legislation.

Why does this matter for European companies contracting AI under the EU AI Act?

It may seem distant, but American state laws have a direct impact on European companies that use global AI vendors under the EU AI Act framework. Here are three concrete reasons:

How does the new due diligence standard affect procurement processes?

With SB 53 and similar laws, American AI vendors now have public evidence of their safety practices. This means that European Procurement and Legal teams now have:

  • More data to compare vendors side by side
  • Technical arguments to negotiate more robust contractual clauses
  • Public information to demand notification timelines in case of incidents

Previously, this information was obtained through long RFPs and often generic responses. Now, there's mandatory publication.

Which technical standards are becoming contractual requirements?

Compliance with international frameworks is moving from theoretical commitments to published statements on vendor websites:

  • NIST AI Risk Management Framework (RMF): the US government's framework for managing AI risks
  • ISO/IEC 42001: International AI management system standard
  • ISO/IEC 23894: AI risk management guidelines

Companies that previously said "we align with best practices" now must specify which ones. This allows European Governance teams to include these requirements in RFPs and RFIs objectively.

How does multi-jurisdictional pressure affect global contracts?

European companies already deal with:

  • EU AI Act (European AI regulation) for operations in Europe
  • GDPR (European regulation) for data protection
  • Local data protection laws in various EU member states

Now, add CA/NY/CO to the list. For multinational companies, or those using global tools, the regulatory risk matrix becomes more complex: contracts must cover multiple jurisdictions, and compliance clauses must be more specific.

Practical checklist: what to do this week in your company?

Don't wait for the next contract renewal to act. Here are three immediate actions that Procurement, Legal, and Security teams can execute:

1. Map your AI vendors and collect evidence

Create a spreadsheet with all vendors using AI in your company (ChatGPT, HR tools, analytics platforms, chatbots, etc.) and document:

  • AI framework link: Where does the vendor publish their safety practices?
  • Incident reporting mechanism: How and when do they notify incidents?
  • Security changelog: Is there a public history of security updates?

If this information isn't publicly available, include it as a question in the next commercial contact.
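The spreadsheet described above can also be kept as a lightweight data structure. A minimal sketch in Python follows; the field names (`framework_url`, `incident_channel`, `security_changelog`) are illustrative, not a standard schema:

```python
from dataclasses import dataclass

# Hypothetical record for the vendor-evidence inventory described above.
@dataclass
class VendorEvidence:
    name: str
    framework_url: str = ""        # where the vendor publishes safety practices
    incident_channel: str = ""     # how and when they notify incidents
    security_changelog: str = ""   # public history of security updates

    def missing_evidence(self) -> list[str]:
        """Return the evidence items still to request from the vendor."""
        gaps = []
        if not self.framework_url:
            gaps.append("AI framework link")
        if not self.incident_channel:
            gaps.append("incident reporting mechanism")
        if not self.security_changelog:
            gaps.append("security changelog")
        return gaps

# Example inventory with placeholder vendors and URLs.
vendors = [
    VendorEvidence("ExampleAI", framework_url="https://example.com/safety"),
    VendorEvidence("OtherVendor"),
]
for v in vendors:
    print(v.name, "->", v.missing_evidence() or "complete")
```

Anything reported by `missing_evidence` becomes a question for the next commercial contact with that vendor.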

2. Update contract templates with specific clauses

Review your contract models to include:

  • Obligation to maintain updated public reports on AI safety
  • SLA for notification in case of incidents or vulnerabilities
  • Declaration of adherence to NIST/ISO when applicable
  • Right to audit compliance based on declared standards

These clauses transform commercial promises into verifiable contractual obligations.

3. Monitor legislation and replicate requirements in procurement processes

Create a watchlist to track:

  • CA (SB 53): Complementary regulations and compliance deadlines
  • NY (RAISE Act): Governor's signature and implementation calendar
  • CO (AI Act/Sunshine): Updates and requirement expansion

Use this information to update RFIs and RFPs with specific questions about American state compliance when the vendor operates in those states.
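The watchlist above can feed RFP and RFI questions automatically. A small sketch, assuming a hand-maintained mapping of laws to watch items (the statuses below reflect the article's snapshot, not legal advice):

```python
# Illustrative watchlist of state AI laws; statuses and watch items mirror
# the checklist above and would need manual upkeep.
WATCHLIST = {
    "CA SB 53": {"status": "signed Sep 2025",
                 "watch": "complementary regulations and compliance deadlines"},
    "NY RAISE Act": {"status": "awaiting signature",
                     "watch": "governor's signature and implementation calendar"},
    "CO AI Act": {"status": "amended by SB25B-004",
                  "watch": "updates and requirement expansion"},
}

def rfp_questions(states_of_operation: set[str]) -> list[str]:
    """Generate RFP questions for laws in states where the vendor operates."""
    questions = []
    for law, info in WATCHLIST.items():
        state = law.split()[0]  # "CA", "NY", "CO"
        if state in states_of_operation:
            questions.append(f"How do you comply with {law} ({info['watch']})?")
    return questions

print(rfp_questions({"CA", "NY"}))
```

A vendor operating only in Colorado would then receive only the CO question, keeping questionnaires proportionate.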

How does TrustThis help companies navigate this new scenario?

Regulatory complexity in AI has increased sharply, and tracking it manually is infeasible at scale. TrustThis offers a structured solution for this challenge:

Automated tracking and versioning

The platform monitors policies, frameworks, security pages, and vendor disclosures, capturing changes over time. You don't need to check site by site manually.

Transparency score per vendor

Objective assessment based on criteria such as incident reporting presence, whistleblower protection, standard adherence, and document updates. Direct vendor comparison in minutes.
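To illustrate how a score over criteria like these could work, here is a minimal sketch; the criteria names and weights are assumptions for the example, not TrustThis's actual methodology:

```python
# Hypothetical weighted criteria; weights sum to 1.0.
CRITERIA = {
    "incident_reporting": 0.3,
    "whistleblower_protection": 0.2,
    "standard_adherence": 0.3,   # e.g. NIST AI RMF, ISO/IEC 42001
    "docs_up_to_date": 0.2,
}

def transparency_score(evidence: dict[str, bool]) -> float:
    """Weighted 0-100 score over boolean evidence checks."""
    return round(100 * sum(w for c, w in CRITERIA.items() if evidence.get(c)), 1)

print(transparency_score({"incident_reporting": True, "standard_adherence": True}))  # → 60.0
```

In practice each boolean would be backed by a captured piece of evidence (a URL, a document version) rather than asserted by hand.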

Change alerts and versioned history

Receive notifications when a vendor updates their policies or removes relevant information. Useful for contract renewals and compliance audits.

Evidence governance

Automatic organization of evidence aligned with the spirit of SB 53: verifiable, comparable, and auditable transparency.

What's the next step for your company?

SB 53 and the American state regulatory movement are not just legal matters — they are opportunities to elevate AI governance levels in your company. The sooner you structure evidence-based due diligence processes, the better prepared you'll be to negotiate contracts, mitigate risks, and meet audits.

Request a TrustThis scan and receive a complete dossier with links, evidence, and comparison ready to present to Procurement and Legal. AI transparency is no longer optional — it's time to make it measurable.

Sources consulted:

  • Fortune — California governor signs landmark AI safety law SB 53 (Sep 30, 2025)
  • Office of the Governor of California — signing and key points of SB 53 (Sep 29, 2025)
  • IAPP — SB 53 analysis and disclosure landscape (Oct 1, 2025)
  • Morrison & Foerster (MoFo) — transparency baseline for big AI (Oct 1, 2025)
  • New York RAISE Act (S6953B) — legislative advancement (2025)
  • Colorado AI Act and adjustments (SB24-205, SB25B-004) — transparency requirements (2024–2025)
#eu-ai-act#ai-transparency#ai-regulation#california-sb-53#frontier-ai
