The SB 53 law creates new transparency requirements for AI developers and triggers a regulatory domino effect in the US.
TrustThis Team

On September 29, 2025, California Governor Gavin Newsom signed SB 53, a law that establishes a new transparency standard for major artificial intelligence developers. While Congress remains deadlocked in debates about tech regulation, the states have taken the lead, and California, as so often, has become the blueprint that others follow.
SB 53 is not just another state law. It marks a turning point: for the first time, major AI companies must publish standardized safety reports, maintain protected channels for incident reporting, and disclose their governance structures.
For European companies that contract American AI solutions — whether for HR, customer service, data analysis, or automation — this means a new standard for due diligence and vendor comparison.
SB 53 establishes three fundamental pillars for "frontier AI" models (cutting-edge artificial intelligence, such as large language models):
Companies must publish structured reports on how they ensure system safety, which frameworks they follow, and how they assess risks. Having internal policies is no longer enough — they must be public and comparable.
The law creates formal mechanisms for employees and third parties to report safety issues without fear of retaliation, forcing an internal culture of transparency that is reflected externally. Finally, developers must disclose the governance structures through which they assign responsibility for assessing and managing AI risk.
Signed in September 2025, SB 53 is treated by legal experts and analysts as the new minimum transparency standard for major AI companies in the United States. Companies operating in California must comply, and vendors outside its reach are beginning to feel pressure from clients and partners to follow suit.
The trend of state AI regulation doesn't stop in California. New York and Colorado are rapidly advancing complementary initiatives, creating a regulatory mosaic that global companies need to track.
The RAISE Act (S6953B) was approved by the New York Legislature in 2025 and awaits the governor's signature. The bill focuses on transparency and safety obligations for frontier AI developers, along the same lines as SB 53.
New York historically follows California regulations closely in privacy and technology — and the RAISE Act confirms this trend.
Colorado had already taken the first step with SB24-205 in 2024. Now the state is adjusting and reinforcing those requirements through SB25B-004, also called the "AI Sunshine Act".
Colorado positions itself as an AI policy laboratory, testing approaches that could inspire other states and even federal legislation.
It may seem distant, but American state laws have direct impact on European companies using global AI vendors under the EU AI Act framework. Here are three concrete reasons:
With SB 53 and similar laws, American AI vendors now have public evidence of their safety practices. This means that European Procurement and Legal teams finally have standardized, published documentation to work from.
Previously, this information was obtained through long RFPs and often generic responses. Now, there's mandatory publication.
Compliance with international frameworks moves from theoretical claims to statements published on vendor websites.
Companies that previously said "we align with best practices" now must specify which ones. This allows European Governance teams to include these requirements in RFPs and RFIs objectively.
European companies already deal with EU-level obligations such as the EU AI Act and GDPR.
Now, add CA/NY/CO to the list. For multinational companies or those using global tools, the regulatory risk matrix becomes more complex. Contracts must contemplate multiple jurisdictions and compliance clauses must be more specific.
Don't wait for the next contract renewal to act. Here are three immediate actions that Procurement, Legal, and Security teams can execute:
Create a spreadsheet of every vendor using AI in your company (ChatGPT, HR tools, analytics platforms, chatbots, etc.) and document, for each one, which safety frameworks it follows, whether it publishes transparency reports, and in which US states it operates.
If this information isn't publicly available, include it as a question in the next commercial contact.
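The inventory step above can be sketched in a few lines of Python. Everything here is illustrative: the vendor names, the column choices, and the `missing_evidence` rule are assumptions for the sketch, not requirements of SB 53; adapt the fields to your own due-diligence checklist.

```python
import csv
import io

# Hypothetical inventory: one row per vendor using AI in the company.
INVENTORY = [
    {"vendor": "ExampleChat Inc.", "use_case": "customer service chatbot",
     "framework": "NIST AI RMF", "transparency_report": "yes",
     "jurisdictions": "CA;NY"},
    {"vendor": "PeopleMetrics Ltd.", "use_case": "HR analytics",
     "framework": "", "transparency_report": "no",
     "jurisdictions": "CO"},
]

def missing_evidence(rows):
    """List vendors with no published framework or transparency report,
    i.e. the ones to question in the next commercial contact."""
    return [r["vendor"] for r in rows
            if not r["framework"] or r["transparency_report"] != "yes"]

def to_csv(rows):
    """Serialize the inventory to CSV for a shared spreadsheet."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(missing_evidence(INVENTORY))  # vendors to follow up with
```

Keeping the inventory as structured data rather than a free-form document makes it easy to re-check the same gaps at every contract renewal.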
Review your contract templates to include clauses covering transparency reporting, incident notification, and compliance with the state laws that apply to the vendor.
These clauses transform commercial promises into verifiable contractual obligations.
Create a watchlist to track the enforcement of SB 53, the progress of the RAISE Act and Colorado's SB25B-004, similar bills in other states, and your vendors' public disclosures.
Use this information to update RFIs and RFPs with specific questions about American state compliance when the vendor operates in those states.
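The vendor-disclosure half of that watchlist can be sketched as naive change detection. This assumes you save a text snapshot of each vendor's policy page at every check; real monitoring would fetch the live pages, and the vendor names and texts below are illustrative.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Stable fingerprint of a policy snapshot."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def changed_vendors(previous: dict, current: dict) -> list:
    """Compare two snapshot rounds and list vendors whose policy text
    changed or disappeared; both are signals worth reviewing before a
    contract renewal or audit."""
    flagged = []
    for vendor, old_text in previous.items():
        new_text = current.get(vendor)
        if new_text is None or fingerprint(new_text) != fingerprint(old_text):
            flagged.append(vendor)
    return sorted(flagged)

# Illustrative snapshots from two consecutive checks.
previous = {"ExampleChat Inc.": "Safety policy v1",
            "PeopleMetrics Ltd.": "AI governance statement"}
current = {"ExampleChat Inc.": "Safety policy v2",
           "PeopleMetrics Ltd.": "AI governance statement"}

print(changed_vendors(previous, current))  # prints ['ExampleChat Inc.']
```

Even this crude approach catches the two events the article warns about: a vendor quietly rewriting its safety page, and a vendor removing a disclosure altogether.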
Regulatory complexity in AI has grown exponentially, and tracking it manually is infeasible at scale. TrustThis offers a structured solution to this challenge:
The platform monitors policies, frameworks, security pages, and vendor disclosures, capturing changes over time. You don't need to check site by site manually.
Objective assessment based on criteria such as the presence of incident reporting, whistleblower protection, standards adherence, and how recently documents were updated. Compare vendors directly in minutes.
Receive notifications when a vendor updates their policies or removes relevant information. Useful for contract renewals and compliance audits.
Automatic organization of evidence aligned with the spirit of SB 53: verifiable, comparable, and auditable transparency.
SB 53 and the American state regulatory movement are not just legal matters — they are opportunities to elevate AI governance levels in your company. The sooner you structure evidence-based due diligence processes, the better prepared you'll be to negotiate contracts, mitigate risks, and meet audits.
Request a TrustThis scan and receive a complete dossier with links, evidence, and a comparison ready to present to Procurement and Legal. AI transparency is no longer optional; it's time to make it measurable.