California State Senator Scott Wiener on Wednesday introduced new amendments to his latest bill, SB 53, which would require the world's largest AI companies to publish safety and security protocols and issue reports when safety incidents occur.
If signed into law, California would be the first state to impose meaningful transparency requirements on leading AI developers, likely including OpenAI, Google, Anthropic, and xAI.
Senator Wiener's previous AI bill, SB 1047, included similar requirements for AI model developers to publish safety reports. However, Silicon Valley fought ferociously against that bill, and it was ultimately vetoed by Governor Gavin Newsom. California's governor then called for a group of AI leaders, including the leading Stanford researcher and World Labs co-founder Fei-Fei Li, to form a policy group and set goals for the state's AI safety efforts.
California's AI policy group recently published its final recommendations, citing a need for "requirements on industry to publish information about their systems" in order to establish a "robust and transparent evidence environment." Senator Wiener's office said in a press release that SB 53's amendments were heavily influenced by this report.
"The bill continues to be a work in progress, and I look forward to working with all stakeholders in the coming weeks to refine this proposal into the most scientific and fair law it can be," Senator Wiener said in the release.
SB 53 aims to strike a balance that Governor Newsom claimed SB 1047 failed to achieve: ideally, creating meaningful transparency requirements for the largest AI developers without thwarting the rapid growth of California's AI industry.
"These are concerns that my organization and others have been talking about for a while," said Nathan Calvin, VP of State Affairs for the nonprofit AI safety group Encode, in an interview with TechCrunch. "Having companies explain to the public and government what measures they're taking to address these risks feels like a bare minimum, reasonable step to take."
The bill also creates whistleblower protections for employees of AI labs who believe their company's technology poses a "critical risk" to society, defined in the bill as contributing to the death or injury of more than 100 people, or more than $1 billion in damage.
Additionally, the bill aims to create CalCompute, a public cloud computing cluster to support startups and researchers developing large-scale AI.
With the new amendments, SB 53 now heads to the California State Assembly Committee on Privacy and Consumer Protection for approval. Should it pass there, the bill will also need to pass through several other legislative bodies before reaching Governor Newsom's desk.
On the other side of the U.S., New York Governor Kathy Hochul is now considering a similar AI safety bill, the RAISE Act, which would also require large AI developers to publish safety and security reports.
The fate of state AI laws like the RAISE Act and SB 53 was briefly in jeopardy as federal lawmakers considered a 10-year moratorium on state AI regulation, an effort to limit the "patchwork" of AI laws that companies would have to navigate. However, that proposal failed in a 99-1 Senate vote earlier in July.
"Ensuring AI is developed safely should not be controversial; it should be foundational," said Geoff Ralston, the former president of Y Combinator, in a statement to TechCrunch. "Congress should be leading, demanding transparency and accountability from the companies building frontier models. But with no serious federal action in sight, states must step up. California's SB 53 is a thoughtful, well-structured example of state leadership."
Up to this point, lawmakers have failed to get AI companies on board with state-mandated transparency requirements. Anthropic has broadly endorsed the need for increased transparency into AI companies, and even expressed modest optimism about the recommendations from California's AI policy group. But companies such as OpenAI, Google, and Meta have been more resistant to these efforts.
Leading AI model developers typically publish safety reports for their AI models, but they've been less consistent in recent months. Google, for example, decided not to publish a safety report for its most advanced AI model ever released, Gemini 2.5 Pro, until months after it was made available. OpenAI also decided not to publish a safety report for its GPT-4.1 model. Later, a third-party study came out suggesting it may be less aligned than previous AI models.
SB 53 represents a toned-down version of previous AI safety bills, but it could still force AI companies to publish more information than they do today. For now, they'll be watching closely as Senator Wiener once again tests those boundaries.