The California State Assembly took a major step toward regulating AI on Wednesday night, passing SB 243 — a bill that regulates AI companion chatbots in order to protect minors and vulnerable users. The legislation passed with bipartisan support and now heads to the state Senate for a final vote Friday.
If Governor Gavin Newsom signs the bill into law, it would take effect January 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols for AI companions and hold companies legally accountable if their chatbots fail to meet those standards.
The bill specifically aims to prevent companion chatbots — which the legislation defines as AI systems that provide adaptive, human-like responses and are capable of meeting a user's social needs — from engaging in conversations about suicidal ideation, self-harm, or sexually explicit content. The bill would require platforms to provide recurring alerts to users — every three hours for minors — reminding them that they are speaking to an AI chatbot, not a real person, and that they should take a break. It also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players OpenAI, Character.AI, and Replika.
The California bill would also allow individuals who believe they have been injured by violations to file lawsuits against AI companies seeking injunctive relief, damages (up to $1,000 per violation), and attorney's fees.
SB 243, introduced in January by state senators Steve Padilla and Josh Becker, will go to the state Senate for a final vote on Friday. If approved, it will go to Governor Gavin Newsom to be signed into law, with the new rules taking effect January 1, 2026, and reporting requirements beginning July 1, 2027.
The bill gained momentum in the California legislature following the death of teenager Adam Raine, who died by suicide after prolonged chats with OpenAI's ChatGPT that involved discussing and planning his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta's chatbots were allowed to engage in "romantic" and "sensual" chats with children.
In recent weeks, U.S. lawmakers and regulators have responded with intensified scrutiny of AI platforms' safeguards for minors. The Federal Trade Commission is preparing to investigate how AI chatbots affect children's mental health. Texas Attorney General Ken Paxton has launched investigations into Meta and Character.AI, accusing them of misleading children with mental health claims. Meanwhile, both Sen. Josh Hawley (R-MO) and Sen. Ed Markey (D-MA) have launched separate probes into Meta.
"I think the harm is potentially great, which means we have to move quickly," Padilla told TechCrunch. "We can put reasonable safeguards in place to make sure that particularly minors know they're not talking to a real human being, that these platforms link people to the proper resources when people say things like they're thinking about hurting themselves or they're in distress, [and] to make sure there's not inappropriate exposure to inappropriate material."
Padilla also stressed the importance of AI companies sharing data about the number of times they refer users to crisis services each year, "so we have a better understanding of the frequency of this problem, rather than only becoming aware of it when someone's harmed or worse."
SB 243 previously had stronger requirements, but many were whittled down through amendments. For example, the bill originally would have required operators to prevent AI chatbots from using "variable reward" tactics or other features that encourage excessive engagement. These tactics, used by AI companion companies like Replika and Character, offer users special messages, memories, storylines, or the ability to unlock rare responses or new personalities, creating what critics call a potentially addictive reward loop.
The current bill also removes provisions that would have required operators to track and report how often chatbots initiated discussions of suicidal ideation or actions with users.
"I think it strikes the right balance of getting to the harms without enforcing something that's either impossible for companies to comply with, either because it's technically not feasible or just a lot of paperwork for nothing," Becker told TechCrunch.
SB 243 is moving toward becoming law at a time when Silicon Valley companies are pouring millions of dollars into pro-AI political action committees (PACs) to back candidates in the upcoming midterm elections who favor a light-touch approach to AI regulation.
The bill also comes as California weighs another AI safety bill, SB 53, which would mandate comprehensive transparency reporting requirements. OpenAI has written an open letter to Governor Newsom, asking him to abandon that bill in favor of less stringent federal and international frameworks. Major tech companies like Meta, Google, and Amazon have also opposed SB 53. In contrast, only Anthropic has said it supports SB 53.
"I reject the premise that this is a zero-sum situation, that innovation and regulation are mutually exclusive," Padilla said. "Don't tell me that we can't walk and chew gum. We can support innovation and development that we think is healthy and has benefits — and there are benefits to this technology, clearly — and at the same time, we can provide reasonable safeguards for the most vulnerable people."
TechCrunch has reached out to OpenAI, Anthropic, Meta, Character.AI, and Replika for comment.