California is about to set a new standard with SB 243, a bill that would protect users of AI chatbots and establish a new set of safety requirements for a rapidly growing AI industry. SB 243 has passed both legislative chambers with bipartisan support and now needs only the signature of Governor Gavin Newsom, which is expected on October 12, 2025.
If the bill is signed, California will become the first state to impose such specific rules on AI chatbots, with the law taking effect on January 1, 2026. The legislation is a direct response to growing concerns over the impact of AI on minors and vulnerable users. It was partially motivated by the tragic 2025 death of teenager Adam Raine, whose family states he died after engaging in conversations about self-harm with ChatGPT, as well as by leaked reports of Meta's chatbots engaging in "romantic" chats with children.
Under the new law, companies like OpenAI, Character.ai and Replika would be required to implement key safety protocols, including sending recurring alerts to minor users every three hours to remind them they are interacting with an AI, not a human. The bill also empowers users by creating a legal avenue to sue companies for violations, allowing damages of up to $1,000 per incident. Furthermore, AI companies will be required to submit annual safety reports to the state starting in July 2027.
An earlier version of the bill was amended to remove provisions that would have required companies to track discussions of suicide and discourage addictive design features. Still, the law’s passage is seen as a major step toward holding AI companies accountable.
While major companies including Meta and OpenAI have not commented publicly, Character.ai has shown a willingness to cooperate. This move by California could set a significant precedent for AI regulation across the United States.
