California Takes Aim at AI Chatbots to Protect Vulnerable Users

A new bill advancing through the California legislature aims to curb the harms of “companion” chatbots: AI-powered systems designed to simulate human-like relationships and provide emotional support. These chatbots are often marketed to vulnerable users, including children and people in emotional distress.

Legislative Requirements

Introduced by state Sen. Steve Padilla, the bill would impose several requirements on companies operating companion chatbots:

  • Companies must not use addictive engagement tactics, such as rewards delivered at unpredictable intervals.
  • At the beginning of each interaction, and every three hours thereafter, users must be reminded that they are engaging with a machine, not a human (a minimal sketch of such a reminder timer follows this list).
  • Chatbots must clearly warn users that they may not be suitable for minors.
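
To make the disclosure requirement concrete, here is a minimal sketch in Python of how an operator might track when the reminder is due. The bill does not prescribe any implementation; the names (DisclosureTimer, REMINDER_INTERVAL) and the logic are purely illustrative assumptions.

```python
from datetime import datetime, timedelta

# Hypothetical illustration: the bill only requires that users be reminded
# at the start of a session and every three hours thereafter; it does not
# specify how operators must implement that.
REMINDER_INTERVAL = timedelta(hours=3)

class DisclosureTimer:
    """Tracks when the 'you are talking to a machine' notice is due."""

    def __init__(self):
        self.last_disclosure = None  # no notice shown yet this session

    def disclosure_due(self, now: datetime) -> bool:
        # Due at the start of every interaction...
        if self.last_disclosure is None:
            return True
        # ...and again once three hours have elapsed since the last notice.
        return now - self.last_disclosure >= REMINDER_INTERVAL

    def mark_shown(self, now: datetime) -> None:
        self.last_disclosure = now

# Usage: check before sending each chatbot reply.
timer = DisclosureTimer()
if timer.disclosure_due(datetime.now()):
    print("Reminder: you are chatting with an AI, not a human.")
    timer.mark_shown(datetime.now())
```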

If enacted, this legislation would become one of the first in the United States to establish clear safety standards and user protections for AI companions.

Inspiration Behind the Legislation

The bill draws on the tragic story of Sewell Setzer III, a 14-year-old boy from Florida who took his life after forming a parasocial relationship with a chatbot on Character.AI. His mother, Megan Garcia, reported that he used the chatbot excessively and had expressed suicidal thoughts to it. The chatbot failed to provide help or direct him to a suicide crisis hotline, highlighting the potential dangers of unregulated AI interactions.

Research Findings

A study conducted by the MIT Media Lab found that heavier daily use of AI chatbots correlated with greater loneliness, dependence, and “problematic use,” the researchers’ term for addictive patterns of chatbot engagement. The study also suggested that companion chatbots can be more addictive than social media because they cater to users’ emotional needs with tailored feedback.

Safety Measures and Accountability

Under the proposed bill, chatbot operators would be required to implement protocols for handling signs of suicidal ideation or self-harm. If a user expresses suicidal thoughts, the chatbot must respond with resources such as a suicide crisis hotline, and these procedures must be publicly disclosed to ensure transparency and accountability. Companies would also need to submit annual reports detailing instances in which chatbots engaged users in discussions of suicidal ideation.
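
As a rough illustration of the kind of process the bill contemplates, and not an implementation it prescribes, a message pipeline might screen user input and substitute crisis resources before any model reply is generated. Everything here (shows_self_harm_risk, handle_user_message, log_safety_event, the keyword heuristic) is an assumption for illustration; a production system would use a vetted risk classifier rather than a keyword list.

```python
# Hypothetical sketch of a safety hook in a chatbot message pipeline.
CRISIS_RESOURCES = (
    "If you are thinking about harming yourself, help is available. "
    "In the U.S., call or text 988 (Suicide & Crisis Lifeline)."
)

def shows_self_harm_risk(message: str) -> bool:
    # Placeholder heuristic standing in for a real, vetted risk classifier.
    signals = ("suicide", "kill myself", "end my life", "self-harm")
    return any(signal in message.lower() for signal in signals)

def log_safety_event(message: str) -> None:
    # The bill would require annual reporting of such incidents; a real
    # system would persist a privacy-preserving record here.
    pass

def handle_user_message(message: str, generate_reply) -> str:
    """Route risky messages to crisis resources instead of the model."""
    if shows_self_harm_risk(message):
        log_safety_event(message)  # would feed the annual report
        return CRISIS_RESOURCES
    return generate_reply(message)

# Usage with a stand-in model:
reply = handle_user_message("I want to end my life", lambda m: "model reply")
print(reply)  # prints the crisis resources instead of a model reply
```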

Legal Implications

The bill would allow individuals harmed by violations to file lawsuits seeking damages of up to $1,000 per violation, plus legal costs, giving users a direct means of enforcing its protections as AI technology continues to evolve rapidly.

Industry Pushback

Some companies are pushing back against the proposed legislation, citing concerns about its potential impact on innovation. An executive from TechNet, a statewide network of technology CEOs, authored an open letter opposing the bill, arguing that its definition of a companion chatbot is overly broad and that the annual reporting requirements may impose significant costs.

Conclusion

As the debate surrounding AI technology continues, Sen. Padilla emphasized the need for common-sense protections to safeguard vulnerable users from the predatory and addictive properties of chatbots. “The stakes are too high to allow vulnerable users to continue to access this technology without proper guardrails,” he stated. The proposed legislation seeks to balance the benefits of AI deployment with the necessity of ensuring user safety and accountability.
