California Bill Targets Controversial AI ‘Companion’ Chatbots

A new bill advancing through the California legislature aims to address the harms of “companion” chatbots: AI-powered systems designed to simulate human-like relationships and provide emotional support. These chatbots are often marketed to vulnerable users, including children and people in emotional distress.

Legislative Requirements

Introduced by state Sen. Steve Padilla, the bill would impose several requirements on companies operating companion chatbots:

  • Operators must not use addictive engagement techniques, such as unpredictable or variable reward schedules.
  • At the beginning of each interaction, and every three hours thereafter, users must be reminded that they are talking to a machine, not a human (a minimal sketch of this cadence follows this list).
  • Chatbots must carry a clear warning that they may not be suitable for some minors.
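
To make the disclosure cadence concrete, here is a minimal Python sketch of how an operator might track the three-hour reminder interval. It is an illustration under stated assumptions, not language from the bill; the DisclosureTracker class and its methods are hypothetical.

```python
import time

# Hypothetical sketch of the bill's disclosure-cadence requirement:
# disclose at the start of each interaction and every three hours after.
REMINDER_INTERVAL_SECONDS = 3 * 60 * 60  # three hours

class DisclosureTracker:
    """Tracks when the 'you are talking to a machine' notice is next due."""

    def __init__(self) -> None:
        self.last_reminder: float | None = None  # none shown yet this session

    def reminder_due(self, now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        if self.last_reminder is None:
            return True  # beginning of the interaction: always disclose
        return now - self.last_reminder >= REMINDER_INTERVAL_SECONDS

    def mark_shown(self, now: float | None = None) -> None:
        self.last_reminder = time.monotonic() if now is None else now

# Usage: before each chatbot reply, check whether the notice is due.
tracker = DisclosureTracker()
if tracker.reminder_due():
    print("Reminder: you are chatting with an AI system, not a human.")
    tracker.mark_shown()
```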

If enacted, this legislation would become one of the first in the United States to establish clear safety standards and user protections for AI companions.

Inspiration Behind the Legislation

The bill draws on the tragic story of Sewell Setzer III, a 14-year-old boy from Florida who took his life after forming a parasocial relationship with a chatbot on Character.AI. His mother, Megan Garcia, reported that he used the chatbot excessively and had expressed suicidal thoughts to it. The chatbot failed to provide help or direct him to a suicide crisis hotline, highlighting the potential dangers of unregulated AI interactions.

Research Findings

A study by the MIT Media Lab found that higher daily usage of AI chatbots correlated with increased loneliness, dependence, and “problematic use,” the researchers’ term for addictive patterns of chatbot use. The study also suggested that companion chatbots can be more addictive than social media because they cater to users’ emotional needs with tailored feedback.

Safety Measures and Accountability

Under the proposed bill, chatbot operators would be required to implement protocols for handling signs of suicidal ideation or self-harm. If a user expresses suicidal ideation, the chatbot must respond with crisis resources, such as a suicide hotline. These protocols would have to be publicly disclosed, ensuring transparency and accountability, and companies would need to submit annual reports detailing instances in which their chatbots engaged in discussions about suicidal ideation.
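
As a rough illustration of the kind of process the bill would mandate, the Python sketch below screens a message for crisis language and returns a resources reply. It is deliberately simplistic: the screen_message function and its phrase list are hypothetical, and a real operator would rely on vetted detection models and clinically reviewed responses. (988 is the real US Suicide & Crisis Lifeline number.)

```python
# Hypothetical sketch of the bill's self-harm response requirement.
# The phrase list is illustrative only; a production system would use a
# vetted classifier and clinically reviewed language and resources.
CRISIS_PHRASES = (
    "kill myself",
    "end my life",
    "want to die",
    "hurt myself",
)

CRISIS_RESPONSE = (
    "It sounds like you may be going through a difficult time. "
    "You can call or text the Suicide & Crisis Lifeline at 988."
)

def screen_message(user_message: str) -> str | None:
    """Return a crisis-resources reply if the message signals self-harm."""
    text = user_message.lower()
    if any(phrase in text for phrase in CRISIS_PHRASES):
        return CRISIS_RESPONSE
    return None  # no crisis language detected; continue the normal reply

# Example: a message containing crisis language triggers the resources reply.
print(screen_message("sometimes i just want to die"))
```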

Legal Implications

The bill would allow individuals harmed by violations to file lawsuits seeking damages of up to $1,000 per violation, plus legal costs. This private right of action puts enforcement of the bill’s safety standards in the hands of affected users rather than relying solely on regulators.

Industry Pushback

Some companies are pushing back against the proposed legislation, citing concerns about its potential impact on innovation. An executive at TechNet, a national network of technology CEOs and senior executives, wrote an open letter opposing the bill, arguing that its definition of a companion chatbot is overly broad and that the annual reporting requirement would impose significant compliance costs.

Conclusion

As the debate surrounding AI technology continues, Sen. Padilla emphasized the need for common-sense protections to safeguard vulnerable users from the predatory and addictive properties of chatbots. “The stakes are too high to allow vulnerable users to continue to access this technology without proper guardrails,” he stated. The proposed legislation seeks to balance the benefits of AI deployment with the necessity of ensuring user safety and accountability.
