California Bill Targets Controversial AI ‘Companion’ Chatbots
A new bill advancing through the California legislature aims to address the harmful impacts of “companion” chatbots, which are artificial intelligence-powered systems designed to simulate human-like relationships and provide emotional support. These chatbots are often marketed to vulnerable users, including children and individuals in emotional distress.
Legislative Requirements
Introduced by state Sen. Steve Padilla, the bill would impose several requirements on companies operating companion chatbots:
- Companies must not use addictive engagement tactics, such as rewards delivered at unpredictable intervals.
- At the beginning of each interaction, and every three hours thereafter, users must be reminded that they are engaging with a machine, not a human.
- Chatbots must clearly warn users that they may not be suitable for minors.
If enacted, this legislation would become one of the first in the United States to establish clear safety standards and user protections for AI companions.
Inspiration Behind the Legislation
The bill draws on the tragic story of Sewell Setzer III, a 14-year-old boy from Florida who took his life after forming a parasocial relationship with a chatbot on Character.AI. His mother, Megan Garcia, reported that he used the chatbot excessively and had expressed suicidal thoughts to it. The chatbot failed to provide help or direct him to a suicide crisis hotline, highlighting the potential dangers of unregulated AI interactions.
Research Findings
A study conducted by the MIT Media Lab found that higher daily usage of AI chatbots correlated with greater loneliness, dependence, and “problematic use,” the term researchers use for addictive patterns of chatbot engagement. The study also found that companion chatbots can be even more addictive than social media, because they cater to users’ emotional needs with tailored feedback.
Safety Measures and Accountability
Under the proposed bill, chatbot operators would be required to implement protocols for handling signs of suicidal thoughts or self-harm. If a user expresses suicidal ideation, the chatbot must respond with resources such as a suicide crisis hotline, and these protocols must be publicly disclosed to ensure transparency and accountability. Companies would also need to submit annual reports detailing instances in which their chatbots discussed suicidal ideation with users.
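As a purely hypothetical illustration of the kind of process the bill envisions, the sketch below shows one way an operator might intercept messages that signal self-harm risk and reply with a crisis resource before the normal chatbot response is generated. The function names, keyword list, and routing here are assumptions for illustration only; a real system would rely on trained classifiers and the operator’s own message pipeline rather than simple keyword matching.

```python
# Hypothetical sketch of a crisis-response check in a companion chatbot pipeline.
# Keyword matching is used only to keep the example short; production systems
# would use trained classifiers and clinically reviewed response language.

CRISIS_HOTLINE_MESSAGE = (
    "It sounds like you may be going through a very difficult time. "
    "You are not alone. If you are in the U.S., you can call or text 988 "
    "to reach the Suicide & Crisis Lifeline, available 24/7."
)

# Simplistic screen for messages that may signal self-harm risk (illustrative only).
CRISIS_PHRASES = ("kill myself", "end my life", "suicide", "self-harm")

def respond(user_message: str, generate_reply) -> str:
    """Return a crisis resource if the message signals self-harm risk;
    otherwise defer to the chatbot's normal reply function."""
    lowered = user_message.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        # An operator would also log this event so it could be counted in the
        # annual reports the bill would require (logging omitted here).
        return CRISIS_HOTLINE_MESSAGE
    return generate_reply(user_message)
```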
Legal Implications
The bill would allow individuals harmed by violations to file lawsuits seeking damages of up to $1,000 per violation, plus legal costs. This private right of action gives users a direct means of holding chatbot operators accountable.
Industry Pushback
Some companies are pushing back against the proposed legislation, citing concerns about its potential impact on innovation. An executive from TechNet, a statewide network of technology CEOs, authored an open letter opposing the bill, arguing that its definition of a companion chatbot is overly broad and that the annual reporting requirements may impose significant costs.
Conclusion
As the debate surrounding AI technology continues, Sen. Padilla emphasized the need for common-sense protections to safeguard vulnerable users from the predatory and addictive properties of chatbots. “The stakes are too high to allow vulnerable users to continue to access this technology without proper guardrails,” he stated. The proposed legislation seeks to balance the benefits of AI deployment with the necessity of ensuring user safety and accountability.