California’s Groundbreaking Companion Chatbot Regulation Takes Effect

Key Takeaways

  • A first-of-its-kind statute went into effect in California at the start of the year, imposing operational and reporting requirements related to companion chatbots.
  • The law applies only to “companion chatbots” and excludes many customer service and business operations tools, video game chatbots, and voice-activated assistants.
  • Operators of covered companion chatbots must implement user disclosures, suicide and self-harm safety protocols, and, when they know a user is a minor, additional safeguards.
  • Beginning in 2027, operators of covered companion chatbots must submit annual reports describing crisis referrals and suicide- and self-harm-related safety protocols.

Introduction

Over recent years, states have increasingly experimented with regulating how chatbots and other AI-driven conversational tools are used in consumer-facing contexts. Early efforts focused largely on transparency, requiring businesses to disclose when users were interacting with automated rather than human agents.

California Companion Chatbot Law

The California Companion Chatbot Law, or California Senate Bill 243, went into effect on January 1, 2026. This law reflects a new phase of regulation: in addition to disclosure requirements, it imposes safety, governance, and reporting obligations. This new direction responds to concerns about how certain chatbots influence user behavior, emotional well-being, and decision-making over time, especially regarding minors.

What Types of Chatbots Are Covered?

The Companion Chatbot Law does not apply to all chatbots or conversational AI tools. Instead, it regulates a narrower category referred to as “companion chatbots”: chatbots that provide adaptive, human-like responses and are designed to sustain engagement that meets a user’s social or emotional needs. Chatbots that do not meet these criteria—either because they do not maintain a relationship across multiple interactions or because they are not capable of eliciting emotional or social engagement—fall outside the law’s definition and are not subject to its requirements.

The law explicitly excludes:

  • Customer service chatbots: Used solely for customer service, business operations, or technical assistance.
  • Video game chatbots: Those operating within video games, provided their responses are limited to the game itself and do not discuss mental health or self-harm.
  • Voice-activated assistants: Devices that do not maintain a relationship across interactions or generate outputs likely to elicit emotional responses.
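The definition and exclusions above can be thought of as a two-step test: first rule out the excluded categories, then check the definitional criteria. The sketch below illustrates that logic; the field names are assumptions invented for illustration, and an actual coverage determination requires legal analysis of the statutory text, not a boolean checklist.

```python
from dataclasses import dataclass

@dataclass
class Chatbot:
    """Illustrative attributes only; these are not terms from the statute."""
    adaptive_humanlike_responses: bool
    meets_social_or_emotional_needs: bool
    sustains_relationship_across_sessions: bool
    customer_service_only: bool = False        # customer service / business ops / tech support
    in_game_only: bool = False                 # responses limited to the game itself
    standalone_voice_assistant: bool = False   # no ongoing relationship or emotional engagement

def is_covered(bot: Chatbot) -> bool:
    """Rough sketch of the coverage test: exclusions first, then the definition."""
    if bot.customer_service_only or bot.in_game_only or bot.standalone_voice_assistant:
        return False
    return (bot.adaptive_humanlike_responses
            and bot.meets_social_or_emotional_needs
            and bot.sustains_relationship_across_sessions)
```

A companionship app that remembers users across sessions would satisfy all three definitional prongs, while the same underlying model deployed solely for customer support would fall under the first exclusion.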

Core Operational Requirements

The Companion Chatbot Law imposes operational obligations on operators of covered companion chatbots. Some apply to all users, while others apply only when the operator knows the user is a minor.

  • Required disclosure: Operators must clearly notify users that they are interacting with an AI system if the user could be misled into thinking they are interacting with a human.
  • Required safety protocols: Operators must maintain protocols to prevent the chatbot from producing content related to suicidal ideation, suicide, or self-harm, and to refer users who express such ideation to crisis service providers.
  • Minor suitability disclosure: Operators must disclose that the companion chatbot may not be suitable for some minors.
  • Additional requirements for minors: When a user is identified as a minor, operators must provide notifications reminding them of the AI’s nature and encourage breaks every three hours.
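The tiered obligations above—baseline disclosures for all users, plus extra safeguards once an operator knows a user is a minor—can be sketched as a simple gating function. This is a hypothetical helper for illustration; the notice names and parameters are assumptions, not language from the statute.

```python
BREAK_INTERVAL_HOURS = 3  # break-reminder cadence the law sets for known minors

def required_notices(is_known_minor: bool, hours_elapsed: float,
                     already_sent: set) -> list:
    """Return the notices an operator would need to surface at this point
    in a session, per the tiered requirements described above."""
    notices = []
    # All users: a clear disclosure that the chatbot is an AI system,
    # where a reasonable user could otherwise believe it is human.
    if "ai_disclosure" not in already_sent:
        notices.append("ai_disclosure")
    if is_known_minor:
        # Known minors: suitability disclosure plus periodic break reminders.
        if "minor_suitability" not in already_sent:
            notices.append("minor_suitability")
        if hours_elapsed >= BREAK_INTERVAL_HOURS:
            notices.append("break_reminder")  # re-issued at each interval
    return notices
```

In practice an operator would also log when each notice was shown, since the private right of action (discussed below under enforcement) makes demonstrable compliance valuable.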

Reporting and Transparency Obligations

Beginning July 1, 2027, operators must annually report to California’s Office of Suicide Prevention regarding:

  • The number of crisis service referrals made in the previous year.
  • Protocols to detect and respond to instances of suicidal ideation by users.
  • Protocols to prohibit responses related to suicidal ideation or actions.

This information will be published by the Office of Suicide Prevention on its website.
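Operators preparing for the 2027 deadline will need to track these three items year-round. The sketch below shows one plausible shape for that record; the field names are illustrative assumptions, not an official reporting schema.

```python
from dataclasses import dataclass, asdict, field

@dataclass
class AnnualSafetyReport:
    """Hypothetical structure for the data an operator would assemble for
    California's Office of Suicide Prevention; names are illustrative only."""
    reporting_year: int
    crisis_referral_count: int  # referrals to crisis service providers in the prior year
    detection_protocols: list = field(default_factory=list)    # detecting/responding to suicidal ideation
    prohibition_protocols: list = field(default_factory=list)  # blocking ideation-related responses

# Example record an operator might maintain internally (values are invented).
report = AnnualSafetyReport(
    reporting_year=2027,
    crisis_referral_count=42,
    detection_protocols=["classifier screening of user messages for self-harm signals"],
    prohibition_protocols=["refusal policy for content describing self-harm methods"],
)
```

Because the Office of Suicide Prevention will publish this information, operators should assume the reported figures and protocol descriptions will be publicly visible.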

Enforcement and Liability Considerations

The Companion Chatbot Law creates a private right of action: users injured by a violation may bring a civil action against the operator, seeking injunctive relief, damages (the greater of actual damages or $1,000 per violation), and reasonable attorney’s fees and costs.

Looking Ahead

Companies deploying chatbots should assess their tools’ functionality to determine whether the Companion Chatbot Law applies and implement any necessary operational changes. As a first-of-its-kind statute, the law also signals likely continued experimentation by states exploring AI regulation.

Given renewed federal interest in AI governance, organizations should also prepare for potential federal rules that could preempt state laws. Companies should monitor both state and federal developments to remain compliant with evolving regulations.
